• Vanishing Point . [Le Dot]

    From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Oct 5 00:51:50 2025
    From Newsgroup: comp.lang.prolog


    In a Tibetan lamasery, the monks seek to
    list all of the names of God. They believe
    the Universe was created for this purpose,
    and that once this naming is completed, God
    will bring the Universe to an end.

    Three centuries ago, the monks created an
    alphabet in which they calculated they could
    encode all the possible names of God, numbering
    about 9,000,000,000 ("nine billion") and
    each having no more than nine characters.

    Writing the names out by hand, as they had
    been doing, even after eliminating various
    nonsense combinations, would take another
    15,000 years; the monks wish to use modern
    technology to finish this task in 100 days.

    They rent a computer capable of printing all
    the possible permutations, and hire two
    Westerners to install and program the machine.
    The computer operators are skeptical but
    play along.

    After three months, as the job nears completion,
    they fear that the monks will blame the computer
    (and, by extension, its operators) when nothing
    happens. The Westerners therefore leave slightly
    earlier than their scheduled departure, without
    warning the monks, so that the machine will
    complete its final print run shortly after
    they leave.

    On their way to the airfield they pause on the
    mountain path. Under a clear night sky they
    estimate that it must be just about the time
    that the monks are pasting the final printed
    names into their holy books. Then they notice
    that "overhead, without any fuss, the stars
    were going out."

    https://en.wikipedia.org/wiki/The_Nine_Billion_Names_of_God
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Oct 5 01:03:53 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Interestingly, there is now also a top ten
    for AI data centers, not only for supercomputers.
    We are talking about newly built AI data centers
    that for the first time go into gigawatts:

    The New World’s Largest AI Supercluster
    https://www.youtube.com/watch?v=RxuSvyOwVCI

    Some confirmed picks (system, chip count, owner, country, status):

    xAI Colossus Memphis Phase 1
        100000 chips, xAI, U.S., Confirmed
    Tesla Cortex Phase 1
        50000 chips, Tesla, U.S., Confirmed
    El Capitan Phase 2, Lawrence Livermore NL
        44143 chips, U.S. Department of Energy, U.S., Confirmed
    Anonymized Chinese System
        30000 chips, N/A, China, Confirmed
    Meta GenAI 2024a
        24576 chips, Meta AI, U.S., Confirmed
    Meta GenAI 2024b
        24576 chips, Meta AI, U.S., Confirmed
    Jupiter, Jülich
        23536 chips, EuroHPC JU / Jülich Supercomputing Center, Germany, Confirmed
    Anonymized Chinese System
        20000 chips, N/A, China, Confirmed
    Anonymized Chinese System
        20000 chips, N/A, China, Confirmed

    https://www.visualcapitalist.com/the-worlds-most-powerful-ai-supercomputers/

    Bye

    Mild Shock schrieb:

    [...]

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Oct 5 01:18:12 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    So the Tibetan lamasery had it right, the
    lights go out. There is a never-ending hunger
    for crunch, even requiring the USA to go into
    domestic chip production:

    America’s Most Advanced Chip Factory Yet
    https://www.youtube.com/watch?v=1VX3jNJmbcI

    But chip production etc. (embodied energy) is
    only a fraction of the energy consumption:

    Operational energy (running the data center
    for 1 year at 1 GW) = 8,760 GWh = 8.76 TWh.
    Hardware manufacturing energy = 0.75 TWh.
    So the energy to produce the hardware is
    roughly 0.086 (or about 8.6%) of the annual
    operational energy.
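
    A quick sanity check of that arithmetic in Prolog
    (the 0.75 TWh embodied-energy figure is just the
    rough estimate from above, not a measured value):

    % Back-of-the-envelope: embodied vs. operational energy.
    % 1 GW running continuously: 8760 h * 1 GW = 8760 GWh = 8.76 TWh.
    embodied_share(ManufacturingTWh, Share) :-
        OperationalTWh is 1 * 8760 / 1000,
        Share is ManufacturingTWh / OperationalTWh.

    % ?- embodied_share(0.75, Share).
    % Share = 0.0856...   (about 8.6%)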

    What are the projections? By 2030, global
    power demand for data centers is projected to
    reach approximately 220 GW, underscoring the
    urgency and strategic importance of securing
    power and hardware capacity:

    $100B Bet on 10GW AI Infrastructure
    https://51ai.substack.com/p/openai-nvidia-100b-bet-on-10gw-ai

    Bye

    Mild Shock schrieb:
    [...]


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Oct 5 10:32:41 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    A funny video that itself uses AI-generated
    content to decry AI-generated content. We
    live in a paradoxical world:

    "While we are still thinking about the
    possible social effects of artificial
    intelligence, the digital knowledge space
    is already drowning into synthetic trash.

    How could it get that far? How could the
    network, which had not been long ago as a
    place of free knowledge and the open exchange
    of information and entertainment, became
    mechanically manufactured nonsensively
    in record speed?"
    KI: Der Tod des Internets - ARTE
    https://www.youtube.com/watch?v=cGmVehWBdHI

    Word of the year 2025:

    "AI slop has been variously defined as "digital
    clutter", "filler content [prioritizing] speed
    and quantity over substance and quality",[6]
    and "shoddy or unwanted AI content in social
    media, art, books and [...] search results".
    AI slop
    https://en.wikipedia.org/wiki/AI_slop

    Poor Rossy Boy, now he is completely jobless.
    In the past he was the sole king of Human Slop.
    Now AI Slop, thanks to large-scale remixing,
    stable diffusion and who knows what, is the
    new spam king.

    Bye

    Mild Shock schrieb:
    [...]



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Oct 5 19:40:11 2025
    From Newsgroup: comp.lang.prolog


    I dunno. I was trying to find another story,
    about a scientist who studies the mind, and
    then slowly goes crazy when he discovers how
    the mind works. But what I posted is a
    **Plot Summary** of a short story:

    "The Nine Billion Names of God" is a
    1953 science fiction short story by
    British writer Arthur C. Clarke.
    https://en.wikipedia.org/wiki/The_Nine_Billion_Names_of_God

    Maybe he used some existing Asian lore,
    I don't know. Who is an expert in this matter?

    WM schrieb:
    On 05.10.2025 00:52, Mild Shock wrote:

    In a Tibetan lamasery, the monks seek to
    list all of the names of God. They believe
    the Universe was created for this purpose,
    and that once this naming is completed, God
    will bring the Universe to an end. Three

    A very old story. I read it as a child.

    Regards, WM

    Mild Shock schrieb:

    [...]

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Oct 7 19:45:30 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Try this server:

    https://mathstodon.xyz/about?lang=en

    It says:

    Mathstodon.xyz is a Mastodon instance for
    people who love maths. This instance is for
    people who love maths and we hope there’ll be
    lots of maths chat, but any topic of conversation
    following the code of conduct and the principle
    of getting along together is OK.

    It also says:

    Administered by:
    Christian Lawson-Perfect
    @christianp

    Bye

    P.S.: Sorry Julio Di Egidio, might push your Nazi
    Retard buttons again. Mastodon was created and is
    led by Eugen Rochko, a German software developer.

    Ross Finlayson schrieb:
    On 03/12/2024 10:08 AM, Ross Finlayson wrote:
    On 03/07/2024 08:09 AM, Ross Finlayson wrote:
    On 02/29/2024 07:55 PM, Ross Finlayson wrote:
    On 02/20/2024 07:47 PM, Ross Finlayson wrote:
    About a "dedicated little OS" to run a "dedicated little service".
    [... tons of gibberish ...]
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Oct 7 19:55:04 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Some statistics (2019, pre-Corona):

    Number of instances by country:
    - Japan hosts ~25.5% of all Mastodon instances.
    - The United States hosts ~21.4% of instances.
    - France, Germany etc. also have significant shares,
      but less than those top two.

    (Tweets are called toots on Mastodon.)

    Users / content concentration (“where the users / toots are”):
    - Even though Japan has ~25.5% of instances,
      it accounts for about 41% of users.
    - Many “toots” (posts) are concentrated in a few countries.
      For example, the top three countries host a majority of toots.

    See also:

    “Challenges in the Decentralised Web: The Mastodon Case.”
    Raman et al. (2019),
    https://www.researchgate.net/publication/336661890

    Bye

    Mild Shock schrieb:
    [...]

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Nov 11 21:15:21 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Now that was a little unexpected: AMD beats Intel here.
    I could get my hands on two AI laptops for testing.
    They both have LPDDR5X RAM.

    AI Laptop 1: Acer_Swift_SFG16-61
    AMD Ryzen 7 350
    Radeon Graphics
    AMD Ryzen AI Boost, up to 50 TOPS

    AI Laptop 2: LENOVO_83JQ
    Intel Core Ultra 7 258V
    Intel Arc Graphics 140V
    Intel AI Boost, up to 47 TOPS

    I couldn't test the AI Boost yet. Besides the
    nice CPUs, the iGPU and the fast memory, in this
    case 32 GB, seem to give quite some bang.

    Here is a test with LM Studio: I picked the
    model Qwen3 Coder 30B with Q4_K_M quantization,
    which is around 16 GB and fits into my GPU memory:
    https://lmstudio.ai/models/qwen/qwen3-coder-30b

    AI Laptop 1: 18 Tokens/s

    AI Laptop 2: 6 Tokens/s
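
    To put the two rates into perspective, a tiny
    Prolog helper (the 500-token answer length is
    just an assumed example, not a measurement):

    % Seconds needed to emit Tokens tokens at Rate tokens/s.
    generation_seconds(Tokens, Rate, Seconds) :-
        Seconds is Tokens / Rate.

    % ?- generation_seconds(500, 18, S).   % AI Laptop 1: ~28 s
    % ?- generation_seconds(500, 6, S).    % AI Laptop 2: ~83 s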

    Now I am waiting for drivers and applications
    that make use of the AI Boost. That is to say,
    I only tested the iGPU, not yet the NPU.

    For the download of the AI model you should
    have fiber and Wi-Fi 6; you can then finish it
    in less than a minute, even if you are far away
    from the router.

    Bye

    P.S.: The 18 Tokens/s are quite fluid, the
    6 Tokens/s are a little annoying. But on both
    machines I could converse with the model and
    go from primes to twin primes. It didn't use
    any RAG; everything was fully produced via
    local AI, with no remote AI access after the
    download of the AI model:

    ----------------- cut here ---------------------

    Here's a Prolog implementation to find
    twin prime pairs (pairs of primes that differ by 2):

    % Generate twin prime pairs up to N
    twin_primes_up_to(N, TwinPairs) :-
        findall((P, Q),
                (between(3, N, P),
                 is_prime(P),
                 Q is P + 2,
                 is_prime(Q)),
                TwinPairs).

    % Check if a number is prime
    is_prime(2) :- !.
    is_prime(3) :- !.
    is_prime(N) :-
        N > 3,
        N mod 2 =\= 0,          % Even numbers > 2 are not prime
        \+ has_factor(N, 3).    % Check if N has any factors starting from 3

    % Check if N has a factor starting from F
    has_factor(N, F) :-
        F * F =< N,
        (N mod F =:= 0 ; has_factor(N, F + 2)).

    Example usage:

    ?- twin_primes_up_to(20, Pairs).
    Pairs = [(3, 5), (5, 7), (11, 13), (17, 19)].

    ?- twin_primes_up_to(50, Pairs).
    Pairs = [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43)].

    This finds pairs of primes (P, Q) where Q = P + 2,
    which are called twin primes. The findall/3 predicate
    collects all such pairs where both P and P+2 are
    prime numbers and P is less than or equal to N.

    ----------------- cut here ---------------------

    Mild Shock schrieb:
    [...]



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Nov 11 23:28:26 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    There is even more joy in running an AI model
    on your AI laptop. All you need to put "AI Slop"
    into the wild is Stable Diffusion software and models.

    Cute Racoon generated with Local AI
    https://www.youtube.com/shorts/QlS0MqMh_bM

    Have Fun!

    Bye

    P.S.: Was using this gadget:

    Create AI art, with the latest Stable
    Diffusion models, on your own PC.
    https://www.amuse-ai.com/

    Mild Shock schrieb:
    [...]
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Nov 11 23:36:24 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Come on old boy, why not try Local AI?
    Elaborate your paleo stone-throwing theory
    with a Local AI (an AI laptop and AI models).

    Actually I think they should train an AI
    model on Archimedes Plutonium, and then
    use it as a USENET AI Slop generator.

    An AI Slop generator that would be fed
    the newest news about Trump, Putin,
    Math, etc. in its context window,
    instructed to give AP-style reflections.

    LoL

    Bye

    Ross Finlayson schrieb:
    The status of excluded middle vs. inverse
    The "sufficient and thorough reason" vs. traditional PSR
    The "paleo-classical post-modern" character
    The synthesis of traditions (Pythagorean, Cantorian, etc.)
    The de-fragmentation claim
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Nov 13 17:51:48 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    If you take identity as e-mail address, which you
    should see with a suitable news reader: I don't
    use randomly generated e-mail addresses, or invalid
    e-mail addresses. It's always the same e-mail address.
    At the time of Google+ it was maybe a different e-mail
    address to access Google+. So it's always the same
    "identity". What changes is my nickname sometimes.
    But not from post to post; I am not a nickname
    shapeshifter, I only change it every decade or so.
    So you can take your false accusations and shove
    them up your ass.

    Fucking paranoid moron you are.

    Bye

    Ross Finlayson schrieb:
    Then, since for example there's nothing funny about it,
    I'm not interested in your plan of mal-journalism.
    Furthermore all your sockpuppets are attached to
    your identity.

    Mild Shock schrieb:
    [...]

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Nov 13 17:54:22 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    There are different ways to make an AI persona.
    For example, Twitter aka X has a Grok account.
    But Grok is not trimmed towards a particular
    profile; at most it has been speculated to be
    an alter ego of Elon Musk. What could be
    interesting would be to go towards character.ai:

    "Character.ai was established in November 2021.
    The company's co-founders, Noam Shazeer and
    Daniel de Freitas, were both engineers from
    Google. They both worked on AI-related projects:
    Shazeer was a lead author on a paper that
    Business Insider reported in April 2023 "has
    been widely cited as key to today's chatbots",
    and Freitas was the lead designer of an
    experimental AI at Google initially called Meena,
    which later became known as LaMDA."
    https://en.wikipedia.org/wiki/Character.ai

    Maybe this can even be done with Local AI, for
    example by populating a context, or by creating
    a kind of LangChain memory. ChatGPT calls it
    persistent context.
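
    A minimal sketch of such a "persistent context"
    in plain Prolog: remembered facts live in the
    dynamic database and are prepended to every
    prompt. All names here are made up for
    illustration, not an existing API.

    :- dynamic memory/1.

    % Store a fact about the persona in its persistent memory.
    remember(Fact) :-
        assertz(memory(Fact)).

    % Build a prompt from the accumulated memory plus a question.
    persona_prompt(Question, Prompt) :-
        findall(F, memory(F), Facts),
        atomic_list_concat(Facts, '\n', Context),
        format(atom(Prompt),
               "You are a USENET persona.~nContext:~n~w~nQuestion: ~w",
               [Context, Question]).

    % ?- remember("Prefers Prolog over Python."),
    %    remember("Skeptical about AGI timelines."),
    %    persona_prompt("What do you think of Grok 5?", P).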

    So yes, you could "spam" an AI Archimedes Plutonium,
    probably from your desktop with an AI laptop, maybe
    already in 2025, or later in 2026.

    Bye

    Mild Shock schrieb:
    Hi,

    If you take identity as E-mail address, which you
    should see with a suitable news reader. I don't
    use randomly generated E-mail adresses, or invalid

    E-mail address. Its always the same E-mail adress.
    At the time of Google+ it was maybe a different E-mail
    address to access Google+. So its always the same

    "identity". What changes is my nickname sometimes.
    But not from post to post, I am not a nickname shapeshifter,
    I only change it every decade or so. So you can take

    your false accusations, and shove it up your ass.

    Fucking paranoid moron you are.

    Bye

    Ross Finlayson schrieb:
    Then, since for example there's nothing funny about it,
    I'm not interested in your plan of mal-journalism.
    Furthermore all your sockpuppets are attached to
    your identity.

    Mild Shock schrieb:
    Hi,

    Come on old boy, why not try Local AI?
    Elaborate your paleo stone throwing theory
    with a Local AI (An AI Laptop and AI Models).

    Actually I think they should train an AI
    model on Archimedes Plutonium. And then
    use it as a USENET AI Slob geberator.

    An AI Slob generator that will be fed
    with the newest news about Trump, Putin,
    Math, etc.. in its conext window,

    Instructed do give AP style reflections.

    LoL

    Bye

    Ross Finlayson schrieb:
    The status of excluded middle vs. inverse
    The "sufficient and thorough reason" vs. traditional PSR
    The "paleo-classical post-modern" character
    The synthesis of traditions (Pythagorean, Cantorian, etc.)
    The de-fragmentation claim


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Nov 13 18:01:28 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    With RAG it could react to news. RAG (Retrieval-
    Augmented Generation) and other techniques can
    make a local AI evolve dynamically, both in
    knowledge and personality. Let's say I put such
    an AI persona into a newsgroup loop. I could feed
    the posts there back to it, to make it evolve as
    well. One can collect community reactions (upvotes,
    replies, sentiment), or apply automatic evaluation,
    e.g. sentiment analysis or an engagement score.
    This becomes your fitness function; a toy sketch
    follows below.
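
    A toy fitness function along those lines, in
    Prolog. The weights and the post/4 facts are
    made-up placeholders, purely for illustration:

    % post(Id, Upvotes, Replies, Sentiment), Sentiment in -1.0 .. 1.0.
    post(p1, 12, 4,  0.6).
    post(p2,  2, 9, -0.3).

    % Weighted engagement score used as the fitness of a post.
    fitness(Id, Fitness) :-
        post(Id, Upvotes, Replies, Sentiment),
        Fitness is 1.0*Upvotes + 0.5*Replies + 5.0*Sentiment.

    % Keep the persona output that scored best so far.
    best_post(BestId, BestFitness) :-
        findall(F-Id, fitness(Id, F), Pairs),
        max_member(BestFitness-BestId, Pairs).

    % ?- best_post(Id, F).
    % Id = p1, F = 17.0.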

    You can even run multiple personas in the same
    loop, with different viewpoints (e.g., “optimist”,
    “skeptic”, “data scientist”) that interact and
    debate posts.

    This is ideal for dead open source projects, such
    as SWI-Prolog, to fake user participation, which is
    very important in our new marketing world that is
    totally engagement-focused. The quality(*) of
    content doesn't count.

    LoL

    Bye

    (*) Same for research paper mills. BTW, ChatGPT
    suggested the following tech stack to me (a minimal
    wiring sketch in Prolog follows below):

    - LLM: Ollama (LLaMA 3, Mistral, Gemma, etc.)
    - LangChain / LlamaIndex: context + memory
    - ChromaDB / SQLite: memory store
    - Feedparser / API client: ingest news or group posts
    - Transformers / VADER: sentiment scoring
    - Simple scheduler or cron job: to run the loop daily
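
    Since this is comp.lang.prolog, here is a minimal
    sketch of driving a local Ollama model from
    SWI-Prolog instead of the Python stack above. It
    assumes Ollama's default REST endpoint on
    localhost:11434 and an already-pulled model, and
    it omits all error handling:

    :- use_module(library(http/http_open)).
    :- use_module(library(http/http_json)).
    :- use_module(library(http/json)).

    % Ask a local Ollama server for a single, non-streamed completion.
    ollama_generate(Model, Prompt, Completion) :-
        Request = json([model=Model, prompt=Prompt, stream = @(false)]),
        http_open('http://localhost:11434/api/generate', In,
                  [ post(Request) ]),
        call_cleanup(json_read_dict(In, Reply), close(In)),
        get_dict(response, Reply, Completion).

    % ?- ollama_generate(llama3, "Summarise today's thread in one toot.", Out).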

    Mild Shock schrieb:

    [...]


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Nov 13 18:08:54 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    My favorite AI personas would be:

    - Boris the Loris
    Agent provocateur that keeps the quality
    of posts low, only dumb open source consumers
    allowed, no Fuzzy Logic, etc..

    - Nazi-Retard Julio
    Agent provocateur that pretends to know
    some subject, but doesn't know it at all,
    this is to compensate for the dumb open source
    consumers that populate the group now.

    Bye

    Mild Shock schrieb:
    [...]



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Nov 13 21:53:10 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Now I have the feeling we are minutes away
    from robotic AI. I am wondering why all the
    AI laptops now have GPUs with ray-tracing
    hardware.

    Combining LRMs (Large Reasoning Models) with a
    3D-worlds modality could do the job. One could
    use genetic algorithms to produce synthetic
    training data. This might challenge the
    connotation behind the holy grail of AI,
    called "Embodiment":

    "Der Schweizer Informatiker und Robotik-Experte
    Rolf Pfeifer nimmt in diesem Kontext den
    Standpunkt ein, dass Intelligenz ausschließlich
    verkörperten Agenten, d. h. realen physischen
    Systemen, deren Verhalten in der Interaktion
    mit der Umwelt beobachtbar ist, zugeschrieben
    werden kann.",
    - How the body shapes the way we think. A new
    view of intelligence, Pfeifer & Bongard, 2007

    Bye

    Mild Shock schrieb:

    [...]

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Nov 13 21:58:00 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    There are quite a few signs of LWM (Large World
    Model) AI on the horizon. For example:

    Controllable World Models are HERE
    Explore a playable world model, Marble, from World Labs,
    now available for free. This multimodal AI generates 3D
    environments from various inputs, including text and images.
    Users can navigate, edit, and export these worlds
    in multiple formats.
    https://www.youtube.com/watch?v=-QbM6_BSdS0

    Grok 5 by Elon Musk — AGI Is Closer Than You Think
    Musk's claim that Grok 5 has a "10% and rising" chance
    of achieving AGI demands scrutiny. He's gone further,
    stating "Grok 5 will be AGI or something indistinguishable
    from AGI"—a prediction that would make it the first
    system to achieve human-level general intelligence.
    https://www.youtube.com/watch?v=Afsl4qUsfdw

    Bye

    Mild Shock schrieb:
    [...]


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Nov 13 23:05:04 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    So the "Amuse" App on my AI Laptops is full of Stable
    Diffusion with Control Net. Robots may use Stable
    Diffusion with Control Net. Like in this GitHub project:

    Adding Conditional Control to Text-to-Image Diffusion Models https://github.com/lllyasviel/ControlNet

    The idea is already being used to control robots in new
    research. Like Diffusion Policies (Stanford, 2023–2024),
    Robots use a diffusion model to generate action

    trajectories instead of images. Trajectory Diffusion, QDiffusion,
    SE(3) Diffusion, UniDexGrasp, GenAug, etc... The field is exploding.
    many labs have shown diffusion models beating RL in:

    - grasping
    - manipulation
    - locomotion
    - navigation

    Have Fun!

    Bye

    Mild Shock schrieb:
    [...]



    --- Synchronet 3.21a-Linux NewsLink 1.2