• What is analog computing nowadays? (Re: An old Busy Beaver ASIC (Application-Specific Integrated Circuit)) (Was: Could AlphaEvolve find the sixth busy beaver?)

    From Mild Shock@janburse@fastmail.fm to sci.physics,sci.physics.relativity,comp.lang.prolog on Mon Dec 1 11:25:35 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    1) Classical computing = Boolean logic + von Neumann architecture

    For decades, all mainstream computation was built on:
    Boolean algebra
    Logic gates
    Scalar operations executed sequentially
    Memory and compute as separate blocks
    Even floating-point arithmetic was implemented on top of Boolean logic.

    This shaped how programmers think — algorithms expressed
    as symbolic operations, control flow, and discrete steps.

    2) AI accelerators break from that model

    Modern accelerators — GPUs, TPUs, NPUs, and custom matrix
    engines — use a different computational substrate:

    Instead of Boolean logic:
    → Bulk linear algebra over vectors/tensors

    Instead of instruction-by-instruction control:
    → Dataflow graphs

    Instead of sequential compute on registers:
    → Massively parallel fused-multiply-add units

    Instead of manually orchestrated loops:
    → High-level declarative specs (XLA, MLIR, TVM)
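
For illustration, a minimal sketch (NumPy just standing in for a real
tensor engine; the sizes are made up): the same matrix-vector product,
once in the scalar instruction-by-instruction style, once as a single
bulk linear-algebra operation.

import numpy as np

A = np.random.rand(256, 256).astype(np.float32)
x = np.random.rand(256).astype(np.float32)

# Von Neumann style: scalar operations, explicit control flow.
y_scalar = np.zeros(256, dtype=np.float32)
for i in range(256):
    for j in range(256):
        y_scalar[i] += A[i, j] * x[j]

# Accelerator style: one bulk operation; the loop nest is implicit
# and free to be mapped onto parallel fused-multiply-add units.
y_bulk = A @ x

assert np.allclose(y_scalar, y_bulk, atol=1e-2)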

    Have Fun!

    Bye

    Mild Shock schrieb:
    Hi,

I wonder why the Coq proof should even be
different from anything that AI could produce.
It's not a typical Euclid-style proof in a few steps;

rather, it also uses enumeration, just like the
Flyspeck proof of the Kepler Conjecture. So
let's see what happens next: could AlphaEvolve

find the sixth busy beaver?

    Bye

P.S.: Here is a picture of an old Busy Beaver ASIC
(Application-Specific Integrated Circuit)

    Application    Fun
    Technology    1500
    Manufacturer    VLSI Tech
    Type    Semester Thesis
    Package    DIP64
    Dimensions    3200μm x 3200μm
    Gates    2 kGE
    Voltage    5 V
    Clock    20 MHz

The Busy Beaver Coprocessor has been designed to solve the Busy Beaver Function for 5 states. This function (also known as Rado's Sigma function) is an uncomputable problem from information theory. The input argument is a natural number 'n' that represents the complexity of an algorithm described as a Turing Machine.
http://asic.ethz.ch/cg/1990/Busy_Beaver.html

    Mild Shock schrieb:
    Hi,

    What we thought:

Prediction 5. It will never be proved that
Σ(5) = 4,098 and S(5) = 47,176,870.
-- Allen H. Brady, 1990.

    How it started:

    To investigate AlphaEvolve’s breadth, we applied
    the system to over 50 open problems in mathematical
    analysis, geometry, combinatorics and number theory.
    The system’s flexibility enabled us to set up most
    experiments in a matter of hours. In roughly 75% of
    cases, it rediscovered state-of-the-art solutions, to
    the best of our knowledge.
    https://deepmind.google/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/


How it's going:

We prove that S(5) = 47,176,870 using the Coq proof
    assistant. The Busy Beaver value S(n) is the maximum
    number of steps that an n-state 2-symbol Turing machine
    can perform from the all-zero tape before halting, and
    S was historically introduced by Tibor Radó in 1962 as
    one of the simplest examples of an uncomputable function.
    The proof enumerates 181,385,789 Turing machines with 5
    states and, for each machine, decides whether it halts or
    not. Our result marks the first determination of a new
    Busy Beaver value in over 40 years and the first Busy
    Beaver value ever to be formally verified, attesting to the
effectiveness of massively collaborative online research.
https://arxiv.org/pdf/2509.12337

They claim not to have used much AI. But could,
for example, AlphaEvolve nevertheless do it, more or
less autonomously, and find the sixth busy beaver?

    Bye


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to sci.physics,sci.physics.relativity,comp.lang.prolog on Mon Dec 1 12:01:39 2025
    From Newsgroup: comp.lang.prolog

    Hi,

I am doing the wake-up call until everybody
gets ear-bleeding. It's just too cringe to
see the symbolic computing morons struggle

    with connectionism. But given that humans
    have a brain with neurons, it should be obvious
    that symbolism and connectionism are just two

    sides of the same coin.

    Good Luck!

    Bye

    Mild Shock schrieb:
[...]

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to sci.physics,sci.physics.relativity,comp.lang.prolog on Mon Dec 1 12:07:48 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Quiz: How many neurons are necessary in the
head of a turning machine, to simulate ZFC?

You possibly have to look up some modelling
of the logic of ZFC by Bernays. I don't know the

    details but maybe check out:

The Undecidability of BB(748)
Understanding Gödel's Incompleteness Theorems
Johannes Riebel - March 2023
https://www.ingo-blechschmidt.eu/assets/bachelor-thesis-undecidability-bb748.pdf

    Bye

    Mild Shock schrieb:
[...]



    --- Synchronet 3.21a-Linux NewsLink 1.2
• From Maciej Woźniak@mlwozniak@wp.pl to sci.physics,sci.physics.relativity,comp.lang.prolog on Mon Dec 1 12:09:20 2025
    From Newsgroup: comp.lang.prolog

    On 12/1/2025 11:25 AM, Mild Shock wrote:
[...]
    2) AI accelerators break from that model

    No, they don't, they just add one (or some)
    more layer on top of it.

    On the other hand, neural networks were
    always outside. So were quantum computers.
    It was never the only one and never the
    most powerful one.




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to sci.physics,sci.physics.relativity,comp.lang.prolog on Mon Dec 1 12:15:31 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    You wrote:

    No, they don't, they just add one (or some)
    more layer on top of it.

Technically they are not von Neumann architecture.
Unified memory with multiple tensor cores is
not von Neumann architecture. But the architecture

is possibly toned down by dataflow, so that
in principle one can run the same thing on a
von Neumann architecture.

But in principle the architecture is rather:

a parallel random-access machine (parallel RAM
or PRAM), a shared-memory abstract machine.
https://en.wikipedia.org/wiki/Parallel_RAM

The above class of machines is not widely known.
But PRAM has also been studied, already in the '80s.

    Bye

    Maciej Woźniak schrieb:
[...]

    --- Synchronet 3.21a-Linux NewsLink 1.2
• From Maciej Woźniak@mlwozniak@wp.pl to sci.physics,sci.physics.relativity,comp.lang.prolog on Mon Dec 1 13:23:50 2025
    From Newsgroup: comp.lang.prolog

    On 12/1/2025 12:15 PM, Mild Shock wrote:
[...]
Technically they are not von Neumann architecture.
Unified memory with multiple tensor cores is
not von Neumann architecture.

    We can use von Neumann architecture
    to emulate other architectures, but as long as it
    is performed by our computers it is technically
    von Neumann's.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to sci.physics,sci.physics.relativity,comp.lang.prolog on Mon Dec 1 17:12:45 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Simulation is not so easy. You would need an
element of non-determinism, or, if you want,
call it randomness. Because PRAM has these

instructions: ERCW, CRCW, etc.

    - Concurrent read concurrent write (CRCW)—
    multiple processors can read and write. A
    CRCW PRAM is sometimes called a concurrent
    random-access machine.
    https://en.wikipedia.org/wiki/Parallel_RAM

Modelling via von Neumann what happens there
can be quite challenging. At least it doesn't
allow for a direct modelling.

What a later processor sees depends extremely
on the timing and on which processor "wins" the
write.
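
As a toy illustration of such a race (plain Python threads, not a
real PRAM; just a sketch): two writers hit the same cell, and which
value a later reader sees depends purely on timing.

import threading

cell = [0]

def writer(value):
    cell[0] = value  # concurrent write; the "winner" is timing-dependent

threads = [threading.Thread(target=writer, args=(v,)) for v in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(cell[0])  # may print 1 or 2, depending on which write wins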

Also, I don't know what it would buy you
intellectually to simulate a PRAM on a random
von Neumann machine. The von Neumann

machine could need more steps in total than the
PRAM, because it has to simulate the PRAM.
But I guess it's the intellectual questioning

that also needs a revision when confronted
with the new architecture of unified memory
and tensor processing cores.

    Bye

    Maciej Woźniak schrieb:
[...]
    We can use von Neumann architecture
    to emulate other architectures, but as long as it
    is performed by our computers it is technically
    von Neumann's.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to sci.physics,sci.physics.relativity,comp.lang.prolog on Mon Dec 1 17:31:48 2025
    From Newsgroup: comp.lang.prolog

    Hi,

PRAM effects are a little bit contrived in AI
accelerators, since they work with matrix tiles
that are locally cached at the tensor core.

But CRCW is quite cool for machine learning,
when the weights get updated. ChatGPT suggested
that I read this paper:

    Hogwild!: A Lock-Free Approach to
    Parallelizing Stochastic Gradient Descent
    https://arxiv.org/pdf/1106.5730

Didn't read it yet...
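
From the abstract, the idea seems to be roughly this (my sketch, not
code from the paper): several threads apply SGD updates to shared
weights without any lock, accepting benign races on the updates.

import threading
import numpy as np

w = np.zeros(10)                  # shared weights, deliberately unlocked
data = np.random.randn(1000, 10)
targets = data @ np.arange(10.0)  # synthetic linear-regression targets

def sgd_worker(rows, lr=0.01):
    global w
    for i in rows:
        x, y = data[i], targets[i]
        grad = (w @ x - y) * x    # gradient of the squared error
        w -= lr * grad            # lock-free, Hogwild-style update

chunks = np.array_split(np.arange(1000), 4)
threads = [threading.Thread(target=sgd_worker, args=(c,)) for c in chunks]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(np.round(w, 1))             # roughly [0, 1, 2, ..., 9]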

You might also have read the recent report on how
Google trained Gemini. They had to deal with other
issues as well, like the failure of a whole

tensor core.

    Bye

    Mild Shock schrieb:
[...]



    --- Synchronet 3.21a-Linux NewsLink 1.2
• From Maciej Woźniak@mlwozniak@wp.pl to sci.physics,sci.physics.relativity,comp.lang.prolog on Mon Dec 1 17:59:17 2025
    From Newsgroup: comp.lang.prolog

    On 12/1/2025 5:12 PM, Mild Shock wrote:
    Hi,

    Simulation is not so easy.

I've never said it is easy. Some randomness
or pseudorandomness has existed for a long time;
it's not enough for me to speak about a
different architecture.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to sci.physics,sci.physics.relativity,comp.lang.prolog on Mon Dec 1 18:02:35 2025
    From Newsgroup: comp.lang.prolog

    Hi,

The bottom line is often that PRAMs might be
closer to physics, especially for certain
machine learning algorithms or for questions

from modelling perception or action. You
might get better results if you model the
problem in terms of Boltzmann machines,

    or whatever from the arsenal of physics.

    Bye

    Mild Shock schrieb:
[...]




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to sci.physics,sci.physics.relativity,comp.lang.prolog on Mon Dec 1 18:05:12 2025
    From Newsgroup: comp.lang.prolog

    Hi,

[...]

    Bye

P.S.: What was also a little popular for a certain
moment in time was the idea of particle
swarm optimization, for machine learning or

for problem solving:

Particle swarm optimization
https://en.wikipedia.org/wiki/Particle_swarm_optimization

Not sure how much of it got superseded
by multi-sample updates, or some such.
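
For reference, the textbook scheme looks roughly like this (a sketch
with made-up hyperparameters, not any particular library): each
particle mixes inertia, a pull toward its own best point, and a pull
toward the swarm's best point.

import numpy as np

rng = np.random.default_rng(0)

def objective(x):                 # toy problem: minimum at (3, -1)
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

n, dim = 20, 2
pos = rng.uniform(-10, 10, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # inertia + pull toward personal best + pull toward global best
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([objective(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print(np.round(gbest, 2))         # close to [ 3. -1.]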

    Maciej Woźniak schrieb:
[...]


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to sci.physics,sci.physics.relativity,comp.lang.prolog on Mon Dec 1 18:08:22 2025
    From Newsgroup: comp.lang.prolog

    Hi,

[...]

    Bye

P.S.: [...]

Not sure how much of it got superseded;
it most likely survives in AlphaEvolve by Google,
which looks like a genetic algorithm thing, and that

is another name for this "physics".

    Maciej Woźniak schrieb:
[...]


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to sci.physics,sci.physics.relativity,comp.lang.prolog on Mon Dec 1 18:25:30 2025
    From Newsgroup: comp.lang.prolog

    Hi,

But the topic of physics could be much more
difficult to discuss than the topic of von
Neumann machines, like building your obligatory

hobby LED cube with a Raspberry Pi, von Neumann
style, with one thread. So I basically intend not
to respond anymore to this silly thread, since

I was flamed for these things being off topic to
physics. Yet just a few months ago these
AI pioneers got Physics Nobel Prizes:

Why did they get Physics Nobel Prizes?

    John J. Hopfield
    Geoffrey Hinton
    https://www.nobelprize.org/prizes/physics/2024/summary/

They both worked on neural networks:

    The Nobel Prize in Physics 2024 was awarded
    jointly to John J. Hopfield and Geoffrey Hinton
    "for foundational discoveries and inventions
    that enable machine learning with artificial
    neural networks"

    Bye

    P.S.: Not to mention from Google DeepMind:

    Demis Hassabis
    https://www.nobelprize.org/prizes/chemistry/2024/press-release/

It's also a premiere of an artificial intelligence Nobel:

    Demis Hassabis and John Jumper have developed an AI
    model to solve a 50-year-old problem: predicting
    proteins’ complex structures. These discoveries
    hold enormous potential.

    Mild Shock schrieb:
[...]



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to sci.physics,sci.physics.relativity,comp.lang.prolog on Tue Dec 2 17:18:59 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Do not underestimate Turing machines. I said neurons
in the "head". But a Turing machine has two parts, a "head"
and a moving "tape". It can then write ZFC formulas on

the "tape". But I haven't studied the proposals yet;

they are from here:

The Undecidability of BB(748)
Understanding Gödel's Incompleteness Theorems
Johannes Riebel - March 2023
https://www.ingo-blechschmidt.eu/assets/bachelor-thesis-undecidability-bb748.pdf

The problem was already proposed here:

    The Busy Beaver Frontier
    Scott Aaronson
    https://www.scottaaronson.com/papers/bb.pdf

    Bye

    Richard Damon schrieb:
    On 12/1/25 6:08 AM, Mild Shock wrote:
    Hi,

Quiz: How many neurons are necessary in the
head of a turning machine, to simulate ZFC?

    Which is just a category error, as ZFC is a set of definitions, and
thus not something that can be "simulated".

    Also, "Turning Machines" (if you mean Turing Machines) don't have
    "neurons".


You possibly have to look up some modelling
of the logic of ZFC by Bernays. I don't know the

    details but maybe check out:

    The Undecidability of BB(748)
Understanding Gödel's Incompleteness Theorems
Johannes Riebel - March 2023
https://www.ingo-blechschmidt.eu/assets/bachelor-thesis-undecidability-bb748.pdf

    Bye

    But that "Modeling" isn't the sort of thing you "simulate".

    One problem is we haven't found a way to actually "reason" with
    "neurons".


    Mild Shock schrieb:
[...]





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to sci.physics,sci.physics.relativity,comp.lang.prolog on Tue Dec 2 17:19:51 2025
    From Newsgroup: comp.lang.prolog

    Hi,

The head of a Turing machine is usually a finite
state machine. It digests the tape reading and
produces a new tape writing or head movement.

A finite state machine's complexity can be measured
in the number of states. Transitions between states
are labeled with the tape reading and the tape writing/

head movement. So the state is not what is written
on the tape; it's an internal state. It's relatively
easy to turn a finite state machine into an

artificial neural network. Already ChatGPT does that,
when it reads tokens and writes tokens, just like
a Turing machine.
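
As a tiny sketch of that claim (my toy encoding, nothing canonical):
a 2-state parity machine whose transition function is one matrix
multiplication over one-hot vectors, i.e. a single linear layer.

import numpy as np

# Parity FSM: next_state = state XOR input_bit.
# Encode the (state, input) pair one-hot: index = 2*state + input.
T = np.array([  # each row: one-hot of the next state
    [1, 0],     # (q0, 0) -> q0
    [0, 1],     # (q0, 1) -> q1
    [0, 1],     # (q1, 0) -> q1
    [1, 0],     # (q1, 1) -> q0
])

def step(state, bit):
    pair = np.zeros(4)
    pair[2 * state + bit] = 1.0      # one-hot (state, input)
    return int(np.argmax(pair @ T))  # the "layer" picks the next state

state = 0
for bit in [1, 1, 0, 1]:
    state = step(state, bit)
print(state)  # 1, the parity of the input bits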

    "A Turing machine is a mathematical model of
    computation describing an abstract machine that
    manipulates symbols on a strip of tape according
    to a table of rules"
    https://en.wikipedia.org/wiki/Turing_machine

It's really funny how people need some
ear-bleeding to understand the two sides,
symbolism and connectionism.

    Have Fun!

    Bye

    Mild Shock schrieb:
[...]
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to sci.physics,sci.physics.relativity,comp.lang.prolog on Tue Dec 2 17:20:40 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    You might also try this here:

McCulloch, Warren S.; Pitts, Walter (1943-12-01).
"A logical calculus of the ideas immanent in
nervous activity". The Bulletin of Mathematical
Biophysics. 5 (4): 115–133.
https://www.cs.cmu.edu/~epxing/Class/10715/reading/McCulloch.and.Pitts.pdf

It has a simple neuron model, and shows,
for example in Figure 1, how it can act
in a Boolean algebra way.

If you have Boolean algebra, you can also
build a finite state machine. You can encode
state as bit vectors.
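
A minimal sketch of that idea (my paraphrase of the Figure 1 style
neurons, not a reproduction from the paper): a unit fires iff the
weighted sum of its binary inputs reaches its threshold, which is
enough for AND, OR and NOT.

def neuron(inputs, weights, threshold):
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

AND = lambda a, b: neuron([a, b], [1, 1], 2)
OR = lambda a, b: neuron([a, b], [1, 1], 1)
NOT = lambda a: neuron([a], [-1], 0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b), NOT(a))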

    Bye

    Mild Shock schrieb:
[...]
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to sci.physics,sci.physics.relativity,comp.lang.prolog on Tue Dec 2 17:39:03 2025
    From Newsgroup: comp.lang.prolog

    Hi,

If you know BB(N), you have a halting decision procedure
for N-state Turing machines. Since BB(N) is the maximum
number S(N) of steps before halting,

you can just run an arbitrary Turing machine, and when
its step count exceeds S(N), you know it is not a halting
Turing machine.

So knowing BB(N) makes the halting problem decidable.
But the halting problem is not decidable. So maybe there
must be some M where BB(M) has no S(M), no

maximum. The idea is to construct Turing machines that
relate to consistency problems; consistency problems
can be even harder than halting problems, as we might

ask for the opposite: does a program never halt?
Since "never halts" could be interpreted as: no
inconsistency is derived. Again, knowing BB(N) would

help, since decidability via S(N) is established both
ways, saying "Yes" to halt, and saying "No" to not halt.
So we can show a reducibility from consistency

to busy beaver, I guess.
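
The decision procedure itself is just a step-bounded simulation.
A minimal sketch (my own toy machine encoding, nothing standard):

def halts(delta, S):
    # delta maps (state, symbol) -> (write, move, next_state);
    # a missing entry means the machine halts. S is the claimed S(N).
    tape, pos, state = {}, 0, 0
    for _ in range(S):
        action = delta.get((state, tape.get(pos, 0)))
        if action is None:
            return True               # halted within S(N) steps
        write, move, state = action
        tape[pos] = write
        pos += move
    return False                      # survived S(N) steps: never halts

# A machine with no transitions halts immediately; S(1) = 1.
print(halts({}, S=1))                 # True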

    Bye
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to sci.physics,sci.physics.relativity,comp.lang.prolog on Tue Dec 2 23:18:27 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    I don't have a problem with the notion of computability.
What makes you think that citing an interesting research
paper implies that I have a problem with computability?

    Could you explain yourself?

    Bye

    Richard Damon schrieb:
    On 12/2/25 11:06 AM, Mild Shock wrote:
    Hi,

Do not underestimate Turing machines. I said neurons
in the "head". But a Turing machine has two parts, a "head"
and a moving "tape". It can then write ZFC formulas on

    I think your problem is you just don't understand what computing is,
    as used in Computation theory.


    Mild Shock schrieb:
[...]

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Thomas Heger@ttt_heg@web.de to sci.physics,sci.physics.relativity,comp.lang.prolog on Wed Dec 3 07:17:27 2025
    From Newsgroup: comp.lang.prolog

Am Montag, 01.12.2025 um 13:23 schrieb Maciej Woźniak:
[...]
    We can use von Neumann architecture
    to emulate other architectures, but as long as it
    is performed by our computers it is technically
    von Neumann's.

Did you know that 'von Neuman architecture' was actually invented and patented by Konrad Zuse in Germany in the early 1930s?

    The liberators stole it from Zuse (like zillions of other patents from
    other German inventors).

    TH
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Python@python@cccp.invalid to sci.physics,sci.physics.relativity,comp.lang.prolog on Wed Dec 3 06:46:16 2025
    From Newsgroup: comp.lang.prolog

    Le 03/12/2025 à 07:11, Thomas Heger a écrit :
Am Montag, 01.12.2025 um 13:23 schrieb Maciej Woźniak:
[...]
Did you know that 'von Neuman architecture' was actually invented and patented by Konrad Zuse in Germany in the early 1930s?

    The liberators stole it from Zuse (like zillions of other patents from
    other German inventors).

    In this specific case: this is completely WRONG.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Thomas 'PointedEars' Lahn@PointedEars@web.de to sci.physics,sci.physics.relativity,comp.lang.prolog on Wed Dec 3 08:02:15 2025
    From Newsgroup: comp.lang.prolog

    Thomas Heger wrote:
[...]

Did you know that 'von Neuman architecture'

    It really is spelled _von Neumann_, named after the Hungarian-American
polymath John von Neumann. He was born (as Neumann János Lajos) into a non-observant Jewish family, and raised in Budapest, then in the Empire of Austria-Hungary. His family name may be of German origin.

    <https://en.wikipedia.org/wiki/John_von_Neumann#Life_and_education>

was actually invented and patented by Konrad Zuse in Germany in the early 1930s?

    NOT true. Von Neumann's architecture "was based on the work of J. Presper Eckert and John Mauchly, inventors of ENIAC and its successor, EDVAC."

    <https://en.wikipedia.org/wiki/John_von_Neumann#Computer_science>

    ENIAC (completed in 1945) and EDVAC (completed in 1949, in operation from
    1951 to 1962) were "programmable, electronic, general-purpose digital computers". They were NOT based on or copies of the Z series of computers
    as invented and built by Konrad Zuse; the first computer of that series that was fully digital was the Z5, ordered in 1950 and delivered in 1953:

    <https://en.wikipedia.org/wiki/ENIAC>
<https://en.wikipedia.org/wiki/EDVAC>
<https://en.wikipedia.org/wiki/Z5_(computer)>

    The liberators stole it from Zuse (like zillions of other patents from
    other German inventors).

    Cite evidence.

    F'up2 comp.lang.misc
    --
    PointedEars

    Twitter: @PointedEars2
    Please do not cc me. / Bitte keine Kopien per E-Mail.
    --- Synchronet 3.21a-Linux NewsLink 1.2