Hi,
I wonder why the Coq proof should even be
different from anything that AI could produce.
It's not a typical Euclid-style proof in a few steps;
rather, it also relies on enumeration, just like the
Flyspeck proof of the Kepler Conjecture. So
let's see what happens next: could AlphaEvolve
find the sixth Busy Beaver?
Bye
P.S.: Here is a picture of an old Busy Beaver ASIC
(Application-Specific Integrated Circuit):
Application: Fun
Technology: 1500
Manufacturer: VLSI Tech
Type: Semester Thesis
Package: DIP64
Dimensions: 3200 μm x 3200 μm
Gates: 2 kGE
Voltage: 5 V
Clock: 20 MHz
The Busy Beaver Coprocessor has been designed to solve the Busy Beaver Function for 5 states. This function (also known as the Rado's Sigma Function) is an uncomputable problem from information theory. The input argument is a natural number 'n' that represents the complexity of an algorithm described as a Turing Machine. http://asic.ethz.ch/cg/1990/Busy_Beaver.html
Mild Shock schrieb:
Hi,
What we thought:
Prediction 5. It will never be proved that
Σ(5) = 4,098 and S(5) = 47,176,870.
-- Allen H. Brady, 1990.
How it started:
To investigate AlphaEvolve’s breadth, we applied
the system to over 50 open problems in mathematical
analysis, geometry, combinatorics and number theory.
The system’s flexibility enabled us to set up most
experiments in a matter of hours. In roughly 75% of
cases, it rediscovered state-of-the-art solutions, to
the best of our knowledge.
https://deepmind.google/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/
How it's going:
We prove that S(5) = 47,176,870 using the Coq proof
assistant. The Busy Beaver value S(n) is the maximum
number of steps that an n-state 2-symbol Turing machine
can perform from the all-zero tape before halting, and
S was historically introduced by Tibor Radó in 1962 as
one of the simplest examples of an uncomputable function.
The proof enumerates 181,385,789 Turing machines with 5
states and, for each machine, decides whether it halts or
not. Our result marks the first determination of a new
Busy Beaver value in over 40 years and the first Busy
Beaver value ever to be formally verified, attesting to the
effectiveness of massively collaborative online research.
https://arxiv.org/pdf/2509.12337
They claim not to have used much AI. But could,
for example, AlphaEvolve nevertheless do it more or
less autonomously, and find the sixth Busy Beaver?
Bye
Hi,
1) Classical computing = Boolean logic + von Neumann architecture
For decades, all mainstream computation was built on:
Boolean algebra
Logic gates
Scalar operations executed sequentially
Memory and compute as separate blocks
Even floating-point arithmetic was implemented on top of Boolean logic.
This shaped how programmers think — algorithms expressed
as symbolic operations, control flow, and discrete steps.
2) AI accelerators break from that model
Modern accelerators — GPUs, TPUs, NPUs, and custom matrix
engines — use a different computational substrate:
Instead of Boolean logic:
→ Bulk linear algebra over vectors/tensors
Instead of instruction-by-instruction control:
→ Dataflow graphs
Instead of sequential compute on registers:
→ Massively parallel fused-multiply-add units
Instead of manually orchestrated loops:
→ High-level declarative specs (XLA, MLIR, TVM)
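To make the contrast concrete, here is a toy sketch in plain Python (NumPy only stands in for whatever backend actually runs the fused matrix kernel; nothing accelerator-specific is assumed):

```python
# Toy contrast between scalar, instruction-by-instruction compute and
# bulk linear algebra. NumPy stands in for whatever backend (GPU/TPU)
# would execute the fused matrix kernel in parallel.
import numpy as np

def matmul_scalar(a, b):
    """Classical style: explicit loops, one multiply-add at a time."""
    n, m, k = len(a), len(b), len(b[0])
    out = [[0.0] * k for _ in range(n)]
    for i in range(n):
        for j in range(k):
            for t in range(m):
                out[i][j] += a[i][t] * b[t][j]
    return out

def matmul_bulk(a, b):
    """Accelerator style: one declarative op; parallelism is the backend's job."""
    return np.asarray(a) @ np.asarray(b)

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
print(matmul_scalar(a, b))
print(matmul_bulk(a, b))
```

The point is not speed here, just the shift in programming model: the first version spells out every step, the second hands a whole tensor operation to the runtime.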
Have Fun!
Bye
Hi,
I am doing the wake-up call until everybody
gets ear-bleeding. It is just too cringe to
see the symbolic computing morons struggle
with connectionism. But given that humans
have brains made of neurons, it should be obvious
that symbolism and connectionism are just two
sides of the same coin.
Good Luck!
Bye
On 12/1/2025 11:25 AM, Mild Shock wrote:
2) AI accelerators break from that model
No, they don't, they just add one (or some)
more layer on top of it.
On the other hand, neural networks were
always outside. So were quantum computers.
It was never the only one and never the
most powerful one.
Hi,
You wrote:
No, they don't, they just add one (or some)
more layer on top of it.
Technically they are not von Neumann architecture.
Unified Memory with Multiple Tensor Cores is
not von Neumann architecture.
On 12/1/2025 12:15 PM, Mild Shock wrote:
Hi,
You wrote:
No, they don't, they just add one (or some)
more layer on top of it.
Technically they are not von Neumann architecture.
Unified Memory with Multiple Tensor Cores is
not von Neumann architecture.
We can use von Neumann architecture
to emulate other architectures, but as long as it
is performed by our computers it is technically
von Neumann's.
Hi,
Simulation is not so easy. You would need an
element of non-determinism, or, if you want,
call it randomness. Because a PRAM has these
access modes, ERCW, CRCW, etc.:
- Concurrent read concurrent write (CRCW)—
multiple processors can read and write. A
CRCW PRAM is sometimes called a concurrent
random-access machine.
https://en.wikipedia.org/wiki/Parallel_RAM
Modelling via von Neumann what happens there
can be quite challenging. At least it doesn't
allow for a direct modelling.
What a later processor sees depends extremely
on the timing and on which processor "wins" the
write.
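A minimal sketch of that point, using ordinary Python threads as stand-ins for PRAM processors (the shared-cell layout and the worker count are invented for the illustration):

```python
# Toy CRCW-style experiment: several workers write different values to
# the same cell without synchronization; which value "wins" depends on
# scheduling, so repeated runs need not agree.
import threading

def crcw_round(num_workers=8):
    cell = {"value": None}              # the shared memory cell
    def worker(pid):
        cell["value"] = pid             # concurrent write, no lock
    threads = [threading.Thread(target=worker, args=(p,))
               for p in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return cell["value"]

# Several rounds may report different winners depending on timing.
print([crcw_round() for _ in range(5)])
```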
Also, I don't know what it would buy you
intellectually to simulate a PRAM on an ordinary
von Neumann machine. The von Neumann
machine could need more steps in total than the
PRAM, because it has to simulate the PRAM.
But I guess it is the intellectual framing
that also needs a revision when confronted
with the new architecture of unified memory
and tensor processing cores.
Bye
Maciej Woźniak schrieb:
We can use von Neumann architecture
to emulate other architectures, but as long as it
is performed by our computers it is technically
von Neumann's.
Hi,
PRAM effects are a little bit contrived in AI
accelerators, since they work with matrix tiles
that are locally cached on the tensor core.
But CRCW is quite cool for machine learning,
when the weights get updated. ChatGPT suggested
that I read this paper:
Hogwild!: A Lock-Free Approach to
Parallelizing Stochastic Gradient Descent
https://arxiv.org/pdf/1106.5730
I haven't read it yet...
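Going only by the abstract, a rough toy of the idea as I understand it: several threads run SGD steps against one shared weight vector with no locking at all (the least-squares problem, step size and thread count below are made up for the sketch):

```python
# Toy Hogwild!-style SGD: threads update shared weights without locks.
# Least-squares on random data; hyperparameters are arbitrary.
import threading
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
true_w = rng.normal(size=10)
y = X @ true_w

w = np.zeros(10)   # shared parameter vector, updated without any lock

def sgd_worker(seed, num_steps=2000, lr=0.01):
    local_rng = np.random.default_rng(seed)
    for _ in range(num_steps):
        i = local_rng.integers(len(X))
        grad = (X[i] @ w - y[i]) * X[i]   # gradient of 0.5*(x.w - y)^2
        w[:] = w - lr * grad              # unsynchronized, lock-free update

threads = [threading.Thread(target=sgd_worker, args=(s,)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("distance to true weights:", np.linalg.norm(w - true_w))
```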
You might also have read the recent report on how
Google trained Gemini. They had to deal with other
issues as well, like the failure of a whole
tensor core.
Bye
On 12/1/2025 5:12 PM, Mild Shock wrote:
Hi,
Simulation is not so easy.
I've never said it is easy. Some randomness
or pseudorandomness has existed for a long time;
it's not enough for me to speak about a
different architecture.
Hi,
The bottom line is often that PRAMs might be
closer to physics, especially for certain
machine learning algorithms or for questions
from modelling perception or action. You
might get better results if you model the
problem in terms of Boltzmann machines,
or whatever else from the arsenal of physics.
Bye
P.S.: What was also a little popular for a certain
moment in time was the idea of particle
swarm optimization, for machine learning or
for problem solving:
Particle swarm optimization https://en.wikipedia.org/wiki/Particle_swarm_optimization
Not sure how much of it got superseded; it most
likely survives in AlphaEvolve by Google, which
looks like a genetic algorithm thing, which
is another name for this "physics".
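From memory, a minimal particle swarm sketch on a toy objective (swarm size, iteration count and the usual inertia/cognitive/social coefficients are just common defaults, nothing tuned):

```python
# Minimal particle swarm optimization on a toy 2-D objective.
# Coefficients (inertia, cognitive, social) are common textbook values.
import numpy as np

def pso(objective, dim=2, particles=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, size=(particles, dim))   # positions
    v = np.zeros_like(x)                                 # velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy objective: shifted sphere, minimum at (1, -2).
print(pso(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2))
```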
Maciej Woźniak schrieb:
On 12/1/2025 5:12 PM, Mild Shock wrote:
Hi,
Simulation is not so easy.
I've never said it is easy. Some randomness
or pseudorandomness existed for a long time,
it's not enough for me to speak about a
different architecture.
On 12/1/25 6:08 AM, Mild Shock wrote:
Hi,
Quizz: How much neurons are necessary in the
head of turning machine, to simulate ZFC?
You have possibly to look up some modelling
of the logic of ZFC by Bernays. Don't know the
details but maybe check out:
The Undecidability of BB(748)
Understanding Godels Incompleteness Theorems
Johannes Riebel - March 2023
https://www.ingo-blechschmidt.eu/assets/bachelor-thesis-undecidability-bb748.pdf
Bye
Which is just a category error, as ZFC is a set of definitions, and
thus not something that can be "simulated".
Also, "Turning Machines" (if you mean Turing Machines) don't have
"neurons".
But that "Modeling" isn't the sort of thing you "simulate".
One problem is we haven't found a way to actually "reason" with
"neurons".
Hi,
Quizz: How much neurons are necessary in the
head of turning machine, to simulate ZFC?
You have possibly to look up some modelling
of the logic of ZFC by Bernays. Don't know the
details but maybe check out:
The Undecidability of BB(748)
Understanding Godels Incompleteness Theorems
Johannes Riebel - March 2023 https://www.ingo-blechschmidt.eu/assets/bachelor-thesis-undecidability-bb748.pdf
Bye
Hi,
Do not underestimate Turing machines. I said neurons
in the "head". But a Turing machine has two parts, a "head"
and a moving "tape". It can then write ZFC formulas on
the "tape". I haven't studied the proposals yet,
but it is from here:
The Undecidability of BB(748)
Understanding Gödel’s Incompleteness Theorems
Johannes Riebel - March 2023 https://www.ingo-blechschmidt.eu/assets/bachelor-thesis-undecidability-bb748.pdf
The problem was already posed here:
The Busy Beaver Frontier
Scott Aaronson
https://www.scottaaronson.com/papers/bb.pdf
Bye
Richard Damon schrieb:
On 12/1/25 6:08 AM, Mild Shock wrote:
Quizz: How much neurons are necessary in the
head of turning machine, to simulate ZFC?
Which is just a category error, as ZFC is a set of definitions, and
thus not something that can be "simulated".
Also, "Turning Machines" (if you mean Turing Machines) don't have
"neurons".
But that "Modeling" isn't the sort of thing you "simulate".
One problem is we haven't found a way to actually "reason" with
"neurons".
Hi,
The head of a Turing machine is usually a finite
state machine. It digests the tape reading and
produces a new tape writing or head movement.
A finite state machine's complexity can be measured
by the number of states. Transitions between states
are labeled with the tape reading and the tape writing /
head movement. So the state is not what is written
on the tape; it is an internal state. It is relatively
easy to turn a finite state machine into an
artificial neural network. ChatGPT already does
something like that, when it reads tokens and writes
tokens, just like a Turing machine.
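A tiny sketch of that correspondence: encode states and tape symbols as one-hot vectors and the transition table as a 0/1 tensor, so one step of the head becomes a linear (matrix-style) operation. The two-state table below is made up; it is not any actual busy beaver machine:

```python
# Toy encoding of a finite state controller as a linear map over
# one-hot vectors. The transition table here is invented for the sketch.
import numpy as np

STATES, SYMBOLS = 2, 2
# next_state[s, a] = state reached from state s on reading symbol a
next_state = np.array([[1, 0],
                       [0, 1]])

# Tensor T[p, s, a] = 1 iff reading symbol a in state s leads to state p.
T = np.zeros((STATES, STATES, SYMBOLS))
for s in range(STATES):
    for a in range(SYMBOLS):
        T[next_state[s, a], s, a] = 1.0

def step(state_onehot, symbol_onehot):
    """One head step as a bilinear (matrix-like) operation."""
    return np.einsum('psa,s,a->p', T, state_onehot, symbol_onehot)

s = np.array([1.0, 0.0])     # start in state 0
a = np.array([0.0, 1.0])     # reading symbol 1
print(step(s, a))            # one-hot vector of the next state
```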
"A Turing machine is a mathematical model of
computation describing an abstract machine that
manipulates symbols on a strip of tape according
to a table of rules"
https://en.wikipedia.org/wiki/Turing_machine
It is really funny how people really need some
ear-bleeding to understand the two sides,
symbolism and connectionism.
Have Fun!
Bye
On 12/2/25 11:06 AM, Mild Shock wrote:
Hi,
Do not underestimate turing machines. I said neurons
in the "head". But a turing machine has two parts, a "head"
and a moving "tape". It can then write ZFC formulas on
a "tape".
I think your problem is you just don't understand what computing is,
as used in Computation theory.
Hi,
If you know BB(N), you have a halting decision procedure
for N-state Turing machines, since BB(N) is the maximum
number S(N) of steps before halting:
you can just run an arbitrary Turing machine, and when
its step count exceeds S(N), you know it is not a halting
Turing machine.
So knowing BB(N) makes the halting problem decidable.
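A hedged sketch of that decision procedure, assuming we already know the bound S(n); the machine encoding (a dict of transitions) is ad hoc, just for illustration:

```python
# Given a known bound S(n), halting of any n-state, 2-symbol Turing
# machine from the blank tape becomes decidable: run it for S(n) steps
# and see whether it stops. The encoding here is ad hoc, just a sketch.
from collections import defaultdict

def halts_within(delta, bound, start='A'):
    """delta maps (state, symbol) -> (write, move, next_state); a missing
    entry means the machine halts in that configuration."""
    tape = defaultdict(int)      # blank tape of 0s, unbounded both ways
    pos, state = 0, start
    for _ in range(bound + 1):
        action = delta.get((state, tape[pos]))
        if action is None:
            return True          # halted within the bound
        write, move, state = action
        tape[pos] = write
        pos += 1 if move == 'R' else -1
    return False                 # ran past S(n) steps, so it never halts

# Toy 2-state machine: A writes 1, moves right, goes to B; B has no rule
# on 0, so it halts there. S(2) = 6 is the known bound for 2 states.
toy = {('A', 0): (1, 'R', 'B')}
print(halts_within(toy, bound=6))    # True
```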
But the halting problem is not decidable. So S cannot
be a computable function: no single algorithm can
produce S(N) for every N, even though each value exists.
The idea is to construct Turing machines that
relate to consistency problems. Consistency problems
can be even harder than halting problems; we might
ask for the opposite, does a program never halt?
Never halting could be interpreted as: no
inconsistency is ever derived. Again, knowing BB(N)
would help, since decidability via S(N) works both
ways, saying "Yes" to halt and saying "No" to not halt.
So we can show a reduction from consistency
to the Busy Beaver, I guess.
Bye
Am Montag, 01.12.2025 um 13:23 schrieb Maciej Woźniak:
On 12/1/2025 12:15 PM, Mild Shock wrote:
Hi,
You wrote:
No, they don't, they just add one (or some)
more layer on top of it.
Technically they are not von Neumann architecture.
Unified Memory with Multiple Tensor Cores is
not von Neumann architecture.
We can use von Neumann architecture
to emulate other architectures, but as long as it
is performed by our computers it is technically
von Neumann's.
Did you know that the "von Neumann architecture" was actually invented
and patented by Konrad Zuse in Germany in the early 1930s?
The liberators stole it from Zuse (like zillions of other patents from
other German inventors).