Hello, Ben.
Ben Bacarisse <ben@bsb.me.uk> wrote:
Alan Mackenzie <acm@muc.de> writes:
[ .... ]
In comp.theory olcott <polcott333@gmail.com> wrote:...
On 7/21/2025 10:52 AM, Alan Mackenzie wrote:
More seriously, you told Ben Bacarisse on this newsgroup that you had
fully worked out turing machines which broke a proof of the Halting
Theorem. It transpired you were lying.
Just for the record, here is what PO said late 2018 early 2019:
On 12/14/2018 5:27 PM, peteolcott wrote that he had
"encoded all of the exact TMD [Turing Machine Description]
instructions of the Linz Turing machine H that correctly decides
halting for its fully encoded input pair: (Ĥ, Ĥ)."
Date: Sat, 15 Dec 2018 11:03:21 -0600
"Everyone has claimed that H on input pair (Ĥ, Ĥ) meeting the Linz
specs does not exist. I now have a fully encoded pair of Turing
Machines H / Ĥ proving them wrong."
Date: Sat, 15 Dec 2018 01:28:22 -0600
"I now have an actual H that decides actual halting for an actual (Ĥ,
Ĥ) input pair. I have to write the UTM to execute this code, that
should not take very long. The key thing is the H and Ĥ are 100%
fully encoded as actual Turing machines."
Date: Sun, 16 Dec 2018 09:02:50 -0600
"I am waiting to encode the UTM in C++ so that I can actually execute
H on the input pair: (Ĥ, Ĥ). This should take a week or two [...] it
is exactly and precisely the Peter Linz H and Ĥ, with H actually
deciding input pair: (Ĥ, Ĥ)"
Date: Fri, 11 Jan 2019 16:24:36 -0600
"I provide the exact ⊢* wildcard states after the Linz H.q0 and after >> Ĥ.qx (Linz incorrectly uses q0 twice) showing exactly how the actual
Linz H would correctly decide the actual Linz (Ĥ, Ĥ)."
Thanks for clarifying that.
I think I can understand a bit what it must feel like to be on the
receiving end of all this. Firstly you know through training that what you're being told is definitely false, but on the other hand you don't
like to believe that somebody is lying; somehow you give them the
(temporary) benefit of the doubt. Then comes the depressing restoration
of truth and reality.
When the topic came up again for
discussion, you failed to deny writing the original lie.
That is the closest thing to a lie that I ever said.
When I said this I was actually meaning that I had
fully operational C code that is equivalent to a
Turing Machine.
I think it was a full blown lie intended to deceive. Did you ever
apologise to Ben for leading him up the garden path like that?
No, never. In fact he kept insulting me until it became so egregious
that I decided to have nothing more to do with him.
Somehow, that doesn't surprise me. I only post a little on this group
now (I never really posted much more) for similar reasons. I care about
the truth, including mathematical truth; although I've never specialised
in computation theory or mathematical logic, I care when these are
falsified by ignorant posters.
What really got my goat this time around was PO stridently and
hypocritically accusing others of being liars, given his own record.
What he did do was take months to slowly walk back the claim he made in
December 2018. H and Ĥ became "virtual machines" and then started to be
"sufficiently equivalent" to Linz's H and Ĥ rather the "exactly and
precisely the Peter Linz H and Ĥ". By Sep 2020 he didn't even have it
anymore:
"I will soon have a partial halt decider sufficiently equivalent to
the Linz H correctly deciding halting on the Linz Ĥ"
It took nearly two years to walk back the clear and explicit claim to
this vague and ill-defined claim of not having something!
Yes. I've watched the latter part of this process.
You have not and never have had "fully operational C code" that breaks a
proof of the Halting Theorem. To say you had this, when you clearly
didn't, was a lie.
He also tried to pretend that the C code (which, as you say, he didn't
have) is what he always meant when he wrote the words I quoted above. I
defy anyone to read those words with PO's later claim that he meant C
code all along and not conclude that he was just lying again to try to
save some little face.
What amazes me is he somehow thinks that theorems don't apply to him.
Of course, he doesn't understand what a theorem is, somehow construing
it as somebody's opinion. If it's just opinion, then his contrasting
opinion must be "just as good". Or something like that.
C code does not have "TMD instructions" that can be encoded. TMs (as in
Linz) do. When executed, C code has no "exact ⊢* wildcard states after
the Linz H.q0" for PO to show. A TM would. C code does not need a UTM
to execute it (a TM does) and if he really meant that he had C code all
along, does anyone think he could write a UTM for C in "a week or two"?
It is so patently obvious that he just had a manic episode in Dec 2018
that caused him to post all those exuberant claims, and so patently
obvious that he simply can't admit being wrong about anything that I
ended up feeling rather sorry for him -- until the insults started up
again.
That's another reason I don't post much, here. I really don't feel like being insulted by somebody of PO's intellectual stature.
Have a good Sunday!
--
Ben.
On 7/26/2025 12:31 PM, Alan Mackenzie wrote:
[ .... ]
The error of all of the halting problem proofs is
that they require a Turing machine halt decider to
report on the behavior of a directly executed
Turing machine.
It is common knowledge that no Turing machine decider
can take another directly executing Turing machine as
an input, thus the above requirement is not precisely
correct.
When we correct the error of this incorrect requirement
it becomes a Turing machine decider indirectly reports
on the behavior of a directly executing Turing machine
through the proxy of a finite string description of this
machine.
Now I have proven and corrected the error of all of the
halting problem proofs.
----
Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
In comp.theory olcott <polcott333@gmail.com> wrote:
The error of all of the halting problem proofs is
that they require a Turing machine halt decider to
report on the behavior of a directly executed
Turing machine.
It is common knowledge that no Turing machine decider
can take another directly executing Turing machine as
an input, thus the above requirement is not precisely
correct.
When we correct the error of this incorrect requirement
it becomes a Turing machine decider indirectly reports
on the behavior of a directly executing Turing machine
through the proxy of a finite string description of this
machine.
Now I have proven and corrected the error of all of the
halting problem proofs.
No you haven't, the subject matter is too far beyond your intellectual capacity.
On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
In comp.theory olcott <polcott333@gmail.com> wrote:
The error of all of the halting problem proofs is that they require a
Turing machine halt decider to report on the behavior of a directly
executed Turing machine.
It is common knowledge that no Turing machine decider can take another
directly executing Turing machine as an input, thus the above
requirement is not precisely correct.
When we correct the error of this incorrect requirement it becomes a
Turing machine decider indirectly reports on the behavior of a
directly executing Turing machine through the proxy of a finite string
description of this machine.
Now I have proven and corrected the error of all of the halting
problem proofs.
No you haven't, the subject matter is too far beyond your intellectual
capacity.
It only seems to you that I lack understanding because you are so sure
that I must be wrong that you make sure to totally ignore the subtle
nuances of meaning that proves I am correct.
No Turing machine based (at least partial) halt decider can possibly *directly* report on the behavior of any directly executing Turing
machine. The best that any of them can possibly do is indirectly report
on this behavior through the proxy of a finite string machine
description.
On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:
On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
In comp.theory olcott <polcott333@gmail.com> wrote:
The error of all of the halting problem proofs is that they require a
Turing machine halt decider to report on the behavior of a directly
executed Turing machine.
It is common knowledge that no Turing machine decider can take another
directly executing Turing machine as an input, thus the above
requirement is not precisely correct.
When we correct the error of this incorrect requirement it becomes a
Turing machine decider indirectly reports on the behavior of a
directly executing Turing machine through the proxy of a finite string
description of this machine.
Now I have proven and corrected the error of all of the halting
problem proofs.
No you haven't, the subject matter is too far beyond your intellectual
capacity.
It only seems to you that I lack understanding because you are so sure
that I must be wrong that you make sure to totally ignore the subtle
nuances of meaning that proves I am correct.
No Turing machine based (at least partial) halt decider can possibly
*directly* report on the behavior of any directly executing Turing
machine. The best that any of them can possibly do is indirectly report
on this behavior through the proxy of a finite string machine
description.
Partial decidability is not a hard problem.
/Flibble
On 7/26/2025 2:52 PM, Mr Flibble wrote:
On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:
On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
In comp.theory olcott <polcott333@gmail.com> wrote:
The error of all of the halting problem proofs is that they require a
Turing machine halt decider to report on the behavior of a directly
executed Turing machine.
It is common knowledge that no Turing machine decider can take another
directly executing Turing machine as an input, thus the above
requirement is not precisely correct.
When we correct the error of this incorrect requirement it becomes a
Turing machine decider indirectly reports on the behavior of a
directly executing Turing machine through the proxy of a finite string
description of this machine.
Now I have proven and corrected the error of all of the halting
problem proofs.
No you haven't, the subject matter is too far beyond your intellectual
capacity.
It only seems to you that I lack understanding because you are so sure
that I must be wrong that you make sure to totally ignore the subtle
nuances of meaning that proves I am correct.
No Turing machine based (at least partial) halt decider can possibly
*directly* report on the behavior of any directly executing Turing
machine. The best that any of them can possibly do is indirectly report
on this behavior through the proxy of a finite string machine
description.
Partial decidability is not a hard problem.
/Flibble
My point is that all of the halting problem proofs
are wrong when they require a Turing machine decider
H to report on the behavior of machine M on input i
because machine M is not in the domain of any Turing
machine decider. Only finite strings such as ⟨M⟩ the
Turing machine description of machine M are its
domain.
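As a minimal sketch of the uncontroversial part of that point (a decider's domain is finite strings, not running machines), here is a toy C example; the description language and the function halts_on_empty_tape are invented for illustration and are not from Linz or from anyone in this thread:

#include <stdio.h>
#include <string.h>

/* Toy "machine descriptions": the finite string "HALT" describes a machine
   that halts at once, "LOOP" one that never halts.  The decider inspects
   only the string; it never takes a running machine as an argument. */
int halts_on_empty_tape(const char *machine_description)
{
    if (strcmp(machine_description, "HALT") == 0) return 1; /* halts       */
    if (strcmp(machine_description, "LOOP") == 0) return 0; /* never halts */
    return -1; /* outside the fragment this partial decider handles        */
}

int main(void)
{
    printf("%d\n", halts_on_empty_tape("HALT")); /* prints 1 */
    printf("%d\n", halts_on_empty_tape("LOOP")); /* prints 0 */
    return 0;
}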
On 7/26/2025 2:58 PM, olcott wrote:
On 7/26/2025 2:52 PM, Mr Flibble wrote:
On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:
On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
In comp.theory olcott <polcott333@gmail.com> wrote:
The error of all of the halting problem proofs is that they require a
Turing machine halt decider to report on the behavior of a directly
executed Turing machine.
It is common knowledge that no Turing machine decider can take another
directly executing Turing machine as an input, thus the above
requirement is not precisely correct.
When we correct the error of this incorrect requirement it becomes a
Turing machine decider indirectly reports on the behavior of a
directly executing Turing machine through the proxy of a finite string
description of this machine.
Now I have proven and corrected the error of all of the halting
problem proofs.
No you haven't, the subject matter is too far beyond your intellectual
capacity.
It only seems to you that I lack understanding because you are so sure
that I must be wrong that you make sure to totally ignore the subtle
nuances of meaning that proves I am correct.
No Turing machine based (at least partial) halt decider can possibly
*directly* report on the behavior of any directly executing Turing
machine. The best that any of them can possibly do is indirectly
report
on this behavior through the proxy of a finite string machine
description.
Partial decidability is not a hard problem.
/Flibble
My point is that all of the halting problem proofs
are wrong when they require a Turing machine decider
H to report on the behavior of machine M on input i
because machine M is not in the domain of any Turing
machine decider. Only finite strings such as ⟨M⟩ the
Turing machine description of machine M are its
domain.
Definition of Turing Machine Ĥ
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
if Ĥ applied to ⟨Ĥ⟩ halts, and // incorrect requirement
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
if Ĥ applied to ⟨Ĥ⟩ does not halt. // incorrect requirement
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
(d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
(e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...
The fact that the correctly simulated input
specifies recursive simulation prevents the
simulated ⟨Ĥ⟩ from ever reaching its simulated
final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.
This is not contradicted by the fact that
Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
the domain of every Turing machine computed function.
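A small illustrative C sketch of what the (a)-(f) steps above claim about nesting (this is not Linz's construction and not anyone's actual decider): each level of "embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩" only starts another level, so no level reaches a final state on its own; the invented bound MAX_DEPTH stands in for an external abort.

#include <stdio.h>

#define MAX_DEPTH 5 /* stands in for "the outer simulator gives up here" */

/* Each call models one level of "embedded_H simulates <H^> <H^>". */
void simulate_H_hat(int depth)
{
    printf("level %d: simulated H^ invokes embedded_H <H^> <H^>\n", depth);
    if (depth >= MAX_DEPTH) {
        printf("level %d: cut off by the outer simulator\n", depth);
        return;
    }
    simulate_H_hat(depth + 1); /* step (f): yet another nested simulation */
}

int main(void)
{
    simulate_H_hat(1);
    return 0;
}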
On 7/26/25 3:58 PM, olcott wrote:
On 7/26/2025 2:52 PM, Mr Flibble wrote:
On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:
On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
In comp.theory olcott <polcott333@gmail.com> wrote:
The error of all of the halting problem proofs is that they require a
Turing machine halt decider to report on the behavior of a directly
executed Turing machine.
It is common knowledge that no Turing machine decider can take another
directly executing Turing machine as an input, thus the above
requirement is not precisely correct.
When we correct the error of this incorrect requirement it becomes a
Turing machine decider indirectly reports on the behavior of a
directly executing Turing machine through the proxy of a finite string
description of this machine.
Now I have proven and corrected the error of all of the halting
problem proofs.
No you haven't, the subject matter is too far beyond your intellectual
capacity.
It only seems to you that I lack understanding because you are so sure
that I must be wrong that you make sure to totally ignore the subtle
nuances of meaning that proves I am correct.
No Turing machine based (at least partial) halt decider can possibly
*directly* report on the behavior of any directly executing Turing
machine. The best that any of them can possibly do is indirectly
report
on this behavior through the proxy of a finite string machine
description.
Partial decidability is not a hard problem.
/Flibble
My point is that all of the halting problem proofs
are wrong when they require a Turing machine decider
H to report on the behavior of machine M on input i
because machine M is not in the domain of any Turing
machine decider. Only finite strings such as ⟨M⟩ the
Turing machine description of machine M are its
domain.
No, you just prove that you are too stupid to understand how
representations work.
On 7/26/2025 5:49 PM, olcott wrote:
On 7/26/2025 2:58 PM, olcott wrote:
On 7/26/2025 2:52 PM, Mr Flibble wrote:
On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:
On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
In comp.theory olcott <polcott333@gmail.com> wrote:
The error of all of the halting problem proofs is that they require a
Turing machine halt decider to report on the behavior of a directly
executed Turing machine.
It is common knowledge that no Turing machine decider can take another
directly executing Turing machine as an input, thus the above
requirement is not precisely correct.
When we correct the error of this incorrect requirement it becomes a
Turing machine decider indirectly reports on the behavior of a
directly executing Turing machine through the proxy of a finite string
description of this machine.
Now I have proven and corrected the error of all of the halting
problem proofs.
No you haven't, the subject matter is too far beyond your intellectual
capacity.
It only seems to you that I lack understanding because you are so sure
that I must be wrong that you make sure to totally ignore the subtle
nuances of meaning that proves I am correct.
No Turing machine based (at least partial) halt decider can possibly
*directly* report on the behavior of any directly executing Turing
machine. The best that any of them can possibly do is indirectly report
on this behavior through the proxy of a finite string machine
description.
Partial decidability is not a hard problem.
/Flibble
My point is that all of the halting problem proofs
are wrong when they require a Turing machine decider
H to report on the behavior of machine M on input i
because machine M is not in the domain of any Turing
machine decider. Only finite strings such as ⟨M⟩ the
Turing machine description of machine M are its
domain.
Definition of Turing Machine Ĥ
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
if Ĥ applied to ⟨Ĥ⟩ halts, and // incorrect requirement
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
if Ĥ applied to ⟨Ĥ⟩ does not halt. // incorrect requirement
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
(d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
(e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...
The fact that the correctly simulated input
specifies recursive simulation prevents the
simulated ⟨Ĥ⟩ from ever reaching its simulated
final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.
This is not contradicted by the fact that
Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
the domain of every Turing machine computed function.
In the atypical case where the behavior of the simulation
of an input to a potential halt decider disagrees with the
behavior of the direct execution of the underlying machine
(because this input calls this same simulating decider) it
is the behavior of the input that rules because deciders
compute the mapping for their inputs.
On 7/26/2025 6:18 PM, Richard Damon wrote:
On 7/26/25 3:58 PM, olcott wrote:
On 7/26/2025 2:52 PM, Mr Flibble wrote:
On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:
On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
In comp.theory olcott <polcott333@gmail.com> wrote:
The error of all of the halting problem proofs is that they require a
Turing machine halt decider to report on the behavior of a directly
executed Turing machine.
It is common knowledge that no Turing machine decider can take another
directly executing Turing machine as an input, thus the above
requirement is not precisely correct.
When we correct the error of this incorrect requirement it becomes a
Turing machine decider indirectly reports on the behavior of a
directly executing Turing machine through the proxy of a finite string
description of this machine.
Now I have proven and corrected the error of all of the halting
problem proofs.
No you haven't, the subject matter is too far beyond your intellectual
capacity.
It only seems to you that I lack understanding because you are so sure
that I must be wrong that you make sure to totally ignore the subtle
nuances of meaning that proves I am correct.
No Turing machine based (at least partial) halt decider can possibly
*directly* report on the behavior of any directly executing Turing
machine. The best that any of them can possibly do is indirectly report
on this behavior through the proxy of a finite string machine
description.
Partial decidability is not a hard problem.
/Flibble
My point is that all of the halting problem proofs
are wrong when they require a Turing machine decider
H to report on the behavior of machine M on input i
because machine M is not in the domain of any Turing
machine decider. Only finite strings such as ⟨M⟩ the
Turing machine description of machine M are its
domain.
No, you just prove that you are too stupid to understand how
representations work.
No, it is that you are too stupid to understand WHY
they don't always work.
On 7/26/25 7:08 PM, olcott wrote:
On 7/26/2025 5:49 PM, olcott wrote:
On 7/26/2025 2:58 PM, olcott wrote:
On 7/26/2025 2:52 PM, Mr Flibble wrote:
On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:
On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
In comp.theory olcott <polcott333@gmail.com> wrote:
The error of all of the halting problem proofs is that they require a
Turing machine halt decider to report on the behavior of a directly
executed Turing machine.
It is common knowledge that no Turing machine decider can take another
directly executing Turing machine as an input, thus the above
requirement is not precisely correct.
When we correct the error of this incorrect requirement it becomes a
Turing machine decider indirectly reports on the behavior of a
directly executing Turing machine through the proxy of a finite string
description of this machine.
Now I have proven and corrected the error of all of the halting
problem proofs.
No you haven't, the subject matter is too far beyond your intellectual
capacity.
It only seems to you that I lack understanding because you are so sure
that I must be wrong that you make sure to totally ignore the subtle
nuances of meaning that proves I am correct.
No Turing machine based (at least partial) halt decider can possibly
*directly* report on the behavior of any directly executing Turing
machine. The best that any of them can possibly do is indirectly report
on this behavior through the proxy of a finite string machine
description.
Partial decidability is not a hard problem.
/Flibble
My point is that all of the halting problem proofs
are wrong when they require a Turing machine decider
H to report on the behavior of machine M on input i
because machine M is not in the domain of any Turing
machine decider. Only finite strings such as ⟨M⟩ the
Turing machine description of machine M are its
domain.
Definition of Turing Machine Ĥ
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
if Ĥ applied to ⟨Ĥ⟩ halts, and // incorrect requirement
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
if Ĥ applied to ⟨Ĥ⟩ does not halt. // incorrect requirement
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
(d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
(e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...
The fact that the correctly simulated input
specifies recursive simulation prevents the
simulated ⟨Ĥ⟩ from ever reaching its simulated
final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.
This is not contradicted by the fact that
Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
the domain of every Turing machine computed function.
In the atypical case where the behavior of the simulation
of an input to a potential halt decider disagrees with the
behavior of the direct execution of the underlying machine
(because this input calls this same simulating decider) it
is the behavior of the input that rules because deciders
compute the mapping for their inputs.
Nope, just more of your lies.
The behavior of an input to a halt decider is DEFINED in all cases to be
the behavior of the machine the input represents,
On 7/26/2025 6:35 PM, Richard Damon wrote:
On 7/26/25 7:08 PM, olcott wrote:
On 7/26/2025 5:49 PM, olcott wrote:
On 7/26/2025 2:58 PM, olcott wrote:
On 7/26/2025 2:52 PM, Mr Flibble wrote:
On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:
On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
In comp.theory olcott <polcott333@gmail.com> wrote:
The error of all of the halting problem proofs is that they require a
Turing machine halt decider to report on the behavior of a directly
executed Turing machine.
It is common knowledge that no Turing machine decider can take another
directly executing Turing machine as an input, thus the above
requirement is not precisely correct.
When we correct the error of this incorrect requirement it becomes a
Turing machine decider indirectly reports on the behavior of a
directly executing Turing machine through the proxy of a finite string
description of this machine.
Now I have proven and corrected the error of all of the halting
problem proofs.
No you haven't, the subject matter is too far beyond your intellectual
capacity.
It only seems to you that I lack understanding because you are so sure
that I must be wrong that you make sure to totally ignore the subtle
nuances of meaning that proves I am correct.
No Turing machine based (at least partial) halt decider can possibly
*directly* report on the behavior of any directly executing Turing
machine. The best that any of them can possibly do is indirectly report
on this behavior through the proxy of a finite string machine
description.
Partial decidability is not a hard problem.
/Flibble
My point is that all of the halting problem proofs
are wrong when they require a Turing machine decider
H to report on the behavior of machine M on input i
because machine M is not in the domain of any Turing
machine decider. Only finite strings such as ⟨M⟩ the
Turing machine description of machine M are its
domain.
Definition of Turing Machine Ĥ
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
if Ĥ applied to ⟨Ĥ⟩ halts, and // incorrect requirement
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
if Ĥ applied to ⟨Ĥ⟩ does not halt. // incorrect requirement
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
(d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
(e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...
The fact that the correctly simulated input
specifies recursive simulation prevents the
simulated ⟨Ĥ⟩ from ever reaching its simulated
final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.
This is not contradicted by the fact that
Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
the domain of every Turing machine computed function.
In the atypical case where the behavior of the simulation
of an input to a potential halt decider disagrees with the
behavior of the direct execution of the underlying machine
(because this input calls this same simulating decider) it
is the behavior of the input that rules because deciders
compute the mapping for their inputs.
Nope, just more of your lies.
The behavior of an input to a halt decider is DEFINED in all cases to
be the behavior of the machine the input represents,
Yet I have conclusively proven otherwise and
you are too stupid to understand the proof.
You are so stupid that you think you can get
away with disagreeing with the x86 language.
_DDD()
[00002192] 55 push ebp
[00002193] 8bec mov ebp,esp
[00002195] 6892210000 push 00002192 // push DDD
[0000219a] e833f4ffff call 000015d2 // call HHH
[0000219f] 83c404 add esp,+04
[000021a2] 5d pop ebp
[000021a3] c3 ret
Size in bytes:(0018) [000021a3]
DDD simulated by HHH according to the rules of the
x86 language does not fucking halt you fucking moron.
If any definition says otherwise then this definition
is fucked up.
On 7/26/25 7:43 PM, olcott wrote:
On 7/26/2025 6:35 PM, Richard Damon wrote:
On 7/26/25 7:08 PM, olcott wrote:
On 7/26/2025 5:49 PM, olcott wrote:
On 7/26/2025 2:58 PM, olcott wrote:
On 7/26/2025 2:52 PM, Mr Flibble wrote:
On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:
On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
In comp.theory olcott <polcott333@gmail.com> wrote:
The error of all of the halting problem proofs is that they require a
Turing machine halt decider to report on the behavior of a directly
executed Turing machine.
It is common knowledge that no Turing machine decider can take another
directly executing Turing machine as an input, thus the above
requirement is not precisely correct.
When we correct the error of this incorrect requirement it becomes a
Turing machine decider indirectly reports on the behavior of a
directly executing Turing machine through the proxy of a finite string
description of this machine.
Now I have proven and corrected the error of all of the halting
problem proofs.
No you haven't, the subject matter is too far beyond your intellectual
capacity.
It only seems to you that I lack understanding because you are so sure
that I must be wrong that you make sure to totally ignore the subtle
nuances of meaning that proves I am correct.
No Turing machine based (at least partial) halt decider can possibly
*directly* report on the behavior of any directly executing Turing
machine. The best that any of them can possibly do is indirectly report
on this behavior through the proxy of a finite string machine
description.
Partial decidability is not a hard problem.
/Flibble
My point is that all of the halting problem proofs
are wrong when they require a Turing machine decider
H to report on the behavior of machine M on input i
because machine M is not in the domain of any Turing
machine decider. Only finite strings such as ⟨M⟩ the
Turing machine description of machine M are its
domain.
Definition of Turing Machine Ĥ
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
if Ĥ applied to ⟨Ĥ⟩ halts, and // incorrect requirement
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
if Ĥ applied to ⟨Ĥ⟩ does not halt. // incorrect requirement
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
(d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
(e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...
The fact that the correctly simulated input
specifies recursive simulation prevents the
simulated ⟨Ĥ⟩ from ever reaching its simulated
final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.
This is not contradicted by the fact that
Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
the domain of every Turing machine computed function.
In the atypical case where the behavior of the simulation
of an input to a potential halt decider disagrees with the
behavior of the direct execution of the underlying machine
(because this input calls this same simulating decider) it
is the behavior of the input that rules because deciders
compute the mapping for their inputs.
Nope, just more of your lies.
The behavior of an input to a halt decider is DEFINED in all cases to
be the behavior of the machine the input represents,
Yet I have conclusively proven otherwise and
you are too stupid to understand the proof.
No, because your proof needs to call different inputs the same or partial simulation to be correct.
On 7/26/2025 8:30 PM, Richard Damon wrote:
On 7/26/25 7:43 PM, olcott wrote:
On 7/26/2025 6:35 PM, Richard Damon wrote:
On 7/26/25 7:08 PM, olcott wrote:
On 7/26/2025 5:49 PM, olcott wrote:
On 7/26/2025 2:58 PM, olcott wrote:
On 7/26/2025 2:52 PM, Mr Flibble wrote:
On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:
On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
In comp.theory olcott <polcott333@gmail.com> wrote:
The error of all of the halting problem proofs is that they
require a
Turing machine halt decider to report on the behavior of a
directly
executed Turing machine.
It is common knowledge that no Turing machine decider can take
another
directly executing Turing machine as an input, thus the above
requirement is not precisely correct.
When we correct the error of this incorrect requirement it
becomes a
Turing machine decider indirectly reports on the behavior of a
directly executing Turing machine through the proxy of a finite string
description of this machine.
Now I have proven and corrected the error of all of the halting
problem proofs.
No you haven't, the subject matter is too far beyond your intellectual
capacity.
It only seems to you that I lack understanding because you are
so sure
that I must be wrong that you make sure to totally ignore the
subtle
nuances of meaning that proves I am correct.
No Turing machine based (at least partial) halt decider can possibly
*directly* report on the behavior of any directly executing Turing
machine. The best that any of them can possibly do is indirectly report
on this behavior through the proxy of a finite string machine description.
Partial decidability is not a hard problem.
/Flibble
My point is that all of the halting problem proofs
are wrong when they require a Turing machine decider
H to report on the behavior of machine M on input i
because machine M is not in the domain of any Turing
machine decider. Only finite strings such as ⟨M⟩ the
Turing machine description of machine M are its
domain.
Definition of Turing Machine Ĥ
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
if Ĥ applied to ⟨Ĥ⟩ halts, and // incorrect requirement
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
if Ĥ applied to ⟨Ĥ⟩ does not halt. // incorrect requirement
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
(d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
(e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...
The fact that the correctly simulated input
specifies recursive simulation prevents the
simulated ⟨Ĥ⟩ from ever reaching its simulated
final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.
This is not contradicted by the fact that
Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
the domain of every Turing machine computed function.
In the atypical case where the behavior of the simulation
of an input to a potential halt decider disagrees with the
behavior of the direct execution of the underlying machine
(because this input calls this same simulating decider) it
is the behavior of the input that rules because deciders
compute the mapping for their inputs.
Nope, just more of your lies.
The behavior of an input to a halt decider is DEFINED in all cases to be the behavior of the machine the input represents,
Yet I have conclusively proven otherwise and
you are too stupid to understand the proof.
No, because your proof needs to call different inputs the same or partial simulation to be correct.
When HHH(DDD) simulates DDD it also simulates itself
simulating DDD because DDD calls HHH(DDD).
When HHH1(DDD) simulates DDD DOES NOT simulate itself
simulating DDD because DDD DOES NOT CALL HHH1(DDD).
For three fucking years everyone here pretended that
they could NOT fucking see that.
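A small call-based C sketch of just the structural asymmetry asserted here; it is not olcott's actual HHH/HHH1 (those interpret x86 code), and the re-entry flags are an invented stand-in for "simulating itself": because DDD calls HHH and never HHH1, an analysis run by HHH re-enters HHH, while HHH1 itself is never re-entered (the HHH that DDD calls still is).

#include <stdio.h>

typedef void (*ptr)(void);

static int inside_HHH  = 0;
static int inside_HHH1 = 0;

int HHH(ptr P)
{
    if (inside_HHH) {                 /* re-entered from within its own analysis */
        printf("HHH re-entered while analyzing its input\n");
        return 0;
    }
    inside_HHH = 1;
    P();                              /* the "simulation", modeled by a call */
    inside_HHH = 0;
    return 1;
}

int HHH1(ptr P)
{
    if (inside_HHH1) {                /* never reached for DDD               */
        printf("HHH1 re-entered\n");
        return 0;
    }
    inside_HHH1 = 1;
    P();
    inside_HHH1 = 0;
    return 1;
}

void DDD(void)
{
    HHH(DDD);                         /* DDD calls HHH, never HHH1           */
    return;
}

int main(void)
{
    printf("HHH1(DDD) = %d\n", HHH1(DDD)); /* HHH1 itself is never re-entered */
    printf("HHH(DDD)  = %d\n", HHH(DDD));  /* HHH is re-entered by its own analysis */
    return 0;
}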
On 7/26/25 10:43 PM, olcott wrote:
When HHH(DDD) simulates DDD it also simulates itself
simulating DDD because DDD calls HHH(DDD).
But can only do that if HHH is part of its input, or it is not
simulating its input.
And, it FAILS at simulating itself, as it concludes that HHH(DDD) will
never return, when it does.
On 7/27/2025 6:11 AM, Richard Damon wrote:
On 7/26/25 10:43 PM, olcott wrote:
When HHH(DDD) simulates DDD it also simulates itself
simulating DDD because DDD calls HHH(DDD).
But can only do that if HHH is part of its input, or it is not
simulating its input.
And, it FAILS at simulating itself, as it concludes that HHH(DDD) will
never return, when it does.
This ChatGPT analysis of its input below
correctly derives both of our views. I did
not bias this analysis by telling ChatGPT
what I expect to see.
typedef void (*ptr)();
int HHH(ptr P);

void DDD()
{
  HHH(DDD);
  return;
}

int main()
{
  HHH(DDD);
  DDD();
}
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) It detects a non-terminating behavior pattern then it aborts its simulation and returns 0,
(b) Its simulated input reaches its simulated "return" statement then it returns 1.
https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c
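To make the (a)/(b) specification above concrete, here is a toy call-based model in C. It is emphatically not olcott's actual HHH, which interprets x86 machine code; in this sketch the "non-terminating behavior pattern" is simply HHH being re-invoked while it is already analyzing an input, and the abort is modeled with setjmp/longjmp, both of which are assumptions made only for illustration.

#include <stdio.h>
#include <setjmp.h>

typedef void (*ptr)(void);

static jmp_buf abort_ctx;
static int analyzing = 0;

int HHH(ptr P)
{
    if (analyzing) {
        /* The analysand has called HHH again from inside the (modeled)
           simulation: treat this as the non-terminating pattern and
           abort the whole analysis. */
        longjmp(abort_ctx, 1);
    }
    analyzing = 1;
    if (setjmp(abort_ctx) != 0) {
        analyzing = 0;
        return 0;        /* case (a): aborted, reported as non-terminating */
    }
    P();                 /* the "simulation", modeled by a direct call     */
    analyzing = 0;
    return 1;            /* case (b): the simulated input reached "return" */
}

void DDD(void)
{
    HHH(DDD);
    return;
}

int main(void)
{
    printf("HHH(DDD) = %d\n", HHH(DDD)); /* prints 0 in this toy model      */
    DDD();                               /* direct call: the HHH inside it  */
    printf("DDD() halted\n");            /* returns 0, so DDD itself halts  */
    return 0;
}

Run as written, the first call reports 0 while the direct call to DDD() returns, which is exactly the divergence the two sides of this thread are arguing about; the sketch takes no position on which answer is the right one.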
On 7/27/25 9:50 AM, olcott wrote:
On 7/27/2025 6:11 AM, Richard Damon wrote:
On 7/26/25 10:43 PM, olcott wrote:
When HHH(DDD) simulates DDD it also simulates itself
simulating DDD because DDD calls HHH(DDD).
But can only do that if HHH is part of its input, or it is not
simulating its input.
And, it FAILS at simulating itself, as it concludes that HHH(DDD)
will never return, when it does.
This ChatGPT analysis of its input below
correctly derives both of our views. I did
not bias this analysis by telling ChatGPT
what I expect to see.
typedef void (*ptr)();
int HHH(ptr P);
void DDD()
{
HHH(DDD);
return;
}
int main()
{
HHH(DDD);
DDD();
}
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) It detects a non-terminating behavior pattern then it aborts its
simulation and returns 0,
(b) Its simulated input reaches its simulated "return" statement then
it returns 1.
https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c
Just proves that you have contaminated the learning with false ideas
about programs.
On 7/27/2025 2:58 PM, Richard Damon wrote:
On 7/27/25 9:50 AM, olcott wrote:
On 7/27/2025 6:11 AM, Richard Damon wrote:
On 7/26/25 10:43 PM, olcott wrote:
When HHH(DDD) simulates DDD it also simulates itself
simulating DDD because DDD calls HHH(DDD).
But can only do that if HHH is part of its input, or it is not
simulating its input.
And, it FAILS at simulating itself, as it concludes that HHH(DDD)
will never return, when it does.
This ChatGPT analysis of its input below
correctly derives both of our views. I did
not bias this analysis by telling ChatGPT
what I expect to see.
typedef void (*ptr)();
int HHH(ptr P);
void DDD()
{
HHH(DDD);
return;
}
int main()
{
HHH(DDD);
DDD();
}
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) It detects a non-terminating behavior pattern then it aborts its
simulation and returns 0,
(b) Its simulated input reaches its simulated "return" statement then
it returns 1.
https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c
Just proves that you have contaminated the learning with false idea
about programs.
I made sure that ChatGPT isolates this conversation
from everything else that I ever said. Besides telling
ChatGPT about the possibility of a simulating termination
analyzer (that I have proved does work on some inputs)
it figured out all the rest on its own without any
prompting from me.
On 7/27/25 4:28 PM, olcott wrote:
On 7/27/2025 2:58 PM, Richard Damon wrote:
On 7/27/25 9:50 AM, olcott wrote:
On 7/27/2025 6:11 AM, Richard Damon wrote:
On 7/26/25 10:43 PM, olcott wrote:
When HHH(DDD) simulates DDD it also simulates itself
simulating DDD because DDD calls HHH(DDD).
But can only do that if HHH is part of its input, or it is not
simulating its input.
And, it FAILS at simulating itself, as it concludes that HHH(DDD)
will never return, when it does.
This ChatGPT analysis of its input below
correctly derives both of our views. I did
not bias this analysis by telling ChatGPT
what I expect to see.
typedef void (*ptr)();
int HHH(ptr P);
void DDD()
{
HHH(DDD);
return;
}
int main()
{
HHH(DDD);
DDD();
}
Simulating Termination Analyzer HHH correctly simulates its input
until:
(a) It detects a non-terminating behavior pattern then it aborts its
simulation and returns 0,
(b) Its simulated input reaches its simulated "return" statement
then it returns 1.
https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c
Just proves that you have contaminated the learning with false idea
about programs.
I made sure that ChatGPT isolates this conversation
from everything else that I ever said. Besides telling
ChatGPT about the possibility of a simulating termination
analyzer (that I have proved does work on some inputs)
it figured out all the rest on its own without any
prompting from me.
You CAN'T totally isolate it. You can tell it to not use what you have
told it previously (which you did not do),
but anything said to the AI has a chance of being recorded and used
for future training.
Just think, you might be the one responsible for providing the lies that
future AIs have decided to accept, ruining the chance of some future
breakthrough.
On 7/27/2025 4:31 PM, Richard Damon wrote:
On 7/27/25 4:28 PM, olcott wrote:
On 7/27/2025 2:58 PM, Richard Damon wrote:
On 7/27/25 9:50 AM, olcott wrote:
On 7/27/2025 6:11 AM, Richard Damon wrote:
On 7/26/25 10:43 PM, olcott wrote:
When HHH(DDD) simulates DDD it also simulates itself
simulating DDD because DDD calls HHH(DDD).
But can only do that if HHH is part of its input, or it is not
simulating its input.
And, it FAILS at simulating itself, as it concludes that HHH(DDD) >>>>>> will never return, when it does.
This ChatGPT analysis of its input below
correctly derives both of our views. I did
not bias this analysis by telling ChatGPT
what I expect to see.
typedef void (*ptr)();
int HHH(ptr P);
void DDD()
{
HHH(DDD);
return;
}
int main()
{
HHH(DDD);
DDD();
}
Simulating Termination Analyzer HHH correctly simulates its input
until:
(a) It detects a non-terminating behavior pattern then it aborts
its simulation and returns 0,
(b) Its simulated input reaches its simulated "return" statement
then it returns 1.
https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c
Just proves that you have contaminated the learning with false idea
about programs.
I made sure that ChatGPT isolates this conversation
from everything else that I ever said. Besides telling
ChatGPT about the possibility of a simulating termination
analyzer (that I have proved does work on some inputs)
it figured out all the rest on its own without any
prompting from me.
You CAN'T totally isolate it. You can tell it to not use what you have
told it previously (which you did not do),
ChatGPT's setting to remember prior conversations is turned off:
My Account > Settings > Personalization > Memory > Reference saved memories
This is important because I need to know the
minimum basis that it needs to understand what
I said so that I can know that I have no gaps
in my reasoning.
but anything said to the AI, has a chance of being recorded and used
for future training.
During periodic updates.
Just think, you might be the one responsible for providing the lies
that future AIs have decided to accept ruining the chance of some
future breakthrough.
The above input that I provided has zero falsehoods.
ChatGPT figured out all of the reasoning from that
basis.
On 7/27/25 5:46 PM, olcott wrote:
On 7/27/2025 4:31 PM, Richard Damon wrote:
On 7/27/25 4:28 PM, olcott wrote:
On 7/27/2025 2:58 PM, Richard Damon wrote:
On 7/27/25 9:50 AM, olcott wrote:
On 7/27/2025 6:11 AM, Richard Damon wrote:
On 7/26/25 10:43 PM, olcott wrote:
When HHH(DDD) simulates DDD it also simulates itself
simulating DDD because DDD calls HHH(DDD).
But can only do that if HHH is part of its input, or it is not
simulating its input.
And, it FAILS at simulating itself, as it concludes that HHH(DDD) >>>>>>> will never return, when it does.
This ChatGPT analysis of its input below
correctly derives both of our views. I did
not bias this analysis by telling ChatGPT
what I expect to see.
typedef void (*ptr)();
int HHH(ptr P);
void DDD()
{
HHH(DDD);
return;
}
int main()
{
HHH(DDD);
DDD();
}
Simulating Termination Analyzer HHH correctly simulates its input >>>>>> until:
(a) It detects a non-terminating behavior pattern then it aborts
its simulation and returns 0,
(b) Its simulated input reaches its simulated "return" statement
then it returns 1.
https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c
Just proves that you have contaminated the learning with false idea >>>>> about programs.
I made sure that ChatGPT isolates this conversation
from everything else that I ever said. Besides telling
ChatGPT about the possibility of a simulating termination
analyzer (that I have proved does work on some inputs)
it figured out all the rest on its own without any
prompting from me.
You CAN'T totally isolate it. You can tell it to not use what you
have told it previously (which you did not do),
ChatGPT remember prior conversations
is turned off
My Account
Settings
Personalization
Memory
Reference saved memories
This is important because I need to know the
minimum basis that it needs to understand what
I said so that I can know that I have no gaps
in my reasoning.
But that setting isn't perfect.
but anything said to the AI, has a chance of being recorded and used
for future training.
During periodic updates.
And you have been posting your lies on Usenet, which is a source of training, for a while.
Just think, you might be the one responsible for providing the lies
that future AIs have decided to accept ruining the chance of some
future breakthrough.
The above input that I provided has zero falsehoods.
ChatGPT figured out all of the reasoning from that
basis.
But not full definitions, like the fact that a given program on a given input will always do the same thing.
On 7/27/2025 7:07 PM, Richard Damon wrote:
On 7/27/25 5:46 PM, olcott wrote:
On 7/27/2025 4:31 PM, Richard Damon wrote:
On 7/27/25 4:28 PM, olcott wrote:
On 7/27/2025 2:58 PM, Richard Damon wrote:
On 7/27/25 9:50 AM, olcott wrote:
On 7/27/2025 6:11 AM, Richard Damon wrote:
On 7/26/25 10:43 PM, olcott wrote:
When HHH(DDD) simulates DDD it also simulates itself
simulating DDD because DDD calls HHH(DDD).
But can only do that if HHH is part of its input, or it is not >>>>>>>> simulating its input.
And, it FAILS at simulating itself, as it concludes that
HHH(DDD) will never return, when it does.
This ChatGPT analysis of its input below
correctly derives both of our views. I did
not bias this analysis by telling ChatGPT
what I expect to see.
typedef void (*ptr)();
int HHH(ptr P);
void DDD()
{
HHH(DDD);
return;
}
int main()
{
HHH(DDD);
DDD();
}
Simulating Termination Analyzer HHH correctly simulates its input >>>>>>> until:
(a) It detects a non-terminating behavior pattern then it aborts >>>>>>> its simulation and returns 0,
(b) Its simulated input reaches its simulated "return" statement >>>>>>> then it returns 1.
https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c
Just proves that you have contaminated the learning with false
idea about programs.
I made sure that ChatGPT isolates this conversation
from everything else that I ever said. Besides telling
ChatGPT about the possibility of a simulating termination
analyzer (that I have proved does work on some inputs)
it figured out all the rest on its own without any
prompting from me.
You CAN'T totally isolate it. You can tell it to not use what you
have told it previously (which you did not do),
ChatGPT remember prior conversations
is turned off
My Account
Settings
Personalization
Memory
Reference saved memories
This is important because I need to know the
minimum basis that it needs to understand what
I said so that I can know that I have no gaps
in my reasoning.
But that setting isn't perfect.
but anything said to the AI, has a chance of being recorded and used
for future training.
During periodic updates.
And you have been posting your lies on usenet, which is a source of
training, for awhile.
Just think, you might be the one responsible for providing the lies
that future AIs have decided to accept ruining the chance of some
future breakthrough.
The above input that I provided has zero falsehoods.
ChatGPT figured out all of the reasoning from that
basis.
But. not full definitions, like the fact that a given program on a
given input will always do the same thing.
When DDD is emulated by HHH it must emulate
DDD calling itself in recursive emulation.
When DDD is emulated by HHH1 it need not emulate
itself at all.
Everyone here has pretended to be too fucking stupid
to see that for three fucking years thus providing
sufficient evidence that they are all damned liars.
On 7/27/2025 7:07 PM, Richard Damon wrote:
On 7/27/25 5:46 PM, olcott wrote:
On 7/27/2025 4:31 PM, Richard Damon wrote:
On 7/27/25 4:28 PM, olcott wrote:
On 7/27/2025 2:58 PM, Richard Damon wrote:
On 7/27/25 9:50 AM, olcott wrote:
On 7/27/2025 6:11 AM, Richard Damon wrote:
On 7/26/25 10:43 PM, olcott wrote:
When HHH(DDD) simulates DDD it also simulates itself simulating DDD because DDD calls HHH(DDD).
But can only do that if HHH is part of its input, or it is not simulating its input.
And, it FAILS at simulating itself, as it concludes that HHH(DDD)
will never return, when it does.
This ChatGPT analysis of its input below
correctly derives both of our views. I did
not bias this analysis by telling ChatGPT
what I expect to see.
typedef void (*ptr)();
int HHH(ptr P);
void DDD()
{
HHH(DDD);
return;
}
int main()
{
HHH(DDD);
DDD();
}
Simulating Termination Analyzer HHH correctly simulates its input
until:
(a) It detects a non-terminating behavior pattern then it aborts its simulation and returns 0,
(b) Its simulated input reaches its simulated "return" statement then it returns 1.
https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c
Just proves that you have contaminated the learning with false idea
about programs.
I made sure that ChatGPT isolates this conversation
from everything else that I ever said. Besides telling
ChatGPT about the possibility of a simulating termination
analyzer (that I have proved does work on some inputs)
it figured out all the rest on its own without any
prompting from me.
You CAN'T totally isolate it. You can tell it to not use what you
have told it previously (which you did not do),
ChatGPT remember prior conversations
is turned off
My Account
Settings
Personalization
Memory
Reference saved memories
This is important because I need to know the
minimum basis that it needs to understand what
I said so that I can know that I have no gaps
in my reasoning.
But that setting isn't perfect.
but anything said to the AI, has a chance of being recorded and used for future training.
During periodic updates.
And you have been posting your lies on usenet, which is a source of training, for awhile.
Just think, you might be the one responsible for providing the lies that future AIs have decided to accept ruining the chance of some future breakthrough.
The above input that I provided has zero falsehoods.
ChatGPT figured out all of the reasoning from that
basis.
But. not full definitions, like the fact that a given program on a given input will always do the same thing.
When DDD is emulated by HHH it must emulate
DDD calling itself in recursive emulation.
When DDD is emulated by HHH1 it need not emulate
itself at all.
Everyone here has pretended to be to fucking stupid
to see that for three fucking years thus providing
sufficient evidence that they are all damned liars.
On 7/27/25 8:20 PM, olcott wrote:
By itself I mean the exact same machine code bytes
On 7/27/2025 7:07 PM, Richard Damon wrote:
On 7/27/25 5:46 PM, olcott wrote:
On 7/27/2025 4:31 PM, Richard Damon wrote:
On 7/27/25 4:28 PM, olcott wrote:
On 7/27/2025 2:58 PM, Richard Damon wrote:
On 7/27/25 9:50 AM, olcott wrote:
On 7/27/2025 6:11 AM, Richard Damon wrote:
On 7/26/25 10:43 PM, olcott wrote:
When HHH(DDD) simulates DDD it also simulates itself
simulating DDD because DDD calls HHH(DDD).
But can only do that if HHH is part of its input, or it is not >>>>>>>>> simulating its input.
And, it FAILS at simulating itself, as it concludes that
HHH(DDD) will never return, when it does.
This ChatGPT analysis of its input below
correctly derives both of our views. I did
not bias this analysis by telling ChatGPT
what I expect to see.
typedef void (*ptr)();
int HHH(ptr P);
void DDD()
{
HHH(DDD);
return;
}
int main()
{
HHH(DDD);
DDD();
}
Simulating Termination Analyzer HHH correctly simulates its
input until:
(a) It detects a non-terminating behavior pattern then it aborts >>>>>>>> its simulation and returns 0,
(b) Its simulated input reaches its simulated "return" statement >>>>>>>> then it returns 1.
https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c >>>>>>>>
Just proves that you have contaminated the learning with false
idea about programs.
I made sure that ChatGPT isolates this conversation
from everything else that I ever said. Besides telling
ChatGPT about the possibility of a simulating termination
analyzer (that I have proved does work on some inputs)
it figured out all the rest on its own without any
prompting from me.
You CAN'T totally isolate it. You can tell it to not use what you
have told it previously (which you did not do),
ChatGPT remember prior conversations
is turned off
My Account
Settings
Personalization
Memory
Reference saved memories
This is important because I need to know the
minimum basis that it needs to understand what
I said so that I can know that I have no gaps
in my reasoning.
But that setting isn't perfect.
but anything said to the AI, has a chance of being recorded and
used for future training.
During periodic updates.
And you have been posting your lies on usenet, which is a source of
training, for awhile.
Just think, you might be the one responsible for providing the lies >>>>> that future AIs have decided to accept ruining the chance of some
future breakthrough.
The above input that I provided has zero falsehoods.
ChatGPT figured out all of the reasoning from that
basis.
But. not full definitions, like the fact that a given program on a
given input will always do the same thing.
When DDD is emulated by HHH it must emulate
DDD calling itself in recursive emulation.
When DDD is emulated by HHH1 it need not emulate
itself at all.
But "itself" doesn't matter to x86 instructions,
On 7/26/2025 6:35 PM, Richard Damon wrote:
On 7/26/25 7:08 PM, olcott wrote:
On 7/26/2025 5:49 PM, olcott wrote:
On 7/26/2025 2:58 PM, olcott wrote:
On 7/26/2025 2:52 PM, Mr Flibble wrote:
On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:
On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
In comp.theory olcott <polcott333@gmail.com> wrote:It only seems to you that I lack understanding because you are so >>>>>>> sure
The error of all of the halting problem proofs is that they >>>>>>>>> require a
Turing machine halt decider to report on the behavior of a
directly
executed Turing machine.
It is common knowledge that no Turing machine decider can take >>>>>>>>> another
directly executing Turing machine as an input, thus the above >>>>>>>>> requirement is not precisely correct.
When we correct the error of this incorrect requirement it
becomes a
Turing machine decider indirectly reports on the behavior of a >>>>>>>>> directly executing Turing machine through the proxy of a finite >>>>>>>>> string
description of this machine.
Now I have proven and corrected the error of all of the halting >>>>>>>>> problem proofs.
No you haven't, the subject matter is too far beyond your
intellectual
capacity.
that I must be wrong that you make sure to totally ignore the subtle >>>>>>> nuances of meaning that proves I am correct.
No Turing machine based (at least partial) halt decider can possibly >>>>>>> *directly* report on the behavior of any directly executing Turing >>>>>>> machine. The best that any of them can possibly do is indirectly >>>>>>> report
on this behavior through the proxy of a finite string machine
description.
Partial decidability is not a hard problem.
/Flibble
My point is that all of the halting problem proofs
are wrong when they require a Turing machine decider
H to report on the behavior of machine M on input i
because machine M is not in the domain of any Turing
machine decider. Only finite strings such as ⟨M⟩ the
Turing machine description of machine M are its
domain.
Definition of Turing Machine Ĥ
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
if Ĥ applied to ⟨Ĥ⟩ halts, and // incorrect requirement
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
if Ĥ applied to ⟨Ĥ⟩ does not halt. // incorrect requirement
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
(d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
(e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...
The fact that the correctly simulated input
specifies recursive simulation prevents the
simulated ⟨Ĥ⟩ from ever reaching its simulated
final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.
This is not contradicted by the fact that
Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
the domain of every Turing machine computed function.
In the atypical case where the behavior of the simulation
of an input to a potential halt decider disagrees with the
behavior of the direct execution of the underlying machine
(because this input calls this same simulating decider) it
is the behavior of the input that rules because deciders
compute the mapping for their inputs.
Nope, just more of your lies.
The behavior of an input to a halt decider is DEFINED in all cases to
be the behavior of the machine the input represents,
Yet I have conclusively proven otherwise and
you are too stupid to understand the proof.
You are so stupid that you think you can get
away with disagreeing with the x86 language.
_DDD()
[00002192] 55 push ebp
[00002193] 8bec mov ebp,esp
[00002195] 6892210000 push 00002192 // push DDD
[0000219a] e833f4ffff call 000015d2 // call HHH
[0000219f] 83c404 add esp,+04
[000021a2] 5d pop ebp
[000021a3] c3 ret
Size in bytes:(0018) [000021a3]
DDD simulated by HHH according to the rules of the
x86 language does not fucking halt you fucking moron.
If any definition says otherwise then this definition
is fucked up.
On 7/26/2025 5:49 PM, olcott wrote:
On 7/26/2025 2:58 PM, olcott wrote:
On 7/26/2025 2:52 PM, Mr Flibble wrote:
On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:
On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
In comp.theory olcott <polcott333@gmail.com> wrote:It only seems to you that I lack understanding because you are so sure >>>>> that I must be wrong that you make sure to totally ignore the subtle >>>>> nuances of meaning that proves I am correct.
The error of all of the halting problem proofs is that they
require a
Turing machine halt decider to report on the behavior of a directly >>>>>>> executed Turing machine.
It is common knowledge that no Turing machine decider can take
another
directly executing Turing machine as an input, thus the above
requirement is not precisely correct.
When we correct the error of this incorrect requirement it becomes a >>>>>>> Turing machine decider indirectly reports on the behavior of a
directly executing Turing machine through the proxy of a finite >>>>>>> string
description of this machine.
Now I have proven and corrected the error of all of the halting
problem proofs.
No you haven't, the subject matter is too far beyond your
intellectual
capacity.
No Turing machine based (at least partial) halt decider can possibly >>>>> *directly* report on the behavior of any directly executing Turing
machine. The best that any of them can possibly do is indirectly
report
on this behavior through the proxy of a finite string machine
description.
Partial decidability is not a hard problem.
/Flibble
My point is that all of the halting problem proofs
are wrong when they require a Turing machine decider
H to report on the behavior of machine M on input i
because machine M is not in the domain of any Turing
machine decider. Only finite strings such as ⟨M⟩ the
Turing machine description of machine M are its
domain.
Definition of Turing Machine Ĥ
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
if Ĥ applied to ⟨Ĥ⟩ halts, and // incorrect requirement
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
if Ĥ applied to ⟨Ĥ⟩ does not halt. // incorrect requirement >>
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
(d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
(e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...
The fact that the correctly simulated input
specifies recursive simulation prevents the
simulated ⟨Ĥ⟩ from ever reaching its simulated
final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.
This is not contradicted by the fact that
Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
the domain of every Turing machine computed function.
In the atypical case where the behavior of the simulation
of an input to a potential halt decider disagrees with the
behavior of the direct execution of the underlying machine
(because this input calls this same simulating decider) it
is the behavior of the input that rules because deciders
compute the mapping *FROM* their inputs.
On 7/27/2025 9:48 PM, Richard Damon wrote:
On 7/27/25 8:20 PM, olcott wrote:
By itself I mean the exact same machine code bytes
On 7/27/2025 7:07 PM, Richard Damon wrote:
On 7/27/25 5:46 PM, olcott wrote:
On 7/27/2025 4:31 PM, Richard Damon wrote:
On 7/27/25 4:28 PM, olcott wrote:
On 7/27/2025 2:58 PM, Richard Damon wrote:
On 7/27/25 9:50 AM, olcott wrote:
On 7/27/2025 6:11 AM, Richard Damon wrote:
On 7/26/25 10:43 PM, olcott wrote:
When HHH(DDD) simulates DDD it also simulates itself
simulating DDD because DDD calls HHH(DDD).
But can only do that if HHH is part of its input, or it is not >>>>>>>>>> simulating its input.
And, it FAILS at simulating itself, as it concludes that
HHH(DDD) will never return, when it does.
This ChatGPT analysis of its input below
correctly derives both of our views. I did
not bias this analysis by telling ChatGPT
what I expect to see.
typedef void (*ptr)();
int HHH(ptr P);
void DDD()
{
HHH(DDD);
return;
}
int main()
{
HHH(DDD);
DDD();
}
Simulating Termination Analyzer HHH correctly simulates its >>>>>>>>> input until:
(a) It detects a non-terminating behavior pattern then it
aborts its simulation and returns 0,
(b) Its simulated input reaches its simulated "return"
statement then it returns 1.
https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c >>>>>>>>>
Just proves that you have contaminated the learning with false >>>>>>>> idea about programs.
I made sure that ChatGPT isolates this conversation
from everything else that I ever said. Besides telling
ChatGPT about the possibility of a simulating termination
analyzer (that I have proved does work on some inputs)
it figured out all the rest on its own without any
prompting from me.
You CAN'T totally isolate it. You can tell it to not use what you >>>>>> have told it previously (which you did not do),
ChatGPT remember prior conversations
is turned off
My Account
Settings
Personalization
Memory
Reference saved memories
This is important because I need to know the
minimum basis that it needs to understand what
I said so that I can know that I have no gaps
in my reasoning.
But that setting isn't perfect.
but anything said to the AI, has a chance of being recorded and
used for future training.
During periodic updates.
And you have been posting your lies on usenet, which is a source of
training, for awhile.
Just think, you might be the one responsible for providing the
lies that future AIs have decided to accept ruining the chance of >>>>>> some future breakthrough.
The above input that I provided has zero falsehoods.
ChatGPT figured out all of the reasoning from that
basis.
But. not full definitions, like the fact that a given program on a
given input will always do the same thing.
When DDD is emulated by HHH it must emulate
DDD calling itself in recursive emulation.
When DDD is emulated by HHH1 it need not emulate
itself at all.
But "itself" doesn't matter to x86 instructions,
at the exact same machine address.
On 7/27/25 10:58 PM, olcott wrote:
On 7/27/2025 9:48 PM, Richard Damon wrote:
On 7/27/25 8:20 PM, olcott wrote:
By itself I mean the exact same machine code bytes
On 7/27/2025 7:07 PM, Richard Damon wrote:
On 7/27/25 5:46 PM, olcott wrote:
On 7/27/2025 4:31 PM, Richard Damon wrote:
On 7/27/25 4:28 PM, olcott wrote:
On 7/27/2025 2:58 PM, Richard Damon wrote:
On 7/27/25 9:50 AM, olcott wrote:
On 7/27/2025 6:11 AM, Richard Damon wrote:
On 7/26/25 10:43 PM, olcott wrote:
When HHH(DDD) simulates DDD it also simulates itself
simulating DDD because DDD calls HHH(DDD).
But can only do that if HHH is part of its input, or it is >>>>>>>>>>> not simulating its input.
And, it FAILS at simulating itself, as it concludes that >>>>>>>>>>> HHH(DDD) will never return, when it does.
This ChatGPT analysis of its input below
correctly derives both of our views. I did
not bias this analysis by telling ChatGPT
what I expect to see.
typedef void (*ptr)();
int HHH(ptr P);
void DDD()
{
HHH(DDD);
return;
}
int main()
{
HHH(DDD);
DDD();
}
Simulating Termination Analyzer HHH correctly simulates its >>>>>>>>>> input until:
(a) It detects a non-terminating behavior pattern then it >>>>>>>>>> aborts its simulation and returns 0,
(b) Its simulated input reaches its simulated "return"
statement then it returns 1.
https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c >>>>>>>>>>
Just proves that you have contaminated the learning with false >>>>>>>>> idea about programs.
I made sure that ChatGPT isolates this conversation
from everything else that I ever said. Besides telling
ChatGPT about the possibility of a simulating termination
analyzer (that I have proved does work on some inputs)
it figured out all the rest on its own without any
prompting from me.
You CAN'T totally isolate it. You can tell it to not use what you >>>>>>> have told it previously (which you did not do),
ChatGPT remember prior conversations
is turned off
My Account
Settings
Personalization
Memory
Reference saved memories
This is important because I need to know the
minimum basis that it needs to understand what
I said so that I can know that I have no gaps
in my reasoning.
But that setting isn't perfect.
but anything said to the AI, has a chance of being recorded and >>>>>>> used for future training.
During periodic updates.
And you have been posting your lies on usenet, which is a source of >>>>> training, for awhile.
Just think, you might be the one responsible for providing the
lies that future AIs have decided to accept ruining the chance of >>>>>>> some future breakthrough.
The above input that I provided has zero falsehoods.
ChatGPT figured out all of the reasoning from that
basis.
But. not full definitions, like the fact that a given program on a
given input will always do the same thing.
When DDD is emulated by HHH it must emulate
DDD calling itself in recursive emulation.
When DDD is emulated by HHH1 it need not emulate
itself at all.
But "itself" doesn't matter to x86 instructions,
at the exact same machine address.
Which doesn't affect the behavior of those bytes.
On 27 Jul 2025 at 01:43, olcott wrote:
On 7/26/2025 6:35 PM, Richard Damon wrote:
The behavior of an input to a halt decider is DEFINED in all cases to
be the behavior of the machine the input represents,
Yet I have conclusively proven otherwise and
you are too stupid to understand the proof.
That was not a proof, but an assumption with a huge mistake.
On 27 Jul 2025 at 01:28, olcott wrote:
On 7/26/2025 5:49 PM, olcott wrote:
On 7/26/2025 2:58 PM, olcott wrote:
On 7/26/2025 2:52 PM, Mr Flibble wrote:
On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:
On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
In comp.theory olcott <polcott333@gmail.com> wrote:It only seems to you that I lack understanding because you are so >>>>>> sure
The error of all of the halting problem proofs is that they
require a
Turing machine halt decider to report on the behavior of a directly >>>>>>>> executed Turing machine.
It is common knowledge that no Turing machine decider can take >>>>>>>> another
directly executing Turing machine as an input, thus the above
requirement is not precisely correct.
When we correct the error of this incorrect requirement it
becomes a
Turing machine decider indirectly reports on the behavior of a >>>>>>>> directly executing Turing machine through the proxy of a finite >>>>>>>> string
description of this machine.
Now I have proven and corrected the error of all of the halting >>>>>>>> problem proofs.
No you haven't, the subject matter is too far beyond your
intellectual
capacity.
that I must be wrong that you make sure to totally ignore the subtle >>>>>> nuances of meaning that proves I am correct.
No Turing machine based (at least partial) halt decider can possibly >>>>>> *directly* report on the behavior of any directly executing Turing >>>>>> machine. The best that any of them can possibly do is indirectly >>>>>> report
on this behavior through the proxy of a finite string machine
description.
Partial decidability is not a hard problem.
/Flibble
My point is that all of the halting problem proofs
are wrong when they require a Turing machine decider
H to report on the behavior of machine M on input i
because machine M is not in the domain of any Turing
machine decider. Only finite strings such as ⟨M⟩ the
Turing machine description of machine M are its
domain.
Definition of Turing Machine Ĥ
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
if Ĥ applied to ⟨Ĥ⟩ halts, and // incorrect requirement
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
if Ĥ applied to ⟨Ĥ⟩ does not halt. // incorrect requirement
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
(d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
(e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...
The fact that the correctly simulated input
specifies recursive simulation prevents the
simulated ⟨Ĥ⟩ from ever reaching its simulated
final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.
This is not contradicted by the fact that
Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
the domain of every Turing machine computed function.
In the atypical case where the behavior of the simulation
of an input to a potential halt decider disagrees with the
behavior of the direct execution of the underlying machine
(because this input calls this same simulating decider) it
is the behavior of the input that rules because deciders
compute the mapping *FROM* their inputs.
But the input specifies halting behaviour,
It never was the actual input that specifies non-halting
behavior.
On 7/28/2025 6:38 AM, Richard Damon wrote:
On 7/27/25 10:58 PM, olcott wrote:
On 7/27/2025 9:48 PM, Richard Damon wrote:
On 7/27/25 8:20 PM, olcott wrote:
By itself I mean the exact same machine code bytes
On 7/27/2025 7:07 PM, Richard Damon wrote:
On 7/27/25 5:46 PM, olcott wrote:
On 7/27/2025 4:31 PM, Richard Damon wrote:
On 7/27/25 4:28 PM, olcott wrote:
On 7/27/2025 2:58 PM, Richard Damon wrote:
On 7/27/25 9:50 AM, olcott wrote:
On 7/27/2025 6:11 AM, Richard Damon wrote:
On 7/26/25 10:43 PM, olcott wrote:
When HHH(DDD) simulates DDD it also simulates itself >>>>>>>>>>>>> simulating DDD because DDD calls HHH(DDD).
But can only do that if HHH is part of its input, or it is >>>>>>>>>>>> not simulating its input.
And, it FAILS at simulating itself, as it concludes that >>>>>>>>>>>> HHH(DDD) will never return, when it does.
This ChatGPT analysis of its input below
correctly derives both of our views. I did
not bias this analysis by telling ChatGPT
what I expect to see.
typedef void (*ptr)();
int HHH(ptr P);
void DDD()
{
HHH(DDD);
return;
}
int main()
{
HHH(DDD);
DDD();
}
Simulating Termination Analyzer HHH correctly simulates its >>>>>>>>>>> input until:
(a) It detects a non-terminating behavior pattern then it >>>>>>>>>>> aborts its simulation and returns 0,
(b) Its simulated input reaches its simulated "return"
statement then it returns 1.
https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c >>>>>>>>>>>
Just proves that you have contaminated the learning with false >>>>>>>>>> idea about programs.
I made sure that ChatGPT isolates this conversation
from everything else that I ever said. Besides telling
ChatGPT about the possibility of a simulating termination
analyzer (that I have proved does work on some inputs)
it figured out all the rest on its own without any
prompting from me.
You CAN'T totally isolate it. You can tell it to not use what >>>>>>>> you have told it previously (which you did not do),
ChatGPT remember prior conversations
is turned off
My Account
Settings
Personalization
Memory
Reference saved memories
This is important because I need to know the
minimum basis that it needs to understand what
I said so that I can know that I have no gaps
in my reasoning.
But that setting isn't perfect.
but anything said to the AI, has a chance of being recorded and >>>>>>>> used for future training.
During periodic updates.
And you have been posting your lies on usenet, which is a source
of training, for awhile.
Just think, you might be the one responsible for providing the >>>>>>>> lies that future AIs have decided to accept ruining the chance >>>>>>>> of some future breakthrough.
The above input that I provided has zero falsehoods.
ChatGPT figured out all of the reasoning from that
basis.
But. not full definitions, like the fact that a given program on a >>>>>> given input will always do the same thing.
When DDD is emulated by HHH it must emulate
DDD calling itself in recursive emulation.
When DDD is emulated by HHH1 it need not emulate
itself at all.
But "itself" doesn't matter to x86 instructions,
at the exact same machine address.
Which doesn't affect the behavior of those bytes.
void DDD()
{
HHH(DDD);
return;
}
That you are too stupid to understand that DDD simulated
by HHH does call HHH in recursive emulation even after
I have provided fully operational code of DDD calling
HHH(DDD) in recursive emulation *IS NOT A REBUTTAL*
https://github.com/plolcott/x86utm/blob/master/Halt7.c
On 7/28/25 8:34 AM, olcott wrote:
void DDD()
{
HHH(DDD);
return;
}
That you are too stupid to understand that DDD simulated
by HHH does call HHH in recursive emulation even after
I have provided fully operational code of DDD calling
HHH(DDD) in recursive emulation *IS NOT A REBUTTAL*
https://github.com/plolcott/x86utm/blob/master/Halt7.c
What you are too stupid to understand is that while the *PROGRAM* HHH,
which does the specific actions it is defined to do, when it simulates the
input that represents the *PROGRAM* DDD (which by definition includes
the code of the HHH that it is built on), that will not reach the final state.
On 7/28/25 7:20 PM, olcott wrote:
On 7/28/2025 5:57 PM, Richard Damon wrote:
On 7/28/25 9:54 AM, olcott wrote:
On 7/28/2025 8:21 AM, joes wrote:
On Mon, 28 Jul 2025 07:11:11 -0500, olcott wrote:
On 7/28/2025 2:30 AM, joes wrote:
On Sun, 27 Jul 2025 21:58:05 -0500, olcott wrote:
On 7/27/2025 9:48 PM, Richard Damon wrote:
On 7/27/25 8:20 PM, olcott wrote:
When DDD is emulated by HHH1 it need not emulate itself at all.
But "itself" doesn't matter to x86 instructions,
By itself I mean the exact same machine code bytes at the exact same machine address.
Yeah, so when you change HHH to abort later, you also change DDD.
HHH is never changed.
It is changed in the hypothetical unaborted simulation. HHH is
reporting
on UTM(HHH', DDD) where HHH' calls UTM(DDD), and not on the halting >>>>> DDD,
and definitely not on HHH(DDD), itself.
All halt deciders are required to predict the behavior
of their input. HHH does correctly predict that DDD correctly
simulated by HHH cannot possibly reach its own simulated
"return" instruction final halt state.
How is it a "correct prediction" if it sees something different than
what that DDD does.
What DDD does is keep calling HHH(DDD) in recursive
simulation until HHH kills this whole process.
But the behavior of the program continues past that (something you don't seem to understand) and that behavior will also have its HHH terminate
the DDD it is simulating and return 0 to DDD and then Halt.
Your problem is you don't understand that the simulating HHH doesn't
define the behavior of DDD, it is the execution of DDD that defines what
a correct simulation of it is.
Remember, to have simulated that DDD, it must have included the code
of the HHH that it was based on, which is the HHH that made the
prediction, and thus returns 0, so DDD will halt.
We are not asking: Does DDD() halt.
That is (as it turns out) an incorrect question.
No, that is EXACTLY the question.
I guess you are just admitting that your whole world is based on LYING
about what things are supposed to be.
Turing machines cannot directly report on the behavior
of other Turing machines; they can at best indirectly
report on the behavior of Turing machines through the
proxy of finite string machine descriptions such as ⟨M⟩.
Right, and HHH was given the equivalent of ⟨M⟩ by being given the code
of *ALL* of DDD
I guess you don't understand that fact, even though you CLAIM the input
is the proper representation of DDD.
Thus the behavior specified by the input finite string
overrules and supersedes the behavior of the direct
execution.
No, it is DEFINED to be the behavior of the direct execution of the
program it represents.
On 7/28/2025 6:49 PM, Richard Damon wrote:
On 7/28/25 7:20 PM, olcott wrote:
On 7/28/2025 5:57 PM, Richard Damon wrote:
On 7/28/25 9:54 AM, olcott wrote:
On 7/28/2025 8:21 AM, joes wrote:
On Mon, 28 Jul 2025 07:11:11 -0500, olcott wrote:
On 7/28/2025 2:30 AM, joes wrote:
On Sun, 27 Jul 2025 21:58:05 -0500, olcott wrote:
On 7/27/2025 9:48 PM, Richard Damon wrote:
On 7/27/25 8:20 PM, olcott wrote:
When DDD is emulated by HHH1 it need not emulate itself at all.
But "itself" doesn't matter to x86 instructions,
By itself I mean the exact same machine code bytes at the exact same machine address.
Yeah, so when you change HHH to abort later, you also change DDD.
HHH is never changed.
It is changed in the hypothetical unaborted simulation. HHH is
reporting
on UTM(HHH', DDD) where HHH' calls UTM(DDD), and not on the
halting DDD,
and definitely not on HHH(DDD), itself.
All halt deciders are required to predict the behavior
of their input. HHH does correctly predict that DDD correctly
simulated by HHH cannot possibly reach its own simulated
"return" instruction final halt state.
How is it a "correct prediction" if it sees something different than
what that DDD does.
What DDD does is keep calling HHH(DDD) in recursive
simulation until HHH kills this whole process.
But the behavior of the program continues past that (something you
don't seem to understand) and that behavior will also have its HHH
terminate the DDD it is simulating and return 0 to DDD and then Halt.
Your problem is you don't understand that the simulating HHH doesn't
define the behavior of DDD, it is the execution of DDD that defines
what a correct simulation of it is.
Remember, to have simulated that DDD, it must have include the code
of the HHH that it was based on, which is the HHH that made the
prediction, and thus returns 0, so DDD will halt.
We are not asking: Does DDD() halt.
That is (as it turns out) an incorrect question.
No, that is EXACTLY the question.
I guess you are just admitting that you whole world is based on LYING
about what things are supposed to be.
Turing machines cannot directly report on the behavior
of other Turing machines they can at best indirectly
report on the behavior of Turing machines through the
proxy of finite string machine descriptions such as ⟨M⟩.
Right, and HHH was given the equivalenet of (M) by being given the
code of *ALL* of DDD
I guess you don't understand that fact, even though you CLAIM the
input is the proper representation of DDD.
Thus the behavior specified by the input finite string
overrules and supersedes the behavior of the direct
execution.
No, it is DEFINED to be the behavior of the direct execution of the
program it represent.
*That has always been the fatal flaw of all of the proofs*
We could equally define the area of a square circle
as its radius multiplied by the length of one of its sides.
It never has been that DDD simulated by HHH is incorrect
because it does not agree with what people expect to see.
It has always been that it is correct because it matches
the semantics that the code specifies.
DDD simulated by HHH specifies that DDD keeps calling
HHH in recursive simulation until HHH kills the whole
process of DDD.
On 7/28/2025 6:26 PM, Richard Damon wrote:
On 7/28/25 8:34 AM, olcott wrote:
void DDD()
{
HHH(DDD);
return;
}
That you are too stupid to understand that DDD simulated
by HHH does call HHH in recursive emulation even after
I have provided fully operational code of DDD calling
HHH(DDD) in recursive emulation *IS NOT A REBUTTAL*
https://github.com/plolcott/x86utm/blob/master/Halt7.c
What you are too stupid to understand is that while the *PROGRAM* HHH,
which does the specific actions it is defined to, when it simulates
the input that represents the *PROGRAM* DDD, which by definition
includes the code of the HHH that it is built on, that will not reach
the final state.
HHH correctly predicts that DDD correctly simulated
by HHH cannot possibly reach its simulated "return"
statement final halt state. This is because DDD does
call HHH(DDD) in recursive simulation.
*Within those exact words I am exactly correct*
Trying to change those *EXACT WORDS* to show that
I am incorrect *IS CHEATING*
After we have mutual agreement *ON THOSE EXACT WORDS*
then (then and only then) we can begin discussing
whether or not those words are relevant.
On 7/28/25 7:44 PM, olcott wrote:
On 7/28/2025 6:26 PM, Richard Damon wrote:
On 7/28/25 8:34 AM, olcott wrote:
void DDD()
{
HHH(DDD);
return;
}
That you are too stupid to understand that DDD simulated
by HHH does call HHH in recursive emulation even after
I have provided fully operational code of DDD calling
HHH(DDD) in recursive emulation *IS NOT A REBUTTAL*
https://github.com/plolcott/x86utm/blob/master/Halt7.c
What you are too stupid to understand is that while the *PROGRAM*
HHH, which does the specific actions it is defined to, when it
simulates the input that represents the *PROGRAM* DDD, which by
definition includes the code of the HHH that it is built on, that
will not reach the final state.
HHH correctly predicts that DDD correctly simulated
by HHH cannot possibly reach its simulated "return"
statement final halt state. This is because DDD does
call HHH(DDD) in recursive simulation.
Can't do that, as HHH doesn't correctly simulate its input, since correct simulation requires being complete.
On 7/28/2025 9:00 PM, Richard Damon wrote:
On 7/28/25 7:44 PM, olcott wrote:
On 7/28/2025 6:26 PM, Richard Damon wrote:
On 7/28/25 8:34 AM, olcott wrote:
void DDD()
{
HHH(DDD);
return;
}
That you are too stupid to understand that DDD simulated
by HHH does call HHH in recursive emulation even after
I have provided fully operational code of DDD calling
HHH(DDD) in recursive emulation *IS NOT A REBUTTAL*
https://github.com/plolcott/x86utm/blob/master/Halt7.c
What you are too stupid to understand is that while the *PROGRAM*
HHH, which does the specific actions it is defined to, when it
simulates the input that represents the *PROGRAM* DDD, which by
definition includes the code of the HHH that it is built on, that
will not reach the final state.
HHH correctly predicts that DDD correctly simulated
by HHH cannot possibly reach its simulated "return"
statement final halt state. This is because DDD does
call HHH(DDD) in recursive simulation.
Can't do that, as HHH doesn't correct simulate its input, since
correct simulation requires being complete.
Never heard of mathematical induction?
On 7/28/25 10:32 PM, olcott wrote:
On 7/28/2025 9:00 PM, Richard Damon wrote:
On 7/28/25 7:44 PM, olcott wrote:
On 7/28/2025 6:26 PM, Richard Damon wrote:
On 7/28/25 8:34 AM, olcott wrote:
void DDD()
{
HHH(DDD);
return;
}
That you are too stupid to understand that DDD simulated
by HHH does call HHH in recursive emulation even after
I have provided fully operational code of DDD calling
HHH(DDD) in recursive emulation *IS NOT A REBUTTAL*
https://github.com/plolcott/x86utm/blob/master/Halt7.c
What you are too stupid to understand is that while the *PROGRAM*
HHH, which does the specific actions it is defined to, when it
simulates the input that represents the *PROGRAM* DDD, which by
definition includes the code of the HHH that it is built on, that
will not reach the final state.
HHH correctly predicts that DDD correctly simulated
by HHH cannot possibly reach its simulated "return"
statement final halt state. This is because DDD does
call HHH(DDD) in recursive simulation.
Can't do that, as HHH doesn't correct simulate its input, since
correct simulation requires being complete.
Never heard of mathematical induction?
You don't have a valid induction. The problem is every version of HHH
gets a different version of DDD, so you can't build the induction, as
the n and n+1 steps don't relate.
On 7/28/2025 4:13 AM, Fred. Zwarts wrote:
On 27 Jul 2025 at 01:28, olcott wrote:
On 7/26/2025 5:49 PM, olcott wrote:
On 7/26/2025 2:58 PM, olcott wrote:
On 7/26/2025 2:52 PM, Mr Flibble wrote:
On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:
On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
In comp.theory olcott <polcott333@gmail.com> wrote:It only seems to you that I lack understanding because you are so >>>>>>> sure
The error of all of the halting problem proofs is that they >>>>>>>>> require a
Turing machine halt decider to report on the behavior of a
directly
executed Turing machine.
It is common knowledge that no Turing machine decider can take >>>>>>>>> another
directly executing Turing machine as an input, thus the above >>>>>>>>> requirement is not precisely correct.
When we correct the error of this incorrect requirement it
becomes a
Turing machine decider indirectly reports on the behavior of a >>>>>>>>> directly executing Turing machine through the proxy of a finite >>>>>>>>> string
description of this machine.
Now I have proven and corrected the error of all of the halting >>>>>>>>> problem proofs.
No you haven't, the subject matter is too far beyond your
intellectual
capacity.
that I must be wrong that you make sure to totally ignore the subtle >>>>>>> nuances of meaning that proves I am correct.
No Turing machine based (at least partial) halt decider can possibly >>>>>>> *directly* report on the behavior of any directly executing Turing >>>>>>> machine. The best that any of them can possibly do is indirectly >>>>>>> report
on this behavior through the proxy of a finite string machine
description.
Partial decidability is not a hard problem.
/Flibble
My point is that all of the halting problem proofs
are wrong when they require a Turing machine decider
H to report on the behavior of machine M on input i
because machine M is not in the domain of any Turing
machine decider. Only finite strings such as ⟨M⟩ the
Turing machine description of machine M are its
domain.
Definition of Turing Machine Ĥ
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
if Ĥ applied to ⟨Ĥ⟩ halts, and // incorrect requirement
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
if Ĥ applied to ⟨Ĥ⟩ does not halt. // incorrect requirement
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
(d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
(e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...
The fact that the correctly simulated input
specifies recursive simulation prevents the
simulated ⟨Ĥ⟩ from ever reaching its simulated
final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.
This is not contradicted by the fact that
Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
the domain of every Turing machine computed function.
In the atypical case where the behavior of the simulation
of an input to a potential halt decider disagrees with the
behavior of the direct execution of the underlying machine
(because this input calls this same simulating decider) it
is the behavior of the input that rules because deciders
compute the mapping *FROM* their inputs.
But the input specifies halting behaviour,
It never was the actual input that specifies non-halting
behavior.
On 7/28/2025 3:57 AM, Fred. Zwarts wrote:
On 27 Jul 2025 at 01:43, olcott wrote:
On 7/26/2025 6:35 PM, Richard Damon wrote:
The behavior of an input to a halt decider is DEFINED in all cases
to be the behavior of the machine the input represents,
Yet I have conclusively proven otherwise and
you are too stupid to understand the proof.
That was not a proof, but an assumption with a huge mistake.
https://www.researchgate.net/publication/394042683_ChatGPT_analyzes_HHHDDD
ChatGPT agrees that HHH(DDD)==0 is correct even though
DDD() halts.
On 7/28/2025 9:36 PM, Richard Damon wrote:
On 7/28/25 10:32 PM, olcott wrote:
On 7/28/2025 9:00 PM, Richard Damon wrote:
On 7/28/25 7:44 PM, olcott wrote:
On 7/28/2025 6:26 PM, Richard Damon wrote:
On 7/28/25 8:34 AM, olcott wrote:
void DDD()
{
HHH(DDD);
return;
}
That you are too stupid to understand that DDD simulated
by HHH does call HHH in recursive emulation even after
I have provided fully operational code of DDD calling
HHH(DDD) in recursive emulation *IS NOT A REBUTTAL*
https://github.com/plolcott/x86utm/blob/master/Halt7.c
What you are too stupid to understand is that while the *PROGRAM* >>>>>> HHH, which does the specific actions it is defined to, when it
simulates the input that represents the *PROGRAM* DDD, which by
definition includes the code of the HHH that it is built on, that >>>>>> will not reach the final state.
HHH correctly predicts that DDD correctly simulated
by HHH cannot possibly reach its simulated "return"
statement final halt state. This is because DDD does
call HHH(DDD) in recursive simulation.
Can't do that, as HHH doesn't correct simulate its input, since
correct simulation requires being complete.
Never heard of mathematical induction?
You don't have a valid induction. The problem is every version of HHH
gets a different version of DDD, so you can't build the induction, as
the n and n+1 steps don't relate.
The only difference between the elements of the infinite
set of HHH/DDD pairs, where HHH emulates N instructions
of DDD, cannot possibly have any effect on whether this
DDD instance reaches its "return" instruction final
halt state *AND YOU HAVE ALWAYS KNOWN THAT*
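A hedged sketch of the family of analyzers being argued about here (the step-budget modelling is mine; the real HHH counts emulated x86 instructions, not nested calls). For every budget N the cut-off simulation of DDD is still inside a nested HHH_N call when it is aborted, which is the "for all N" claim above; Damon's objection is that in the real construction each N yields a different DDD, since each DDD embeds its own HHH, a difference this single-DDD toy deliberately glosses over.

#include <setjmp.h>
#include <stdio.h>

typedef void (*ptr)(void);

static jmp_buf cut_off;
static int budget;              /* N: how many nested levels HHH_N will follow */

void DDD(void);

int HHH_N(ptr P)
{
    if (budget-- <= 0)
        longjmp(cut_off, 1);    /* out of steps: the partial simulation stops here */
    P();                        /* crude stand-in for emulating one more level     */
    return 1;
}

void DDD(void)
{
    HHH_N(DDD);
}

/* Does the DDD "simulated" by the N-step member reach its return? */
int simulated_DDD_reaches_return(int N)
{
    budget = N;
    if (setjmp(cut_off))
        return 0;               /* cut off while still inside a nested HHH_N call  */
    HHH_N(DDD);
    return 1;
}

int main(void)
{
    for (int N = 1; N <= 5; ++N)
        printf("N = %d: reaches return = %d\n",
               N, simulated_DDD_reaches_return(N));   /* prints 0 for every N */
    return 0;
}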
On 7/28/25 7:58 PM, olcott wrote:
On 7/28/2025 6:49 PM, Richard Damon wrote:
On 7/28/25 7:20 PM, olcott wrote:
*That has always been the fatal flaw of all of the proofs*
On 7/28/2025 5:57 PM, Richard Damon wrote:
On 7/28/25 9:54 AM, olcott wrote:
What DDD does is keep calling HHH(DDD) in recursive simulation until
On 7/28/2025 8:21 AM, joes wrote:
How is it a "correct prediction" if it sees something different than what that DDD does.
On Mon, 28 Jul 2025 07:11:11 -0500, olcott wrote:
All halt deciders are required to predict the behavior of their
On 7/28/2025 2:30 AM, joes wrote:
Am Sun, 27 Jul 2025 21:58:05 -0500 schrieb olcott:
On 7/27/2025 9:48 PM, Richard Damon wrote:
On 7/27/25 8:20 PM, olcott wrote:
When DDD is emulated by HHH1 it need not emulate itself at all.
But "itself" doesn't matter to x86 instructions,
By itself I mean the exact same machine code bytes at the exact same machine address.
Yeah, so when you change HHH to abort later, you also change DDD.
HHH is never changed.
It is changed in the hypothetical unaborted simulation. HHH is
reporting on UTM(HHH', DDD) where HHH' calls UTM(DDD), and not on the halting DDD,
and definitely not on HHH(DDD), itself.
input. HHH does correctly predict that DDD correctly simulated by
HHH cannot possibly reach its own simulated "return" instruction
final halt state.
HHH kills this whole process.
But the behavior of the program continues past that (something you
don't seem to understand) and that behavior will also have its HHH
terminate the DDD it is simulating and return 0 to DDD and then Halt.
Your problem is you don't understand that the simulating HHH doesn't
define the behavior of DDD, it is the execution of DDD that defines
what a correct simulation of it is.
Remember, to have simulated that DDD, it must have included the code
of the HHH that it was based on, which is the HHH that made the
prediction, and thus returns 0, so DDD will halt.
We are not asking: Does DDD() halt.
That is (as it turns out) an incorrect question.
No, that is EXACTLY the question.
I guess you are just admitting that your whole world is based on LYING
about what things are supposed to be.
Turing machines cannot directly report on the behavior of other
Turing machines; they can at best indirectly report on the behavior of
Turing machines through the proxy of finite string machine
descriptions such as ⟨M⟩.
Right, and HHH was given the equivalent of (M) by being given the
code of *ALL* of DDD
I guess you don't understand that fact, even though you CLAIM the
input is the proper representation of DDD.
Thus the behavior specified by the input finite string overrules and
supersedes the behavior of the direct execution.
No, it is DEFINED to be the behavior of the direct execution of the
program it represents.
*That has always been the fatal flaw of all of the proofs*
No, your failure to follow the rules is what makes you just a liar.
On 7/28/25 7:58 PM, olcott wrote:
On 7/28/2025 6:49 PM, Richard Damon wrote:
On 7/28/25 7:20 PM, olcott wrote:
On 7/28/2025 5:57 PM, Richard Damon wrote:
On 7/28/25 9:54 AM, olcott wrote:
On 7/28/2025 8:21 AM, joes wrote:
Am Mon, 28 Jul 2025 07:11:11 -0500 schrieb olcott:
On 7/28/2025 2:30 AM, joes wrote:
Am Sun, 27 Jul 2025 21:58:05 -0500 schrieb olcott:
On 7/27/2025 9:48 PM, Richard Damon wrote:
On 7/27/25 8:20 PM, olcott wrote:
When DDD is emulated by HHH1 it need not emulate itself at all.
But "itself" doesn't matter to x86 instructions,
By itself I mean the exact same machine code bytes at the exact
same machine address.
Yeah, so when you change HHH to abort later, you also change DDD.
HHH is never changed.
It is changed in the hypothetical unaborted simulation. HHH is
reporting on UTM(HHH', DDD) where HHH' calls UTM(DDD), and not on
the halting DDD, and definitely not on HHH(DDD), itself.
All halt deciders are required to predict the behavior
of their input. HHH does correctly predict that DDD correctly
simulated by HHH cannot possibly reach its own simulated
"return" instruction final halt state.
How is it a "correct prediction" if it sees something different
than what that DDD does.
What DDD does is keep calling HHH(DDD) in recursive
simulation until HHH kills this whole process.
But the behavior of the program continues past that (something you
don't seem to understand) and that behavior will also have its HHH
terminate the DDD it is simulating and return 0 to DDD and then Halt.
Your problem is you don't understand that the simulating HHH doesn't
define the behavior of DDD, it is the execution of DDD that defines
what a correct simulation of it is.
Remember, to have simulated that DDD, it must have included the code
of the HHH that it was based on, which is the HHH that made the
prediction, and thus returns 0, so DDD will halt.
We are not asking: Does DDD() halt.
That is (as it turns out) an incorrect question.
No, that is EXACTLY the question.
I guess you are just admitting that your whole world is based on LYING
about what things are supposed to be.
Turing machines cannot directly report on the behavior
of other Turing machines; they can at best indirectly
report on the behavior of Turing machines through the
proxy of finite string machine descriptions such as ⟨M⟩.
Right, and HHH was given the equivalent of (M) by being given the
code of *ALL* of DDD
I guess you don't understand that fact, even though you CLAIM the
input is the proper representation of DDD.
Thus the behavior specified by the input finite string
overrules and supersedes the behavior of the direct
execution.
No, it is DEFINED to be the behavior of the direct execution of the
program it represents.
*That has always been the fatal flaw of all of the proofs*
No, your failure to follow the rules is what makes you just a liar.
On Mon, 28 Jul 2025 21:56:16 -0400, Richard Damon wrote:
On 7/28/25 7:58 PM, olcott wrote:
On 7/28/2025 6:49 PM, Richard Damon wrote:
On 7/28/25 7:20 PM, olcott wrote:
On 7/28/2025 5:57 PM, Richard Damon wrote:
On 7/28/25 9:54 AM, olcott wrote:
On 7/28/2025 8:21 AM, joes wrote:
Am Mon, 28 Jul 2025 07:11:11 -0500 schrieb olcott:
On 7/28/2025 2:30 AM, joes wrote:
Am Sun, 27 Jul 2025 21:58:05 -0500 schrieb olcott:
On 7/27/2025 9:48 PM, Richard Damon wrote:
On 7/27/25 8:20 PM, olcott wrote:
When DDD is emulated by HHH1 it need not emulate itself at all.
But "itself" doesn't matter to x86 instructions,
By itself I mean the exact same machine code bytes at the exact
same machine address.
Yeah, so when you change HHH to abort later, you also change DDD.
HHH is never changed.
It is changed in the hypothetical unaborted simulation. HHH is
reporting on UTM(HHH', DDD) where HHH' calls UTM(DDD), and not on
the halting DDD, and definitely not on HHH(DDD), itself.
All halt deciders are required to predict the behavior of their
input. HHH does correctly predict that DDD correctly simulated by
HHH cannot possibly reach its own simulated "return" instruction
final halt state.
How is it a "correct prediction" if it sees something different than
what that DDD does.
What DDD does is keep calling HHH(DDD) in recursive simulation until
HHH kills this whole process.
But the behavior of the program continues past that (something you
don't seem to understand) and that behavior will also have its HHH
terminate the DDD it is simulating and return 0 to DDD and then Halt.
Your problem is you don't understand that the simulating HHH doesn't
define the behavior of DDD, it is the execution of DDD that defines
what a correct simulation of it is.
Remember, to have simulated that DDD, it must have included the code
of the HHH that it was based on, which is the HHH that made the
prediction, and thus returns 0, so DDD will halt.
We are not asking: Does DDD() halt.
That is (as it turns out) an incorrect question.
No, that is EXACTLY the question.
I guess you are just admitting that your whole world is based on LYING
about what things are supposed to be.
Turing machines cannot directly report on the behavior of other
Turing machines; they can at best indirectly report on the behavior of
Turing machines through the proxy of finite string machine
descriptions such as ⟨M⟩.
Right, and HHH was given the equivalent of (M) by being given the
code of *ALL* of DDD
I guess you don't understand that fact, even though you CLAIM the
input is the proper representation of DDD.
Thus the behavior specified by the input finite string overrules and
supersedes the behavior of the direct execution.
No, it is DEFINED to be the behavior of the direct execution of the
program it represents.
*That has always been the fatal flaw of all of the proofs*
No, your failure to follow the rules is what makes you just a liar.
Yet another ad hominem attack!
/Flibble
On 7/28/2025 8:56 PM, Richard Damon wrote:
On 7/28/25 7:58 PM, olcott wrote:
On 7/28/2025 6:49 PM, Richard Damon wrote:
On 7/28/25 7:20 PM, olcott wrote:
On 7/28/2025 5:57 PM, Richard Damon wrote:
On 7/28/25 9:54 AM, olcott wrote:
On 7/28/2025 8:21 AM, joes wrote:
Am Mon, 28 Jul 2025 07:11:11 -0500 schrieb olcott:
On 7/28/2025 2:30 AM, joes wrote:
Am Sun, 27 Jul 2025 21:58:05 -0500 schrieb olcott:
On 7/27/2025 9:48 PM, Richard Damon wrote:
On 7/27/25 8:20 PM, olcott wrote:
When DDD is emulated by HHH1 it need not emulate itself at all.
But "itself" doesn't matter to x86 instructions,
By itself I mean the exact same machine code bytes at the exact
same machine address.
Yeah, so when you change HHH to abort later, you also change DDD.
HHH is never changed.
It is changed in the hypothetical unaborted simulation. HHH is
reporting on UTM(HHH', DDD) where HHH' calls UTM(DDD), and not on
the halting DDD, and definitely not on HHH(DDD), itself.
All halt deciders are required to predict the behavior
of their input. HHH does correctly predict that DDD correctly
simulated by HHH cannot possibly reach its own simulated
"return" instruction final halt state.
How is it a "correct prediction" if it sees something different
than what that DDD does.
What DDD does is keep calling HHH(DDD) in recursive
simulation until HHH kills this whole process.
But the behavior of the program continues past that (something you
don't seem to understand) and that behavior will also have its HHH
terminate the DDD it is simulating and return 0 to DDD and then Halt.
Your problem is you don't understand that the simulating HHH doesn't
define the behavior of DDD, it is the execution of DDD that defines
what a correct simulation of it is.
Remember, to have simulated that DDD, it must have included the
code of the HHH that it was based on, which is the HHH that made
the prediction, and thus returns 0, so DDD will halt.
We are not asking: Does DDD() halt.
That is (as it turns out) an incorrect question.
No, that is EXACTLY the question.
I guess you are just admitting that your whole world is based on
LYING about what things are supposed to be.
Turing machines cannot directly report on the behavior
of other Turing machines; they can at best indirectly
report on the behavior of Turing machines through the
proxy of finite string machine descriptions such as ⟨M⟩.
Right, and HHH was given the equivalent of (M) by being given the
code of *ALL* of DDD
I guess you don't understand that fact, even though you CLAIM the
input is the proper representation of DDD.
Thus the behavior specified by the input finite string
overrules and supersedes the behavior of the direct
execution.
No, it is DEFINED to be the behavior of the direct execution of the
program it represents.
*That has always been the fatal flaw of all of the proofs*
No, your failure to follow the rules is what makes you just a liar.
_DDD()
[00002192] 55 push ebp
[00002193] 8bec mov ebp,esp
[00002195] 6892210000 push 00002192 // push DDD
[0000219a] e833f4ffff call 000015d2 // call HHH
[0000219f] 83c404 add esp,+04
[000021a2] 5d pop ebp
[000021a3] c3 ret
Size in bytes:(0018) [000021a3]
When the above code is in the same memory space as HHH
such that DDD calls HHH(DDD) and then HHH does emulate
itself emulating DDD then this does specify recursive
emulation.
Anyone or anything that disagrees would be disagreeing
with the definition of the x86 language.
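As a toy model of what "HHH does emulate itself emulating DDD"
describes (the two-opcode machine and every name below are assumptions
made up for illustration, not the real x86utm emulator): the decider
steps through a program, starts a nested emulation of the same program
when it reaches the opcode that invokes the decider, and reports 0 once
that nesting repeats, which is the abort behavior the paragraph above
describes.

#include <stdio.h>

enum op { CALL_DECIDER, RET };

/* DDD as a toy program: invoke the decider on itself, then return. */
static const enum op DDD_prog[] = { CALL_DECIDER, RET };

#define MAX_NEST 2            /* abort once the same pattern has recurred */

/* Returns 1 if the emulated program reaches RET, 0 if the whole
   emulation is aborted because it would only keep nesting.  This models
   the behavior claimed above; it does not settle whether that is the
   right criterion. */
int decider(const enum op *prog, int nesting)
{
    for (int pc = 0; ; ++pc) {
        switch (prog[pc]) {
        case CALL_DECIDER:
            if (nesting + 1 >= MAX_NEST)
                return 0;                 /* recursive emulation detected */
            if (decider(prog, nesting + 1) == 0)
                return 0;                 /* abort propagates outward     */
            break;
        case RET:
            return 1;                     /* reached the final halt state */
        }
    }
}

int main(void)
{
    printf("decider(DDD) = %d\n", decider(DDD_prog, 0));   /* prints 0 */
    return 0;
}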
On 7/29/25 12:53 PM, olcott wrote:
On 7/28/2025 8:56 PM, Richard Damon wrote:
On 7/28/25 7:58 PM, olcott wrote:
On 7/28/2025 6:49 PM, Richard Damon wrote:
On 7/28/25 7:20 PM, olcott wrote:
On 7/28/2025 5:57 PM, Richard Damon wrote:
On 7/28/25 9:54 AM, olcott wrote:
On 7/28/2025 8:21 AM, joes wrote:
Am Mon, 28 Jul 2025 07:11:11 -0500 schrieb olcott:
On 7/28/2025 2:30 AM, joes wrote:
Am Sun, 27 Jul 2025 21:58:05 -0500 schrieb olcott:
On 7/27/2025 9:48 PM, Richard Damon wrote:
On 7/27/25 8:20 PM, olcott wrote:
When DDD is emulated by HHH1 it need not emulate itself at all.
But "itself" doesn't matter to x86 instructions,
By itself I mean the exact same machine code bytes at the exact
same machine address.
Yeah, so when you change HHH to abort later, you also change DDD.
HHH is never changed.
It is changed in the hypothetical unaborted simulation. HHH is
reporting on UTM(HHH', DDD) where HHH' calls UTM(DDD), and not on
the halting DDD, and definitely not on HHH(DDD), itself.
All halt deciders are required to predict the behavior
of their input. HHH does correctly predict that DDD correctly
simulated by HHH cannot possibly reach its own simulated
"return" instruction final halt state.
How is it a "correct prediction" if it sees something different >>>>>>> than what that DDD does.
What DDD does is keep calling HHH(DDD) in recursive
simulation until HHH kills this whole process.
But the behavior of the program continues past that (something you
don't seem to understand) and that behavior will also have its HHH
terminate the DDD it is simulating and return 0 to DDD and then Halt.
Your problem is you don't understand that the simulating HHH
doesn't define the behavior of DDD, it is the execution of DDD that
defines what a correct simulation of it is.
Remember, to have simulated that DDD, it must have included the
code of the HHH that it was based on, which is the HHH that made
the prediction, and thus returns 0, so DDD will halt.
We are not asking: Does DDD() halt.
That is (as it turns out) an incorrect question.
No, that is EXACTLY the question.
I guess you are just admitting that your whole world is based on
LYING about what things are supposed to be.
Turing machines cannot directly report on the behavior
of other Turing machines; they can at best indirectly
report on the behavior of Turing machines through the
proxy of finite string machine descriptions such as ⟨M⟩.
Right, and HHH was given the equivalent of (M) by being given the
code of *ALL* of DDD
I guess you don't understand that fact, even though you CLAIM the
input is the proper representation of DDD.
Thus the behavior specified by the input finite string
overrules and supersedes the behavior of the direct
execution.
No, it is DEFINED to be the behavior of the direct execution of the
program it represents.
*That has always been the fatal flaw of all of the proofs*
No, your failure to follow the rules is what makes you just a liar.
_DDD()
[00002192] 55 push ebp
[00002193] 8bec mov ebp,esp
[00002195] 6892210000 push 00002192 // push DDD
[0000219a] e833f4ffff call 000015d2 // call HHH
[0000219f] 83c404 add esp,+04
[000021a2] 5d pop ebp
[000021a3] c3 ret
Size in bytes:(0018) [000021a3]
When the above code is in the same memory space as HHH
such that DDD calls HHH(DDD) and then HHH does emulate
itself emulating DDD then this does specify recursive
emulation.
Anyone or anything that disagrees would be disagreeing
with the definition of the x86 language.
So, if HHH accesses that memory, it becomes part of the input.
On 7/29/2025 5:37 PM, Richard Damon wrote:
On 7/29/25 12:53 PM, olcott wrote:
On 7/28/2025 8:56 PM, Richard Damon wrote:
On 7/28/25 7:58 PM, olcott wrote:
On 7/28/2025 6:49 PM, Richard Damon wrote:
On 7/28/25 7:20 PM, olcott wrote:
On 7/28/2025 5:57 PM, Richard Damon wrote:
On 7/28/25 9:54 AM, olcott wrote:
On 7/28/2025 8:21 AM, joes wrote:
Am Mon, 28 Jul 2025 07:11:11 -0500 schrieb olcott:
On 7/28/2025 2:30 AM, joes wrote:
Am Sun, 27 Jul 2025 21:58:05 -0500 schrieb olcott:
On 7/27/2025 9:48 PM, Richard Damon wrote:
On 7/27/25 8:20 PM, olcott wrote:
When DDD is emulated by HHH1 it need not emulate itself at all.
But "itself" doesn't matter to x86 instructions,
By itself I mean the exact same machine code bytes at the exact
same machine address.
Yeah, so when you change HHH to abort later, you also change DDD.
HHH is never changed.
It is changed in the hypothetical unaborted simulation. HHH is
reporting on UTM(HHH', DDD) where HHH' calls UTM(DDD), and not on
the halting DDD, and definitely not on HHH(DDD), itself.
All halt deciders are required to predict the behavior
of their input. HHH does correctly predict that DDD correctly
simulated by HHH cannot possibly reach its own simulated
"return" instruction final halt state.
How is it a "correct prediction" if it sees something different >>>>>>>> than what that DDD does.
What DDD does is keep calling HHH(DDD) in recursive
simulation until HHH kills this whole process.
But the behavior of the program continues past that (something you
don't seem to understand) and that behavior will also have its HHH
terminate the DDD it is simulating and return 0 to DDD and then Halt.
Your problem is you don't understand that the simulating HHH
doesn't define the behavior of DDD, it is the execution of DDD
that defines what a correct simulation of it is.
Remember, to have simulated that DDD, it must have included the
code of the HHH that it was based on, which is the HHH that made
the prediction, and thus returns 0, so DDD will halt.
We are not asking: Does DDD() halt.
That is (as it turns out) an incorrect question.
No, that is EXACTLY the question.
I guess you are just admitting that your whole world is based on
LYING about what things are supposed to be.
Turing machines cannot directly report on the behavior
of other Turing machines; they can at best indirectly
report on the behavior of Turing machines through the
proxy of finite string machine descriptions such as ⟨M⟩.
Right, and HHH was given the equivalent of (M) by being given the
code of *ALL* of DDD
I guess you don't understand that fact, even though you CLAIM the
input is the proper representation of DDD.
Thus the behavior specified by the input finite string
overrules and supersedes the behavior of the direct
execution.
No, it is DEFINED to be the behavior of the direct execution of
the program it represents.
*That has always been the fatal flaw of all of the proofs*
No, your failure to follow the rules is what makes you just a liar.
_DDD()
[00002192] 55 push ebp
[00002193] 8bec mov ebp,esp
[00002195] 6892210000 push 00002192 // push DDD
[0000219a] e833f4ffff call 000015d2 // call HHH
[0000219f] 83c404 add esp,+04
[000021a2] 5d pop ebp
[000021a3] c3 ret
Size in bytes:(0018) [000021a3]
When the above code is in the same memory space as HHH
such that DDD calls HHH(DDD) and then HHH does emulate
itself emulating DDD then this does specify recursive
emulation.
Anyone or anything that disagrees would be disagreeing
with the definition of the x86 language.
So, if HHH accesses that memory, it becomes part of the input.
It becomes part of the input in the sense that the
correct simulation of the input to HHH(DDD) is not
the same as the correct simulation of the input to
HHH1(DDD) because DDD only calls HHH(DDD) and does
not call HHH1(DDD).
DDD correctly simulated by HHH cannot possibly
halt thus HHH(DDD)==0 is correct.
DDD correctly simulated by HHH1 does halt thus
HHH1(DDD)==1 is correct.
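A sketch of the claimed HHH/HHH1 asymmetry, assuming setjmp/longjmp
stands in for "aborting the whole emulation" (none of this is Halt7.c;
the names are reused only so the shape matches the discussion). DDD
calls HHH and never calls HHH1, so only HHH ever sees itself
re-entered:

#include <setjmp.h>
#include <stdio.h>

void DDD(void);

static jmp_buf hhh_abort;
static int hhh_active = 0;

/* HHH: aborts the whole nested run when it observes itself re-entered. */
int HHH(void (*p)(void))
{
    if (hhh_active)
        longjmp(hhh_abort, 1);       /* re-entry: kill the whole process */
    hhh_active = 1;
    if (setjmp(hhh_abort) != 0) {
        hhh_active = 0;
        return 0;                    /* p never reached its return       */
    }
    p();                             /* stand-in for emulating p         */
    hhh_active = 0;
    return 1;                        /* p reached its return             */
}

/* HHH1: same interface, but DDD never calls it, so it simply watches
   DDD run to completion (the inner HHH aborts, DDD then returns).      */
int HHH1(void (*p)(void))
{
    p();
    return 1;
}

void DDD(void)
{
    HHH(DDD);                        /* calls HHH, never HHH1            */
    return;
}

int main(void)
{
    printf("HHH(DDD)  = %d\n", HHH(DDD));    /* 0 under this model */
    printf("HHH1(DDD) = %d\n", HHH1(DDD));   /* 1 under this model */
    return 0;
}

Under this model HHH(DDD) reports 0 and HHH1(DDD) reports 1, matching
the claim above; whether that 0 is a correct answer about DDD is
exactly what the rest of the thread disputes.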