• I have just proven the error of all of the halting problem proofs

    From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Sat Jul 26 12:59:35 2025
    From Newsgroup: comp.ai.philosophy

    On 7/26/2025 12:31 PM, Alan Mackenzie wrote:
    Hello, Ben.

    Ben Bacarisse <ben@bsb.me.uk> wrote:
    Alan Mackenzie <acm@muc.de> writes:

    [ .... ]

    In comp.theory olcott <polcott333@gmail.com> wrote:
    On 7/21/2025 10:52 AM, Alan Mackenzie wrote:
    ...
    More seriously, you told Ben Bacarisse on this newsgroup that you had
    fully worked out Turing machines which broke a proof of the Halting
    Theorem. It transpired you were lying.

    Just for the record, here is what PO said late 2018 early 2019:

    On 12/14/2018 5:27 PM, peteolcott wrote that he had

    "encoded all of the exact TMD [Turing Machine Description]
    instructions of the Linz Turing machine H that correctly decides
    halting for its fully encoded input pair: (Ĥ, Ĥ)."

    Date: Sat, 15 Dec 2018 11:03:21 -0600

    "Everyone has claimed that H on input pair (Ĥ, Ĥ) meeting the Linz
    specs does not exist. I now have a fully encoded pair of Turing
    Machines H / Ĥ proving them wrong."

    Date: Sat, 15 Dec 2018 01:28:22 -0600

    "I now have an actual H that decides actual halting for an actual (Ĥ,
    Ĥ) input pair. I have to write the UTM to execute this code, that
    should not take very long. The key thing is the H and Ĥ are 100%
    fully encoded as actual Turing machines."

    Date: Sun, 16 Dec 2018 09:02:50 -0600

    "I am waiting to encode the UTM in C++ so that I can actually execute
    H on the input pair: (Ĥ, Ĥ). This should take a week or two [...] it
    is exactly and precisely the Peter Linz H and Ĥ, with H actually
    deciding input pair: (Ĥ, Ĥ)"

    Date: Fri, 11 Jan 2019 16:24:36 -0600

    "I provide the exact ⊢* wildcard states after the Linz H.q0 and after
    Ĥ.qx (Linz incorrectly uses q0 twice) showing exactly how the actual
    Linz H would correctly decide the actual Linz (Ĥ, Ĥ)."

    Thanks for clarifying that.

    I think I can understand a bit what it must feel like to be on the
    receiving end of all this. Firstly you know through training that what
    you're being told is definitely false, but on the other hand you don't
    like to believe that somebody is lying; somehow you give them the
    (temporary) benefit of the doubt. Then comes the depressing restoration
    of truth and reality.

    When the topic came up again for
    discussion, you failed to deny writing the original lie.


    That is the closest thing to a lie that I ever said.
    When I said this I was actually meaning that I had
    fully operational C code that is equivalent to a
    Turing Machine.

    I think it was a full blown lie intended to deceive. Did you ever
    apologise to Ben for leading him up the garden path like that?

    No, never. In fact he kept insulting me until it became so egregious
    that I decided to have nothing more to do with him.

    Somehow, that doesn't surprise me. I only post a little on this group
    now (I never really posted much more) for similar reasons. I care about
    the truth, including mathematical truth; although I've never specialised
    in computation theory or mathematical logic, I care when these are
    falsified by ignorant posters.

    What really got my goat this time around was PO stridently and
    hypocritically accusing others of being liars, given his own record.

    What he did do was take months to slowly walk back the claim he made in
    December 2018. H and Ĥ became "virtual machines" and then started to be
    "sufficiently equivalent" to Linz's H and Ĥ rather than the "exactly and
    precisely the Peter Linz H and Ĥ". By Sep 2020 he didn't even have it
    anymore:

    "I will soon have a partial halt decider sufficiently equivalent to
    the Linz H correctly deciding halting on the Linz Ĥ"

    It took nearly two years to walk back the clear and explicit claim to
    this vague and ill-defined claim of not having something!

    Yes. I've watched the latter part of this process.

    You have not and never have had "fully operational C code" that breaks a
    proof of the Halting Theorem. To say you had this, when you clearly
    didn't, was a lie.

    He also tried to pretend that the C code (which, as you say, he didn't
    have) is what he always meant when he wrote the words I quoted above. I
    defy anyone to read those words with PO's later claim that he meant C
    code all along and not conclude that he was just lying again to try to
    save some little face.

    What amazes me is he somehow thinks that theorems don't apply to him.
    Of course, he doesn't understand what a theorem is, somehow construing
    it as somebody's opinion. If it's just opinion, then his contrasting
    opinion must be "just as good". Or something like that.

    C code does not have "TMD instructions" that can be encoded. TMs (as in
    Linz) do. When executed, C code has no "exact ⊢* wildcard states after
    the Linz H.q0" for PO to show. A TM would. C code does not need a UTM
    to execute it (a TM does) and if he really meant that he had C code all
    along, does anyone think he could write a UTM for C in "a week or two"?

    It is so patently obvious that he just had a manic episode in Dec 2018
    that caused him to post all those exuberant claims, and so patently
    obvious that he simply can't admit being wrong about anything that I
    ended up feeling rather sorry for him -- until the insults started up
    again.

    That's another reason I don't post much here. I really don't feel like
    being insulted by somebody of PO's intellectual stature.

    Have a good Sunday!

    --
    Ben.


    The error of all of the halting problem proofs is
    that they require a Turing machine halt decider to
    report on the behavior of a directly executed
    Turing machine.

    It is common knowledge that no Turing machine decider
    can take another directly executing Turing machine as
    an input, thus the above requirement is not precisely
    correct.

    When we correct the error of this incorrect requirement,
    it becomes: a Turing machine decider indirectly reports
    on the behavior of a directly executing Turing machine
    through the proxy of a finite string description of this
    machine.

    Now I have proven and corrected the error of all of the
    halting problem proofs.
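
    For concreteness, here is a minimal C sketch of the interface being argued
    about, using hypothetical names that are not from Linz or from this thread:
    the decider only ever receives the finite string description, while its
    required verdict is still defined by the described machine's behavior.

    #include <stdio.h>

    /* Hedged sketch, hypothetical names: a halt decider receives only a
       finite string description of a machine, never the running machine,
       yet its required answer is defined by what that machine does.       */

    /* H(desc, input) is required to return 1 iff the machine that `desc`
       describes halts on `input`; the stub below is a placeholder only.   */
    int H(const char *desc, const char *input)
    {
        (void)desc; (void)input;
        return 1;   /* placeholder verdict, not a real decision procedure  */
    }

    int main(void)
    {
        printf("H says: %d\n", H("<M>", "i"));   /* only strings are passed */
        return 0;
    }
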
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Alan Mackenzie@acm@muc.de to comp.theory,sci.logic,comp.ai.philosophy on Sat Jul 26 18:30:00 2025
    From Newsgroup: comp.ai.philosophy

    [ Followup-To: set ]
    In comp.theory olcott <polcott333@gmail.com> wrote:
    On 7/26/2025 12:31 PM, Alan Mackenzie wrote:
    Hello, Ben.
    Ben Bacarisse <ben@bsb.me.uk> wrote:
    Alan Mackenzie <acm@muc.de> writes:
    [ .... ]
    In comp.theory olcott <polcott333@gmail.com> wrote:
    On 7/21/2025 10:52 AM, Alan Mackenzie wrote:
    ...
    More seriously, you told Ben Bacarisse on this newsgroup that you had
    fully worked out Turing machines which broke a proof of the Halting
    Theorem. It transpired you were lying.
    Just for the record, here is what PO said late 2018 early 2019:
    On 12/14/2018 5:27 PM, peteolcott wrote that he had
    "encoded all of the exact TMD [Turing Machine Description]
    instructions of the Linz Turing machine H that correctly decides
    halting for its fully encoded input pair: (Ĥ, Ĥ)."
    Date: Sat, 15 Dec 2018 11:03:21 -0600
    "Everyone has claimed that H on input pair (Ĥ, Ĥ) meeting the Linz
    specs does not exist. I now have a fully encoded pair of Turing
    Machines H / Ĥ proving them wrong."
    Date: Sat, 15 Dec 2018 01:28:22 -0600
    "I now have an actual H that decides actual halting for an actual (Ĥ,
    Ĥ) input pair. I have to write the UTM to execute this code, that
    should not take very long. The key thing is the H and Ĥ are 100%
    fully encoded as actual Turing machines."
    Date: Sun, 16 Dec 2018 09:02:50 -0600
    "I am waiting to encode the UTM in C++ so that I can actually execute
    H on the input pair: (Ĥ, Ĥ). This should take a week or two [...] it
    is exactly and precisely the Peter Linz H and Ĥ, with H actually
    deciding input pair: (Ĥ, Ĥ)"
    Date: Fri, 11 Jan 2019 16:24:36 -0600
    "I provide the exact ⊢* wildcard states after the Linz H.q0 and after
    Ĥ.qx (Linz incorrectly uses q0 twice) showing exactly how the actual
    Linz H would correctly decide the actual Linz (Ĥ, Ĥ)."
    Thanks for clarifying that.
    I think I can understand a bit what it must feel like to be on the
    receiving end of all this. Firstly you know through training that what
    you're being told is definitely false, but on the other hand you don't
    like to believe that somebody is lying; somehow you give them the
    (temporary) benefit of the doubt. Then comes the depressing restoration
    of truth and reality.
    When the topic came up again for
    discussion, you failed to deny writing the original lie.
    That is the closest thing to a lie that I ever said.
    When I said this I was actually meaning that I had
    fully operational C code that is equivalent to a
    Turing Machine.
    I think it was a full blown lie intended to deceive. Did you ever
    apologise to Ben for leading him up the garden path like that?
    No, never. In fact he kept insulting me until it became so egregious
    that I decided to have nothing more to do with him.
    Somehow, that doesn't surprise me. I only post a little on this group
    now (I never really posted much more) for similar reasons. I care about
    the truth, including mathematical truth; although I've never specialised
    in computation theory or mathematical logic, I care when these are
    falsified by ignorant posters.
    What really got my goat this time around was PO stridently and
    hypocritically accusing others of being liars, given his own record.
    What he did do was take months to slowly walk back the claim he made in
    December 2018. H and Ĥ became "virtual machines" and then started to be
    "sufficiently equivalent" to Linz's H and Ĥ rather than the "exactly and
    precisely the Peter Linz H and Ĥ". By Sep 2020 he didn't even have it
    anymore:
    "I will soon have a partial halt decider sufficiently equivalent to
    the Linz H correctly deciding halting on the Linz Ĥ"
    It took nearly two years to walk back the clear and explicit claim to
    this vague and ill-defined claim of not having something!
    Yes. I've watched the latter part of this process.
    You have not and never have had "fully operational C code" that breaks a
    proof of the Halting Theorem. To say you had this, when you clearly
    didn't, was a lie.
    He also tried to pretend that the C code (which, as you say, he didn't
    have) is what he always meant when he wrote the words I quoted above. I
    defy anyone to read those words with PO's later claim that he meant C
    code all along and not conclude that he was just lying again to try to
    save some little face.
    What amazes me is he somehow thinks that theorems don't apply to him.
    Of course, he doesn't understand what a theorem is, somehow construing
    it as somebody's opinion. If it's just opinion, then his contrasting
    opinion must be "just as good". Or something like that.
    C code does not have "TMD instructions" that can be encoded. TMs (as in
    Linz) do. When executed, C code has no "exact ⊢* wildcard states after
    the Linz H.q0" for PO to show. A TM would. C code does not need a UTM
    to execute it (a TM does) and if he really meant that he had C code all
    along, does anyone think he could write a UTM for C in "a week or two"?
    It is so patently obvious that he just had a manic episode in Dec 2018
    that caused him to post all those exuberant claims, and so patently
    obvious that he simply can't admit being wrong about anything that I
    ended up feeling rather sorry for him -- until the insults started up
    again.
    That's another reason I don't post much here. I really don't feel like
    being insulted by somebody of PO's intellectual stature.
    Have a good Sunday!
    --
    Ben.
    The error of all of the halting problem proofs is
    that they require a Turing machine halt decider to
    report on the behavior of a directly executed
    Turing machine.
    It is common knowledge that no Turing machine decider
    can take another directly executing Turing machine as
    an input, thus the above requirement is not precisely
    correct.
    When we correct the error of this incorrect requirement
    it becomes a Turing machine decider indirectly reports
    on the behavior of a directly executing Turing machine
    through the proxy of a finite string description of this
    machine.
    Now I have proven and corrected the error of all of the
    halting problem proofs.
    No you haven't, the subject matter is too far beyond your intellectual capacity.
    How about now apologising to Ben for the lies you told him back in 2018
    and 2019, and for all the insults you threw at him? You're in his
    killfile, but I can relay suitable text to him.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --
    Alan Mackenzie (Nuremberg, Germany).
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Sat Jul 26 14:00:57 2025
    From Newsgroup: comp.ai.philosophy

    On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
    In comp.theory olcott <polcott333@gmail.com> wrote:

    The error of all of the halting problem proofs is
    that they require a Turing machine halt decider to
    report on the behavior of a directly executed
    Turing machine.


    Whether or not machine M halts on input i

    It is common knowledge that no Turing machine decider
    can take another directly executing Turing machine as
    an input, thus the above requirement is not precisely
    correct.

    When we correct the error of this incorrect requirement
    it becomes a Turing machine decider indirectly reports
    on the behavior of a directly executing Turing machine
    through the proxy of a finite string description of this
    machine.

    Now I have proven and corrected the error of all of the
    halting problem proofs.

    No you haven't, the subject matter is too far beyond your intellectual capacity.


    If that were true then you could point out at least one
    single error in EXACTLY what I just said immediately above.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Sat Jul 26 14:26:27 2025
    From Newsgroup: comp.ai.philosophy

    On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
    In comp.theory olcott <polcott333@gmail.com> wrote:

    The error of all of the halting problem proofs is
    that they require a Turing machine halt decider to
    report on the behavior of a directly executed
    Turing machine.

    It is common knowledge that no Turing machine decider
    can take another directly executing Turing machine as
    an input, thus the above requirement is not precisely
    correct.

    When we correct the error of this incorrect requirement
    it becomes a Turing machine decider indirectly reports
    on the behavior of a directly executing Turing machine
    through the proxy of a finite string description of this
    machine.

    Now I have proven and corrected the error of all of the
    halting problem proofs.

    No you haven't, the subject matter is too far beyond your intellectual capacity.


    It only seems to you that I lack understanding because
    you are so sure that I must be wrong that you make sure
    to totally ignore the subtle nuances of meaning that prove
    I am correct.

    No Turing machine based (at least partial) halt decider
    can possibly *directly* report on the behavior of any
    directly executing Turing machine. The best that any
    of them can possibly do is indirectly report on this behavior
    through the proxy of a finite string machine description.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mr Flibble@flibble@red-dwarf.jmc.corp to comp.theory,sci.logic,comp.ai.philosophy on Sat Jul 26 19:52:05 2025
    From Newsgroup: comp.ai.philosophy

    On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:

    On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
    In comp.theory olcott <polcott333@gmail.com> wrote:

    The error of all of the halting problem proofs is that they require a
    Turing machine halt decider to report on the behavior of a directly
    executed Turing machine.

    It is common knowledge that no Turing machine decider can take another
    directly executing Turing machine as an input, thus the above
    requirement is not precisely correct.

    When we correct the error of this incorrect requirement it becomes a
    Turing machine decider indirectly reports on the behavior of a
    directly executing Turing machine through the proxy of a finite string
    description of this machine.

    Now I have proven and corrected the error of all of the halting
    problem proofs.

    No you haven't, the subject matter is too far beyond your intellectual
    capacity.


    It only seems to you that I lack understanding because you are so sure
    that I must be wrong that you make sure to totally ignore the subtle
    nuances of meaning that proves I am correct.

    No Turing machine based (at least partial) halt decider can possibly *directly* report on the behavior of any directly executing Turing
    machine. The best that any of them can possibly do is indirectly report
    on this behavior through the proxy of a finite string machine
    description.

    Partial decidability is not a hard problem.

    /Flibble
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Sat Jul 26 14:58:21 2025
    From Newsgroup: comp.ai.philosophy

    On 7/26/2025 2:52 PM, Mr Flibble wrote:
    On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:

    On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
    In comp.theory olcott <polcott333@gmail.com> wrote:

    The error of all of the halting problem proofs is that they require a
    Turing machine halt decider to report on the behavior of a directly
    executed Turing machine.

    It is common knowledge that no Turing machine decider can take another
    directly executing Turing machine as an input, thus the above
    requirement is not precisely correct.

    When we correct the error of this incorrect requirement it becomes a
    Turing machine decider indirectly reports on the behavior of a
    directly executing Turing machine through the proxy of a finite string
    description of this machine.

    Now I have proven and corrected the error of all of the halting
    problem proofs.

    No you haven't, the subject matter is too far beyond your intellectual
    capacity.


    It only seems to you that I lack understanding because you are so sure
    that I must be wrong that you make sure to totally ignore the subtle
    nuances of meaning that proves I am correct.

    No Turing machine based (at least partial) halt decider can possibly
    *directly* report on the behavior of any directly executing Turing
    machine. The best that any of them can possibly do is indirectly report
    on this behavior through the proxy of a finite string machine
    description.

    Partial decidability is not a hard problem.

    /Flibble

    My point is that all of the halting problem proofs
    are wrong when they require a Turing machine decider
    H to report on the behavior of machine M on input i
    because machine M is not in the domain of any Turing
    machine decider. Only finite strings such as ⟨M⟩ the
    Turing machine description of machine M are its
    domain.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Sat Jul 26 17:49:48 2025
    From Newsgroup: comp.ai.philosophy

    On 7/26/2025 2:58 PM, olcott wrote:
    On 7/26/2025 2:52 PM, Mr Flibble wrote:
    On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:

    On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
    In comp.theory olcott <polcott333@gmail.com> wrote:

    The error of all of the halting problem proofs is that they require a
    Turing machine halt decider to report on the behavior of a directly
    executed Turing machine.

    It is common knowledge that no Turing machine decider can take another
    directly executing Turing machine as an input, thus the above
    requirement is not precisely correct.

    When we correct the error of this incorrect requirement it becomes a
    Turing machine decider indirectly reports on the behavior of a
    directly executing Turing machine through the proxy of a finite string
    description of this machine.

    Now I have proven and corrected the error of all of the halting
    problem proofs.

    No you haven't, the subject matter is too far beyond your intellectual
    capacity.


    It only seems to you that I lack understanding because you are so sure
    that I must be wrong that you make sure to totally ignore the subtle
    nuances of meaning that proves I am correct.

    No Turing machine based (at least partial) halt decider can possibly
    *directly* report on the behavior of any directly executing Turing
    machine.  The best that any of them can possibly do is indirectly report
    on this behavior through the proxy of a finite string machine
    description.

    Partial decidability is not a hard problem.

    /Flibble

    My point is that all of the halting problem proofs
    are wrong when they require a Turing machine decider
    H to report on the behavior of machine M on input i
    because machine M is not in the domain of any Turing
    machine decider. Only finite strings such as ⟨M⟩ the
    Turing machine description of machine M are its
    domain.


    Definition of Turing Machine Ĥ
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
    if Ĥ applied to ⟨Ĥ⟩ halts, and // incorrect requirement
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
    if Ĥ applied to ⟨Ĥ⟩ does not halt. // incorrect requirement

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...

    The fact that the correctly simulated input
    specifies recursive simulation prevents the
    simulated ⟨Ĥ⟩ from ever reaching its simulated
    final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.

    This is not contradicted by the fact that
    Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
    the domain of every Turing machine computed function.
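
    A rough C analogue of steps (a) through (f) above, offered only as an
    illustration with invented names: the simulation in embedded_H is modeled
    as a plain call, and the depth counter exists solely so that the demo
    itself terminates; a pure simulator with no stopping rule would recurse
    without bound.

    #include <stdio.h>

    /* Toy model of (a)-(f): H_hat hands its own "description" (here, its own
       function pointer) to embedded_H, which "simulates" it by calling it.   */
    static int depth = 0;

    void embedded_H(void (*desc)(void));

    void H_hat(void)                  /* plays the role of Ĥ applied to ⟨Ĥ⟩   */
    {
        embedded_H(H_hat);            /* (a)+(b): copy input, invoke embedded_H */
    }

    void embedded_H(void (*desc)(void))
    {
        if (++depth > 5) {            /* artificial cutoff so the example halts */
            printf("still simulating at depth %d ...\n", depth);
            return;
        }
        desc();                       /* (c)-(f): "simulate" the input, which recurses */
    }

    int main(void) { H_hat(); return 0; }
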
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Sat Jul 26 18:08:21 2025
    From Newsgroup: comp.ai.philosophy

    On 7/26/2025 5:49 PM, olcott wrote:
    On 7/26/2025 2:58 PM, olcott wrote:
    On 7/26/2025 2:52 PM, Mr Flibble wrote:
    On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:

    On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
    In comp.theory olcott <polcott333@gmail.com> wrote:

    The error of all of the halting problem proofs is that they require a
    Turing machine halt decider to report on the behavior of a directly
    executed Turing machine.

    It is common knowledge that no Turing machine decider can take
    another
    directly executing Turing machine as an input, thus the above
    requirement is not precisely correct.

    When we correct the error of this incorrect requirement it becomes a
    Turing machine decider indirectly reports on the behavior of a
    directly executing Turing machine through the proxy of a finite
    string
    description of this machine.

    Now I have proven and corrected the error of all of the halting
    problem proofs.

    No you haven't, the subject matter is too far beyond your intellectual
    capacity.


    It only seems to you that I lack understanding because you are so sure
    that I must be wrong that you make sure to totally ignore the subtle
    nuances of meaning that proves I am correct.

    No Turing machine based (at least partial) halt decider can possibly
    *directly* report on the behavior of any directly executing Turing
    machine.  The best that any of them can possibly do is indirectly
    report
    on this behavior through the proxy of a finite string machine
    description.

    Partial decidability is not a hard problem.

    /Flibble

    My point is that all of the halting problem proofs
    are wrong when they require a Turing machine decider
    H to report on the behavior of machine M on input i
    because machine M is not in the domain of any Turing
    machine decider. Only finite strings such as ⟨M⟩ the
    Turing machine description of machine M are its
    domain.


    Definition of Turing Machine Ĥ
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
      if Ĥ applied to ⟨Ĥ⟩ halts, and        // incorrect requirement
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
      if Ĥ applied to ⟨Ĥ⟩ does not halt.    // incorrect requirement

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...

    The fact that the correctly simulated input
    specifies recursive simulation prevents the
    simulated ⟨Ĥ⟩ from ever reaching its simulated
    final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.

    This is not contradicted by the fact that
    Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
    the domain of every Turing machine computed function.


    In the atypical case where the behavior of the simulation
    of an input to a potential halt decider disagrees with the
    behavior of the direct execution of the underlying machine
    (because this input calls this same simulating decider) it
    is the behavior of the input that rules because deciders
    compute the mapping for their inputs.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Sat Jul 26 19:16:32 2025
    From Newsgroup: comp.ai.philosophy

    On 7/26/25 1:59 PM, olcott wrote:
    On 7/26/2025 12:31 PM, Alan Mackenzie wrote:
    Hello, Ben.

    Ben Bacarisse <ben@bsb.me.uk> wrote:
    Alan Mackenzie <acm@muc.de> writes:

    [ .... ]

    In comp.theory olcott <polcott333@gmail.com> wrote:
    On 7/21/2025 10:52 AM, Alan Mackenzie wrote:
    ...
    More seriously, you told Ben Bacarisse on this newsgroup that you had
    fully worked out Turing machines which broke a proof of the Halting
    Theorem.  It transpired you were lying.

    Just for the record, here is what PO said late 2018 early 2019:

    On 12/14/2018 5:27 PM, peteolcott wrote that he had

       "encoded all of the exact TMD [Turing Machine Description]
       instructions of the Linz Turing machine H that correctly decides
       halting for its fully encoded input pair: (Ĥ, Ĥ)."

    Date: Sat, 15 Dec 2018 11:03:21 -0600

       "Everyone has claimed that H on input pair (Ĥ, Ĥ) meeting the Linz
       specs does not exist. I now have a fully encoded pair of Turing
       Machines H / Ĥ proving them wrong."

    Date: Sat, 15 Dec 2018 01:28:22 -0600

       "I now have an actual H that decides actual halting for an actual (Ĥ,
       Ĥ) input pair.  I have to write the UTM to execute this code, that
       should not take very long.  The key thing is the H and Ĥ are 100%
       fully encoded as actual Turing machines."

    Date: Sun, 16 Dec 2018 09:02:50 -0600

       "I am waiting to encode the UTM in C++ so that I can actually execute
       H on the input pair: (Ĥ, Ĥ). This should take a week or two [...] it
       is exactly and precisely the Peter Linz H and Ĥ, with H actually
       deciding input pair: (Ĥ, Ĥ)"

    Date: Fri, 11 Jan 2019 16:24:36 -0600

       "I provide the exact ⊢* wildcard states after the Linz H.q0 and after
       Ĥ.qx (Linz incorrectly uses q0 twice) showing exactly how the actual
       Linz H would correctly decide the actual Linz (Ĥ, Ĥ)."

    Thanks for clarifying that.

    I think I can understand a bit what it must feel like to be on the
    receiving end of all this.  Firstly you know through training that what
    you're being told is definitely false, but on the other hand you don't
    like to believe that somebody is lying; somehow you give them the
    (temporary) benefit of the doubt.  Then comes the depressing restoration
    of truth and reality.

    When the topic came up again for
    discussion, you failed to deny writing the original lie.


    That is the closest thing to a lie that I ever said.
    When I said this I was actually meaning that I had
    fully operational C code that is equivalent to a
    Turing Machine.

    I think it was a full blown lie intended to deceive.  Did you ever
    apologise to Ben for leading him up the garden path like that?

    No, never.  In fact he kept insulting me until it became so egregious
    that I decided to have nothing more to do with him.

    Somehow, that doesn't surprise me.  I only post a little on this group
    now (I never really posted much more) for similar reasons.  I care about
    the truth, including mathematical truth; although I've never specialised
    in computation theory or mathematical logic, I care when these are
    falsified by ignorant posters.

    What really got my goat this time around was PO stridently and
    hypocritically accusing others of being liars, given his own record.

    What he did do was take months to slowly walk back the claim he made in
    December 2018.  H and Ĥ became "virtual machines" and then started to be
    "sufficiently equivalent" to Linz's H and Ĥ rather than the "exactly and
    precisely the Peter Linz H and Ĥ".  By Sep 2020 he didn't even have it
    anymore:

       "I will soon have a partial halt decider sufficiently equivalent to
       the Linz H correctly deciding halting on the Linz Ĥ"

    It took nearly two years to walk back the clear and explicit claim to
    this vague and ill-defined claim of not having something!

    Yes.  I've watched the latter part of this process.

    You have not and never have had "fully operational C code" that
    breaks a
    proof of the Halting Theorem.  To say you had this, when you clearly
    didn't, was a lie.

    He also tried to pretend that the C code (which, as you say, he didn't
    have) is what he always meant when he wrote the words I quoted above.  I
    defy anyone to read those words with PO's later claim that he meant C
    code all along and not conclude that he was just lying again to try to
    save some little face.

    What amazes me is he somehow thinks that theorems don't apply to him.
    Of course, he doesn't understand what a theorem is, somehow construing
    it as somebody's opinion.  If it's just opinion, then his contrasting
    opinion must be "just as good".  Or something like that.

    C code does not have "TMD instructions" that can be encoded.  TMs (as in
    Linz) do.  When executed, C code has no "exact ⊢* wildcard states after
    the Linz H.q0" for PO to show.  A TM would.  C code does not need a UTM
    to execute it (a TM does) and if he really meant that he had C code all
    along, does anyone think he could write a UTM for C in "a week or two"?

    It is so patently obvious that he just had a manic episode in Dec 2018
    that caused him to post all those exuberant claims, and so patently
    obvious that he simply can't admit being wrong about anything that I
    ended up feeling rather sorry for him -- until the insults started up
    again.

    That's another reason I don't post much, here.  I really don't feel like
    being insulted by somebody of PO's intellectual stature.

    Have a good Sunday!

    --
    Ben.


    The error of all of the halting problem proofs is
    that they require a Turing machine halt decider to
    report on the behavior of a directly executed
    Turing machine.

    Because that *IS* the definition of the problem.


    It is common knowledge that no Turing machine decider
    can take another directly executing Turing machine as
    an input, thus the above requirement is not precisely
    correct.

    And where is that "common knowledge" coming from? Since we CAN
    represent a Turing Machine with enough detail for a UTM to recreate the behavior, the requirement is valid.
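
    A small self-contained sketch of that point, with an entirely made-up
    machine: the machine is written down as a finite transition table (its
    representation), and a generic interpreter loop recreates its behavior
    from the table alone. Nothing here is the Linz machine from the thread.

    #include <stdio.h>

    /* Hedged sketch: the struct and the toy machine below are invented.     */
    struct Rule { int state; char read; char write; int move; int next; };

    /* "description" of a toy TM: overwrite 0s with 1s, halt on blank '_'    */
    static const struct Rule desc[] = {
        {0, '0', '1', +1,  0},
        {0, '1', '1', +1,  0},
        {0, '_', '_',  0, -1},   /* next == -1 plays the role of a halt state */
    };

    /* a generic interpreter: it knows nothing about the machine beyond desc */
    int run_from_description(char *tape, int len)
    {
        int state = 0, head = 0, steps = 0;
        while (state != -1 && steps++ < 1000) {        /* safety bound only   */
            char c = (head >= 0 && head < len) ? tape[head] : '_';
            int i, matched = 0;
            for (i = 0; i < (int)(sizeof desc / sizeof desc[0]); i++) {
                if (desc[i].state == state && desc[i].read == c) {
                    if (head >= 0 && head < len) tape[head] = desc[i].write;
                    head += desc[i].move;
                    state = desc[i].next;
                    matched = 1;
                    break;
                }
            }
            if (!matched) break;                       /* no rule: just stop   */
        }
        return state == -1;                            /* 1 = halted normally  */
    }

    int main(void)
    {
        char tape[] = "0101_";
        printf("halted=%d tape=%s\n", run_from_description(tape, 5), tape);
        return 0;
    }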


    When we correct the error of this incorrect requirement
    it becomes a Turing machine decider indirectly reports
    on the behavior of a directly executing Turing machine
    through the proxy of a finite string description of this
    machine.

    But the only "error" is you LIE that a Turing Machine can report on
    behavior of the machine properly represented to it.

    All you are doing is proving you are just a stupid pathological liar.

    The fact that you don't understand how to do it doesn't mean it
    can't be done. It just proves your stupidity, since it has been proven
    that it can be done.

    What can't be done is to guarantee that behavior which requires an
    unbounded number of steps to be shown can be determined in a
    bounded number of steps.
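
    A hedged sketch of that bounded-versus-unbounded distinction, with invented
    names: a step-limited runner can soundly answer "halts" whenever the run
    finishes inside its budget, but exhausting the budget only yields
    "unknown", since non-halting is never confirmed by any finite number of
    observed steps.

    #include <stdio.h>

    /* Run a state-transition function for at most `budget` steps; a negative
       state models "reached a halt state".                                   */
    enum Verdict { HALTS, UNKNOWN };

    enum Verdict run_with_budget(long (*step)(long), long state, long budget)
    {
        while (budget-- > 0) {
            state = step(state);
            if (state < 0)
                return HALTS;
        }
        return UNKNOWN;           /* budget exhausted: not a non-halting verdict */
    }

    static long countdown(long s) { return (s - 1 == 0) ? -1 : s - 1; }  /* halts   */
    static long forever(long s)   { return s + 1; }                      /* doesn't */

    int main(void)
    {
        printf("countdown: %s\n",
               run_with_budget(countdown, 10, 100) == HALTS ? "halts" : "unknown");
        printf("forever:   %s\n",
               run_with_budget(forever, 0, 100) == HALTS ? "halts" : "unknown");
        return 0;
    }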


    Now I have proven and corrected the error of all of the
    halting problem proofs.



    No, you have proven that you have no idea what is correct and what is
    false, and thus turned yourself into a pathological liar, a liar that
    has lost the ability to tell the differene between truth and lies.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Sat Jul 26 19:18:03 2025
    From Newsgroup: comp.ai.philosophy

    On 7/26/25 3:58 PM, olcott wrote:
    On 7/26/2025 2:52 PM, Mr Flibble wrote:
    On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:

    On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
    In comp.theory olcott <polcott333@gmail.com> wrote:

    The error of all of the halting problem proofs is that they require a
    Turing machine halt decider to report on the behavior of a directly
    executed Turing machine.

    It is common knowledge that no Turing machine decider can take another
    directly executing Turing machine as an input, thus the above
    requirement is not precisely correct.

    When we correct the error of this incorrect requirement it becomes a
    Turing machine decider indirectly reports on the behavior of a
    directly executing Turing machine through the proxy of a finite string
    description of this machine.

    Now I have proven and corrected the error of all of the halting
    problem proofs.

    No you haven't, the subject matter is too far beyond your intellectual
    capacity.


    It only seems to you that I lack understanding because you are so sure
    that I must be wrong that you make sure to totally ignore the subtle
    nuances of meaning that proves I am correct.

    No Turing machine based (at least partial) halt decider can possibly
    *directly* report on the behavior of any directly executing Turing
    machine.  The best that any of them can possibly do is indirectly report
    on this behavior through the proxy of a finite string machine
    description.

    Partial decidability is not a hard problem.

    /Flibble

    My point is that all of the halting problem proofs
    are wrong when they require a Turing machine decider
    H to report on the behavior of machine M on input i
    because machine M is not in the domain of any Turing
    machine decider. Only finite strings such as ⟨M⟩ the
    Turing machine description of machine M are its
    domain.


    No, you just prove that you are too stupid to understand how
    representations work.

    As has been pointed out, without representation, you can't do anything
    you normally use your computer for, so it is essential to understand it.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Sat Jul 26 18:28:48 2025
    From Newsgroup: comp.ai.philosophy

    On 7/26/2025 5:49 PM, olcott wrote:
    On 7/26/2025 2:58 PM, olcott wrote:
    On 7/26/2025 2:52 PM, Mr Flibble wrote:
    On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:

    On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
    In comp.theory olcott <polcott333@gmail.com> wrote:

    The error of all of the halting problem proofs is that they require a >>>>>> Turing machine halt decider to report on the behavior of a directly >>>>>> executed Turing machine.

    It is common knowledge that no Turing machine decider can take
    another
    directly executing Turing machine as an input, thus the above
    requirement is not precisely correct.

    When we correct the error of this incorrect requirement it becomes a >>>>>> Turing machine decider indirectly reports on the behavior of a
    directly executing Turing machine through the proxy of a finite
    string
    description of this machine.

    Now I have proven and corrected the error of all of the halting
    problem proofs.

    No you haven't, the subject matter is too far beyond your intellectual >>>>> capacity.


    It only seems to you that I lack understanding because you are so sure >>>> that I must be wrong that you make sure to totally ignore the subtle
    nuances of meaning that proves I am correct.

    No Turing machine based (at least partial) halt decider can possibly
    *directly* report on the behavior of any directly executing Turing
    machine.  The best that any of them can possibly do is indirectly
    report
    on this behavior through the proxy of a finite string machine
    description.

    Partial decidability is not a hard problem.

    /Flibble

    My point is that all of the halting problem proofs
    are wrong when they require a Turing machine decider
    H to report on the behavior of machine M on input i
    because machine M is not in the domain of any Turing
    machine decider. Only finite strings such as ⟨M⟩ the
    Turing machine description of machine M are its
    domain.


    Definition of Turing Machine Ĥ
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
      if Ĥ applied to ⟨Ĥ⟩ halts, and        // incorrect requirement
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
      if Ĥ applied to ⟨Ĥ⟩ does not halt.    // incorrect requirement

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...

    The fact that the correctly simulated input
    specifies recursive simulation prevents the
    simulated ⟨Ĥ⟩ from ever reaching its simulated
    final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.

    This is not contradicted by the fact that
    Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
    the domain of every Turing machine computed function.


    In the atypical case where the behavior of the simulation
    of an input to a potential halt decider disagrees with the
    behavior of the direct execution of the underlying machine
    (because this input calls this same simulating decider) it
    is the behavior of the input that rules because deciders
    compute the mapping *FROM* their inputs.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Sat Jul 26 18:30:29 2025
    From Newsgroup: comp.ai.philosophy

    On 7/26/2025 6:18 PM, Richard Damon wrote:
    On 7/26/25 3:58 PM, olcott wrote:
    On 7/26/2025 2:52 PM, Mr Flibble wrote:
    On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:

    On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
    In comp.theory olcott <polcott333@gmail.com> wrote:

    The error of all of the halting problem proofs is that they require a >>>>>> Turing machine halt decider to report on the behavior of a directly >>>>>> executed Turing machine.

    It is common knowledge that no Turing machine decider can take
    another
    directly executing Turing machine as an input, thus the above
    requirement is not precisely correct.

    When we correct the error of this incorrect requirement it becomes a >>>>>> Turing machine decider indirectly reports on the behavior of a
    directly executing Turing machine through the proxy of a finite
    string
    description of this machine.

    Now I have proven and corrected the error of all of the halting
    problem proofs.

    No you haven't, the subject matter is too far beyond your intellectual >>>>> capacity.


    It only seems to you that I lack understanding because you are so sure >>>> that I must be wrong that you make sure to totally ignore the subtle
    nuances of meaning that proves I am correct.

    No Turing machine based (at least partial) halt decider can possibly
    *directly* report on the behavior of any directly executing Turing
    machine.  The best that any of them can possibly do is indirectly
    report
    on this behavior through the proxy of a finite string machine
    description.

    Partial decidability is not a hard problem.

    /Flibble

    My point is that all of the halting problem proofs
    are wrong when they require a Turing machine decider
    H to report on the behavior of machine M on input i
    because machine M is not in the domain of any Turing
    machine decider. Only finite strings such as ⟨M⟩ the
    Turing machine description of machine M are its
    domain.


    No, you just prove that you are too stupid to understand how
    representations work.


    No, it is that you are too stupid to understand WHY
    they don't always work.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Sat Jul 26 19:35:30 2025
    From Newsgroup: comp.ai.philosophy

    On 7/26/25 7:08 PM, olcott wrote:
    On 7/26/2025 5:49 PM, olcott wrote:
    On 7/26/2025 2:58 PM, olcott wrote:
    On 7/26/2025 2:52 PM, Mr Flibble wrote:
    On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:

    On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
    In comp.theory olcott <polcott333@gmail.com> wrote:

    The error of all of the halting problem proofs is that they
    require a
    Turing machine halt decider to report on the behavior of a directly >>>>>>> executed Turing machine.

    It is common knowledge that no Turing machine decider can take
    another
    directly executing Turing machine as an input, thus the above
    requirement is not precisely correct.

    When we correct the error of this incorrect requirement it becomes a >>>>>>> Turing machine decider indirectly reports on the behavior of a
    directly executing Turing machine through the proxy of a finite >>>>>>> string
    description of this machine.

    Now I have proven and corrected the error of all of the halting
    problem proofs.

    No you haven't, the subject matter is too far beyond your
    intellectual
    capacity.


    It only seems to you that I lack understanding because you are so sure >>>>> that I must be wrong that you make sure to totally ignore the subtle >>>>> nuances of meaning that proves I am correct.

    No Turing machine based (at least partial) halt decider can possibly >>>>> *directly* report on the behavior of any directly executing Turing
    machine.  The best that any of them can possibly do is indirectly
    report
    on this behavior through the proxy of a finite string machine
    description.

    Partial decidability is not a hard problem.

    /Flibble

    My point is that all of the halting problem proofs
    are wrong when they require a Turing machine decider
    H to report on the behavior of machine M on input i
    because machine M is not in the domain of any Turing
    machine decider. Only finite strings such as ⟨M⟩ the
    Turing machine description of machine M are its
    domain.


    Definition of Turing Machine Ĥ
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
       if Ĥ applied to ⟨Ĥ⟩ halts, and        // incorrect requirement
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
       if Ĥ applied to ⟨Ĥ⟩ does not halt.    // incorrect requirement
    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...

    The fact that the correctly simulated input
    specifies recursive simulation prevents the
    simulated ⟨Ĥ⟩ from ever reaching its simulated
    final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.

    This is not contradicted by the fact that
    Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
    the domain of every Turing machine computed function.


    In the atypical case where the behavior of the simulation
    of an input to a potential halt decider disagrees with the
    behavior of the direct execution of the underlying machine
    (because this input calls this same simulating decider) it
    is the behavior of the input that rules because deciders
    compute the mapping for their inputs.


    Nope, just more of your lies.

    The behavior of an input to a halt decider is DEFINED in all cases to be
    the behavior of the machine the input represents, or equivalently, the
    behavior of a UTM that is based on the same representation rules as the
    decider on that exact input.

    The input must be the representation of an actual program, which means
    it includes ALL its code, and thus for H^/D/DD/DDD includes all the code
    of H/HH/HHH, and thus is a different input when built for different
    deciders.
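
    A hedged illustration of that last point, using invented strings: because
    the input handed to a decider must contain the complete text of the
    program, a counterexample built against one decider and the "same"
    counterexample built against a different decider are two different finite
    strings.

    #include <stdio.h>
    #include <string.h>

    /* The "input" given to a decider is the complete text of the program to
       be judged, so a counterexample D embeds the text of the decider it
       calls.  All strings below are made up for illustration.               */
    static void build_D(char *out, size_t n, const char *decider_text)
    {
        snprintf(out, n, "D { invert(decide(self)); }\n%s", decider_text);
    }

    int main(void)
    {
        char d_for_HHH[256], d_for_HHH2[256];
        build_D(d_for_HHH,  sizeof d_for_HHH,  "HHH  { simulate, abort on repeat }");
        build_D(d_for_HHH2, sizeof d_for_HHH2, "HHH2 { simulate, never abort }");
        printf("same input string? %s\n",
               strcmp(d_for_HHH, d_for_HHH2) == 0 ? "yes" : "no");   /* prints "no" */
        return 0;
    }
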
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Sat Jul 26 19:42:08 2025
    From Newsgroup: comp.ai.philosophy

    On 7/26/25 7:30 PM, olcott wrote:
    On 7/26/2025 6:18 PM, Richard Damon wrote:
    On 7/26/25 3:58 PM, olcott wrote:
    On 7/26/2025 2:52 PM, Mr Flibble wrote:
    On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:

    On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
    In comp.theory olcott <polcott333@gmail.com> wrote:

    The error of all of the halting problem proofs is that they
    require a
    Turing machine halt decider to report on the behavior of a directly >>>>>>> executed Turing machine.

    It is common knowledge that no Turing machine decider can take
    another
    directly executing Turing machine as an input, thus the above
    requirement is not precisely correct.

    When we correct the error of this incorrect requirement it becomes a >>>>>>> Turing machine decider indirectly reports on the behavior of a
    directly executing Turing machine through the proxy of a finite >>>>>>> string
    description of this machine.

    Now I have proven and corrected the error of all of the halting
    problem proofs.

    No you haven't, the subject matter is too far beyond your
    intellectual
    capacity.


    It only seems to you that I lack understanding because you are so sure >>>>> that I must be wrong that you make sure to totally ignore the subtle >>>>> nuances of meaning that proves I am correct.

    No Turing machine based (at least partial) halt decider can possibly >>>>> *directly* report on the behavior of any directly executing Turing
    machine.  The best that any of them can possibly do is indirectly
    report
    on this behavior through the proxy of a finite string machine
    description.

    Partial decidability is not a hard problem.

    /Flibble

    My point is that all of the halting problem proofs
    are wrong when they require a Turing machine decider
    H to report on the behavior of machine M on input i
    because machine M is not in the domain of any Turing
    machine decider. Only finite strings such as ⟨M⟩ the
    Turing machine description of machine M are its
    domain.


    No, you just prove that you are too stupid to understand how
    representations work.


    No is it that you are too stupid to understand WHY
    they don't always work.


    That just means you defined an incorrect representation.

    That you can build a UTM that uses it is a test for a correct
    representation.

    It seems your logic is based on you reserving the right to just LIE about things.

    If you want to try to show how you can't represent some input, try to
    actually prove it.

    Note, just because one attempt doesn't work (like your idea of omitting
    some of the code) doesn't show that representations don't work, only
    that you were bad at designing a representation.

    You need to show an actual Turing Machine that can't be represented, and
    thus no UTM can produce it, but it CAN be executed.

    This has been proven impossible, but if you don't believe that proof,
    find the counter example.

    Remember, it must be an actual Turing Machine or perhaps an equivalent,
    but that means it includes *ALL* of its code that it uses, and ONLY
    looks at its input.

    Go ahead, try to do that.

    Don't make it be like your last fully encoded Turing Machine that you
    later admitted you lied about, because you didn't know what that meant,
    in effect admitting that you lie about what you think you
    know.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Sat Jul 26 18:43:57 2025
    From Newsgroup: comp.ai.philosophy

    On 7/26/2025 6:35 PM, Richard Damon wrote:
    On 7/26/25 7:08 PM, olcott wrote:
    On 7/26/2025 5:49 PM, olcott wrote:
    On 7/26/2025 2:58 PM, olcott wrote:
    On 7/26/2025 2:52 PM, Mr Flibble wrote:
    On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:

    On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
    In comp.theory olcott <polcott333@gmail.com> wrote:

    The error of all of the halting problem proofs is that they
    require a
    Turing machine halt decider to report on the behavior of a directly >>>>>>>> executed Turing machine.

    It is common knowledge that no Turing machine decider can take >>>>>>>> another
    directly executing Turing machine as an input, thus the above
    requirement is not precisely correct.

    When we correct the error of this incorrect requirement it
    becomes a
    Turing machine decider indirectly reports on the behavior of a >>>>>>>> directly executing Turing machine through the proxy of a finite >>>>>>>> string
    description of this machine.

    Now I have proven and corrected the error of all of the halting >>>>>>>> problem proofs.

    No you haven't, the subject matter is too far beyond your
    intellectual
    capacity.


    It only seems to you that I lack understanding because you are so >>>>>> sure
    that I must be wrong that you make sure to totally ignore the subtle >>>>>> nuances of meaning that proves I am correct.

    No Turing machine based (at least partial) halt decider can possibly >>>>>> *directly* report on the behavior of any directly executing Turing >>>>>> machine.  The best that any of them can possibly do is indirectly >>>>>> report
    on this behavior through the proxy of a finite string machine
    description.

    Partial decidability is not a hard problem.

    /Flibble

    My point is that all of the halting problem proofs
    are wrong when they require a Turing machine decider
    H to report on the behavior of machine M on input i
    because machine M is not in the domain of any Turing
    machine decider. Only finite strings such as ⟨M⟩ the
    Turing machine description of machine M are its
    domain.


    Definition of Turing Machine Ĥ
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
       if Ĥ applied to ⟨Ĥ⟩ halts, and        // incorrect requirement
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
       if Ĥ applied to ⟨Ĥ⟩ does not halt.    // incorrect requirement

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...

    The fact that the correctly simulated input
    specifies recursive simulation prevents the
    simulated ⟨Ĥ⟩ from ever reaching its simulated
    final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.

    This is not contradicted by the fact that
    Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
    the domain of every Turing machine computed function.


    In the atypical case where the behavior of the simulation
    of an input to a potential halt decider disagrees with the
    behavior of the direct execution of the underlying machine
    (because this input calls this same simulating decider) it
    is the behavior of the input that rules because deciders
    compute the mapping for their inputs.


    Nope, just more of your lies.

    The behavior of an input to a halt decider is DEFINED in all cases to be
    the behavior of the machine the input represents,

    Yet I have conclusively proven otherwise and
    you are too stupid to understand the proof.

    You are so stupid that you think you can get
    away with disagreeing with the x86 language.

    _DDD()
    [00002192] 55         push ebp
    [00002193] 8bec       mov ebp,esp
    [00002195] 6892210000 push 00002192  // push DDD
    [0000219a] e833f4ffff call 000015d2  // call HHH
    [0000219f] 83c404     add esp,+04
    [000021a2] 5d         pop ebp
    [000021a3] c3         ret
    Size in bytes:(0018) [000021a3]

    DDD simulated by HHH according to the rules of the
    x86 language does not fucking halt you fucking moron.
    If any definition says otherwise then this definition
    is fucked up.
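
    For readers without the rest of the code, a hedged C-level reading of the
    listing above: DDD pushes its own address, calls HHH, and returns. The
    real HHH in the disputed code is a simulating halt decider whose body is
    not reproduced in this excerpt; the stub below is only a placeholder so
    the fragment compiles and runs.

    #include <stdio.h>

    /* Placeholder only: the actual HHH is a simulating halt decider whose
       body is not shown in this thread excerpt.                             */
    void HHH(void (*p)(void))
    {
        (void)p;
        printf("HHH was handed DDD's address\n");
    }

    void DDD(void)
    {
        HHH(DDD);       /* corresponds to "push 00002192 / call 000015d2"    */
    }

    int main(void) { DDD(); return 0; }
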
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Sat Jul 26 21:30:26 2025
    From Newsgroup: comp.ai.philosophy

    On 7/26/25 7:43 PM, olcott wrote:
    On 7/26/2025 6:35 PM, Richard Damon wrote:
    On 7/26/25 7:08 PM, olcott wrote:
    On 7/26/2025 5:49 PM, olcott wrote:
    On 7/26/2025 2:58 PM, olcott wrote:
    On 7/26/2025 2:52 PM, Mr Flibble wrote:
    On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:

    On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
    In comp.theory olcott <polcott333@gmail.com> wrote:

    The error of all of the halting problem proofs is that they
    require a Turing machine halt decider to report on the behavior
    of a directly executed Turing machine.

    It is common knowledge that no Turing machine decider can take
    another directly executing Turing machine as an input, thus the
    above requirement is not precisely correct.

    When we correct the error of this incorrect requirement it
    becomes a Turing machine decider indirectly reports on the
    behavior of a directly executing Turing machine through the
    proxy of a finite string description of this machine.

    Now I have proven and corrected the error of all of the halting
    problem proofs.

    No you haven't, the subject matter is too far beyond your
    intellectual capacity.


    It only seems to you that I lack understanding because you are so
    sure that I must be wrong that you make sure to totally ignore the
    subtle nuances of meaning that proves I am correct.

    No Turing machine based (at least partial) halt decider can possibly
    *directly* report on the behavior of any directly executing Turing
    machine.  The best that any of them can possibly do is indirectly
    report on this behavior through the proxy of a finite string machine
    description.

    Partial decidability is not a hard problem.

    /Flibble

    My point is that all of the halting problem proofs
    are wrong when they require a Turing machine decider
    H to report on the behavior of machine M on input i
    because machine M is not in the domain of any Turing
    machine decider. Only finite strings such as ⟨M⟩ the
    Turing machine description of machine M are its
    domain.


    Definition of Turing Machine Ĥ
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
       if Ĥ applied to ⟨Ĥ⟩ halts, and        // incorrect requirement
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
       if Ĥ applied to ⟨Ĥ⟩ does not halt.    // incorrect requirement

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...

    The fact that the correctly simulated input
    specifies recursive simulation prevents the
    simulated ⟨Ĥ⟩ from ever reaching its simulated
    final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.

    This is not contradicted by the fact that
    Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
    the domain of every Turing machine computed function.


    In the atypical case where the behavior of the simulation
    of an input to a potential halt decider disagrees with the
    behavior of the direct execution of the underlying machine
    (because this input calls this same simulating decider) it
    is the behavior of the input that rules because deciders
    compute the mapping for their inputs.


    Nope, just more of your lies.

    The behavior of an input to a halt decider is DEFINED in all cases to
    be the behavior of the machine the input represents,

    Yet I have conclusively proven otherwise and
    you are too stupid to understand the proof.

    No, because your proof needs to call different inputs the same or a partial simulation to be correct.

    You need to LIE and say that the "behavior of the input" is something
    other than its DEFINITION, and thus your claim is just a lie.


    You are so stupid that you think you can get
    away with disagreeing with the x86 language.

    Like the fact that you can't simulate the below without including the
    code of HHH?


    _DDD()
    [00002192] 55         push ebp
    [00002193] 8bec       mov ebp,esp
    [00002195] 6892210000 push 00002192  // push DDD
    [0000219a] e833f4ffff call 000015d2  // call HHH
    [0000219f] 83c404     add esp,+04
    [000021a2] 5d         pop ebp
    [000021a3] c3         ret
    Size in bytes:(0018) [000021a3]

    DDD simulated by HHH according to the rules of the
    x86 language does not fucking halt you fucking moron.
    If any definition says otherwise then this definition
    is fucked up.


    Only because your HHH doesn't simulate its input per the definition of
    the x86 language but only PARTIALLY simulates it, stopping before it
    reaches that final state.

    The CORRECT simulation of the input sees that the simulated HHH will
    abort its simulation and return 0, and thus the correct simulation halts.

    Go ahead, tell your surgeon he only needs to do the first half of the operation that you need to live, as that is correct enough, or take only
    the first half of your treatments.

    All you have done is prove that you lie.

    You don't know what an input is.

    You don't know what the definition of the "behavior of the input" is.

    You don't know what correct is.

    You don't know the rules of the x86 processor, in particular that it is defined not to stop except at a final instruction.

    In other words, you don't know the meaning of many of the words you use,
    but are using lies based on fabricated meanings.

    Sorry, until you show a reliable source for some of your claims, all you
    are doing is showing that your concept of "logic" is making up stuff and claiming it must be true,

    Which is just worse than the "lies" you claim to be trying to fight.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Sat Jul 26 21:43:12 2025
    From Newsgroup: comp.ai.philosophy

    On 7/26/2025 8:30 PM, Richard Damon wrote:
    On 7/26/25 7:43 PM, olcott wrote:
    On 7/26/2025 6:35 PM, Richard Damon wrote:
    On 7/26/25 7:08 PM, olcott wrote:
    On 7/26/2025 5:49 PM, olcott wrote:
    On 7/26/2025 2:58 PM, olcott wrote:
    On 7/26/2025 2:52 PM, Mr Flibble wrote:
    On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:

    On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
    In comp.theory olcott <polcott333@gmail.com> wrote:

    The error of all of the halting problem proofs is that they
    require a Turing machine halt decider to report on the behavior
    of a directly executed Turing machine.

    It is common knowledge that no Turing machine decider can take
    another directly executing Turing machine as an input, thus the
    above requirement is not precisely correct.

    When we correct the error of this incorrect requirement it
    becomes a Turing machine decider indirectly reports on the
    behavior of a directly executing Turing machine through the
    proxy of a finite string description of this machine.

    Now I have proven and corrected the error of all of the halting
    problem proofs.

    No you haven't, the subject matter is too far beyond your
    intellectual capacity.


    It only seems to you that I lack understanding because you are so
    sure that I must be wrong that you make sure to totally ignore the
    subtle nuances of meaning that proves I am correct.

    No Turing machine based (at least partial) halt decider can possibly
    *directly* report on the behavior of any directly executing Turing
    machine.  The best that any of them can possibly do is indirectly
    report on this behavior through the proxy of a finite string machine
    description.

    Partial decidability is not a hard problem.

    /Flibble

    My point is that all of the halting problem proofs
    are wrong when they require a Turing machine decider
    H to report on the behavior of machine M on input i
    because machine M is not in the domain of any Turing
    machine decider. Only finite strings such as ⟨M⟩ the
    Turing machine description of machine M are its
    domain.


    Definition of Turing Machine Ĥ
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
       if Ĥ applied to ⟨Ĥ⟩ halts, and        // incorrect requirement
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
       if Ĥ applied to ⟨Ĥ⟩ does not halt.    // incorrect requirement

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...

    The fact that the correctly simulated input
    specifies recursive simulation prevents the
    simulated ⟨Ĥ⟩ from ever reaching its simulated
    final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.

    This is not contradicted by the fact that
    Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
    the domain of every Turing machine computed function.


    In the atypical case where the behavior of the simulation
    of an input to a potential halt decider disagrees with the
    behavior of the direct execution of the underlying machine
    (because this input calls this same simulating decider) it
    is the behavior of the input that rules because deciders
    compute the mapping for their inputs.


    Nope, just more of your lies.

    The behavior of an input to a halt decider is DEFINED in all cases to
    be the behavior of the machine the input represents,

    Yet I have conclusively proven otherwise and
    you are too stupid to understand the proof.

    No, because your proof needs to call different inputs the same or a partial simulation to be correct.


    When HHH(DDD) simulates DDD it also simulates itself
    simulating DDD because DDD calls HHH(DDD).

    When HHH1(DDD) simulates DDD it DOES NOT simulate itself
    simulating DDD because DDD DOES NOT CALL HHH1(DDD).
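
    A stubbed sketch of the call structure being described (the analyzer
    bodies here are placeholders, not the actual HHH and HHH1):

    typedef void (*ptr)();

    /* Stubs standing in for two analyzers with identical logic at
       different addresses. */
    int HHH(ptr P)  { (void)P; return 0; }
    int HHH1(ptr P) { (void)P; return 0; }

    void DDD()
    {
      HHH(DDD);   /* DDD's call graph contains HHH ... */
      return;     /* ... and never mentions HHH1       */
    }

    int main()
    {
      HHH(DDD);   /* the claim: HHH must simulate a call back into HHH itself */
      HHH1(DDD);  /* the claim: HHH1 simulates DDD calling HHH, never HHH1    */
      return 0;
    }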

    For three fucking years everyone here pretended that
    they could NOT fucking see that.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From wij@wyniijj5@gmail.com to comp.ai.philosophy on Sun Jul 27 11:08:20 2025
    From Newsgroup: comp.ai.philosophy

    On Sat, 2025-07-26 at 21:43 -0500, olcott wrote:
    On 7/26/2025 8:30 PM, Richard Damon wrote:
    On 7/26/25 7:43 PM, olcott wrote:
    On 7/26/2025 6:35 PM, Richard Damon wrote:
    On 7/26/25 7:08 PM, olcott wrote:
    On 7/26/2025 5:49 PM, olcott wrote:
    On 7/26/2025 2:58 PM, olcott wrote:
    On 7/26/2025 2:52 PM, Mr Flibble wrote:
    On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:

    On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
    In comp.theory olcott <polcott333@gmail.com> wrote:

    The error of all of the halting problem proofs is that they
    require a
    Turing machine halt decider to report on the behavior of a
    directly
    executed Turing machine.

    It is common knowledge that no Turing machine decider can take
    another
    directly executing Turing machine as an input, thus the above
    requirement is not precisely correct.

    When we correct the error of this incorrect requirement it
    becomes a
    Turing machine decider indirectly reports on the behavior of a
    directly executing Turing machine through the proxy of a finite string
    description of this machine.

    Now I have proven and corrected the error of all of the halting
    problem proofs.

    No you haven't, the subject matter is too far beyond your intellectual
    capacity.


    It only seems to you that I lack understanding because you are
    so sure
    that I must be wrong that you make sure to totally ignore the
    subtle
    nuances of meaning that proves I am correct.

    No Turing machine based (at least partial) halt decider can possibly
    *directly* report on the behavior of any directly executing Turing
    machine.  The best that any of them can possibly do is indirectly report
    on this behavior through the proxy of a finite string machine description.

    Partial decidability is not a hard problem.

    /Flibble

    My point is that all of the halting problem proofs
    are wrong when they require a Turing machine decider
    H to report on the behavior of machine M on input i
    because machine M is not in the domain of any Turing
    machine decider. Only finite strings such as ⟨M⟩ the
    Turing machine description of machine M are its
    domain.


    Definition of Turing Machine Ĥ
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
       if Ĥ applied to ⟨Ĥ⟩ halts, and        // incorrect requirement
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
       if Ĥ applied to ⟨Ĥ⟩ does not halt.    // incorrect requirement

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...

    The fact that the correctly simulated input
    specifies recursive simulation prevents the
    simulated ⟨Ĥ⟩ from ever reaching its simulated
    final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.

    This is not contradicted by the fact that
    Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
    the domain of every Turing machine computed function.


    In the atypical case where the behavior of the simulation
    of an input to a potential halt decider disagrees with the
    behavior of the direct execution of the underlying machine
    (because this input calls this same simulating decider) it
    is the behavior of the input that rules because deciders
    compute the mapping for their inputs.


    Nope, just more of your lies.

    The behavior of an input to a halt decider is DEFINED in all cases to be the behavior of the machine the input represents,

    Yet I have conclusively proven otherwise and
    you are too stupid to understand the proof.

    No, because your proof needs to call different inputs the same or a partial simulation to be correct.


    When HHH(DDD) simulates DDD it also simulates itself
    simulating DDD because DDD calls HHH(DDD).

    When HHH1(DDD) simulates DDD it DOES NOT simulate itself
    simulating DDD because DDD DOES NOT CALL HHH1(DDD).

    For three fucking years everyone here pretended that
    they could NOT fucking see that.
    It is you who has proved yourself an idiot, or worse, a liar, EVERY DAY.
    olcott's claims: "H(D)=0 is correct", "H(D)=1 is correct", "both are correct"...
    "'I' was talking about HH, HH2, HHH, DD, DDD, ... not H(D)!!" ... and numerous others.
    And recently: "'I' was not refuting HP. HP is correct. 'I' was refuting Linz's proof,
    and HHH(DD)=1 is correct!!" (Undecidable and HHH(DD)=1 are both correct!)
    A couple of days before, you again showed that you don't understand basic logic (AND, IF, ...).
    You cannot construct a TM that computes the length of its input.
    Your understanding of C/Assembly has been shown to be very low; I never saw anyone lower.
    No one on the internet that I have ever seen is lower than you. Keep blinding yourself, 'genius'.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Sun Jul 27 07:11:13 2025
    From Newsgroup: comp.ai.philosophy

    On 7/26/25 10:43 PM, olcott wrote:
    On 7/26/2025 8:30 PM, Richard Damon wrote:
    On 7/26/25 7:43 PM, olcott wrote:
    On 7/26/2025 6:35 PM, Richard Damon wrote:
    On 7/26/25 7:08 PM, olcott wrote:
    On 7/26/2025 5:49 PM, olcott wrote:
    On 7/26/2025 2:58 PM, olcott wrote:
    On 7/26/2025 2:52 PM, Mr Flibble wrote:
    On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:

    On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
    In comp.theory olcott <polcott333@gmail.com> wrote:

    The error of all of the halting problem proofs is that they
    require a Turing machine halt decider to report on the behavior
    of a directly executed Turing machine.

    It is common knowledge that no Turing machine decider can take
    another directly executing Turing machine as an input, thus the
    above requirement is not precisely correct.

    When we correct the error of this incorrect requirement it
    becomes a Turing machine decider indirectly reports on the
    behavior of a directly executing Turing machine through the
    proxy of a finite string description of this machine.

    Now I have proven and corrected the error of all of the halting
    problem proofs.

    No you haven't, the subject matter is too far beyond your
    intellectual capacity.


    It only seems to you that I lack understanding because you are so
    sure that I must be wrong that you make sure to totally ignore the
    subtle nuances of meaning that proves I am correct.

    No Turing machine based (at least partial) halt decider can possibly
    *directly* report on the behavior of any directly executing Turing
    machine.  The best that any of them can possibly do is indirectly
    report on this behavior through the proxy of a finite string machine
    description.

    Partial decidability is not a hard problem.

    /Flibble

    My point is that all of the halting problem proofs
    are wrong when they require a Turing machine decider
    H to report on the behavior of machine M on input i
    because machine M is not in the domain of any Turing
    machine decider. Only finite strings such as ⟨M⟩ the
    Turing machine description of machine M are its
    domain.


    Definition of Turing Machine Ĥ
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
       if Ĥ applied to ⟨Ĥ⟩ halts, and        // incorrect requirement
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
       if Ĥ applied to ⟨Ĥ⟩ does not halt.    // incorrect requirement

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...

    The fact that the correctly simulated input
    specifies recursive simulation prevents the
    simulated ⟨Ĥ⟩ from ever reaching its simulated
    final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.

    This is not contradicted by the fact that
    Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
    the domain of every Turing machine computed function.


    In the atypical case where the behavior of the simulation
    of an input to a potential halt decider disagrees with the
    behavior of the direct execution of the underlying machine
    (because this input calls this same simulating decider) it
    is the behavior of the input that rules because deciders
    compute the mapping for their inputs.


    Nope, just more of your lies.

    The behavior of an input to a halt decider is DEFINED in all cases
    to be the behavior of the machine the input represents,

    Yet I have conclusively proven otherwise and
    you are too stupid to understand the proof.

    No, because your proof needs to call different inputs the same or a
    partial simulation to be correct.


    When HHH(DDD) simulates DDD it also simulates itself
    simulating DDD because DDD calls HHH(DDD).

    But can only do that if HHH is part of its input, or it is not
    simulating its input.

    And, it FAILS at simulating itself, as it concludes that HHH(DDD) will
    never return, when it does.

    Note, "itself"-ness doesn't come into it. None of the instructions
    simulated behave differently because HHH is simulating them instead of something else.

    x86 instructions don't have this sort of "global state".


    When HHH1(DDD) simulates DDD it DOES NOT simulate itself
    simulating DDD because DDD DOES NOT CALL HHH1(DDD).

    Which makes no difference to the execution of the code.

    BOTH are just simulating the code of HHH(DDD), and both should get the
    same answer.

    Now, if you are just admitting that the code for HHH is just impure, and
    looks at some global/static variable that communicates that relationship,
    then you are just admitting that you have lied about them being Turing
    Equivalents to the problem, as Turing Machines can't do that, as it is
    disallowed in the definition of a Computation, which is what a program
    in Computability Theory is restricted to be.


    For three fucking years everyone here pretended that
    they could NOT fucking see that.


    No, for three fucking years you have shown that you don't understand
    that this doesn't make a damned bit of difference to the code, at least
    not if you haven't lied about the code (which has been pointed out, you
    have.)

    Sorry, all you are doing is proving that you are just a pathetic
    pathological liar that has no idea what he is talking about and either
    doesn't care to learn, or is incapable of learning, and there really isn't
    much difference between the two.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Sun Jul 27 08:50:54 2025
    From Newsgroup: comp.ai.philosophy

    On 7/27/2025 6:11 AM, Richard Damon wrote:
    On 7/26/25 10:43 PM, olcott wrote:
    When HHH(DDD) simulates DDD it also simulates itself
    simulating DDD because DDD calls HHH(DDD).

    But can only do that if HHH is part of its input, or it is not
    simulating its input.

    And, it FAILS at simulating itself, as it concludes that HHH(DDD) will
    never return, when it does.


    This ChatGPT analysis of its input below
    correctly derives both of our views. I did
    not bias this analysis by telling ChatGPT
    what I expect to see.

    typedef void (*ptr)();
    int HHH(ptr P);

    void DDD()
    {
      HHH(DDD);
      return;
    }

    int main()
    {
      HHH(DDD);
      DDD();
    }

    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) It detects a non-terminating behavior pattern then it aborts its simulation and returns 0,
    (b) Its simulated input reaches its simulated "return" statement then it returns 1.

    https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c
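
    Read literally, the (a)/(b) stopping rule above describes a loop of
    roughly the following shape; this is only an abstract sketch with
    hypothetical helper names (simulate_one_instruction, looks_non_terminating,
    simulated_return_reached), not the actual HHH:

    #include <stdbool.h>

    typedef void (*ptr)();

    /* Hypothetical hooks into a simulator; assumptions for illustration only. */
    void simulate_one_instruction(ptr P) { (void)P; }
    bool looks_non_terminating(ptr P)    { (void)P; return false; }
    bool simulated_return_reached(ptr P) { (void)P; return true;  }

    int HHH(ptr P)
    {
      for (;;) {
        simulate_one_instruction(P);
        if (looks_non_terminating(P))     /* (a) abort the simulation, return 0       */
          return 0;
        if (simulated_return_reached(P))  /* (b) simulated "return" reached, return 1 */
          return 1;
      }
    }

    int main() { return 0; }
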
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Sun Jul 27 15:58:12 2025
    From Newsgroup: comp.ai.philosophy

    On 7/27/25 9:50 AM, olcott wrote:
    On 7/27/2025 6:11 AM, Richard Damon wrote:
    On 7/26/25 10:43 PM, olcott wrote:
    When HHH(DDD) simulates DDD it also simulates itself
    simulating DDD because DDD calls HHH(DDD).

    But can only do that if HHH is part of its input, or it is not
    simulating its input.

    And, it FAILS at simulating itself, as it concludes that HHH(DDD) will
    never return, when it does.


    This ChatGPT analysis of its input below
    correctly derives both of our views. I did
    not bias this analysis by telling ChatGPT
    what I expect to see.

    typedef void (*ptr)();
    int HHH(ptr P);

    void DDD()
    {
      HHH(DDD);
      return;
    }

    int main()
    {
      HHH(DDD);
      DDD();
    }

    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) It detects a non-terminating behavior pattern then it aborts its simulation and returns 0,
    (b) Its simulated input reaches its simulated "return" statement then it returns 1.

    https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c


    Just proves that you have contaminated the learning with false idea
    about programs.

    Note, you are just proving that you don't understand what truth is.

    I guess you can claim credit for teaching the AI to be stupid.


    Note, its answers are contradictory, as it gives calls to HHH two
    different behaviors.

    Note, just starting new sessions doesn't necessarily totally remove the
    effect of previous sessions.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Sun Jul 27 15:28:38 2025
    From Newsgroup: comp.ai.philosophy

    On 7/27/2025 2:58 PM, Richard Damon wrote:
    On 7/27/25 9:50 AM, olcott wrote:
    On 7/27/2025 6:11 AM, Richard Damon wrote:
    On 7/26/25 10:43 PM, olcott wrote:
    When HHH(DDD) simulates DDD it also simulates itself
    simulating DDD because DDD calls HHH(DDD).

    But can only do that if HHH is part of its input, or it is not
    simulating its input.

    And, it FAILS at simulating itself, as it concludes that HHH(DDD)
    will never return, when it does.


    This ChatGPT analysis of its input below
    correctly derives both of our views. I did
    not bias this analysis by telling ChatGPT
    what I expect to see.

    typedef void (*ptr)();
    int HHH(ptr P);

    void DDD()
    {
       HHH(DDD);
       return;
    }

    int main()
    {
       HHH(DDD);
       DDD();
    }

    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) It detects a non-terminating behavior pattern then it aborts its
    simulation and returns 0,
    (b) Its simulated input reaches its simulated "return" statement then
    it returns 1.

    https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c


    Just proves that you have contaminated the learning with false idea
    about programs.


    I made sure that ChatGPT isolates this conversation
    from everything else that I ever said. Besides telling
    ChatGPT about the possibility of a simulating termination
    analyzer (that I have proved does work on some inputs)
    it figured out all the rest on its own without any
    prompting from me.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Sun Jul 27 17:31:08 2025
    From Newsgroup: comp.ai.philosophy

    On 7/27/25 4:28 PM, olcott wrote:
    On 7/27/2025 2:58 PM, Richard Damon wrote:
    On 7/27/25 9:50 AM, olcott wrote:
    On 7/27/2025 6:11 AM, Richard Damon wrote:
    On 7/26/25 10:43 PM, olcott wrote:
    When HHH(DDD) simulates DDD it also simulates itself
    simulating DDD because DDD calls HHH(DDD).

    But can only do that if HHH is part of its input, or it is not
    simulating its input.

    And, it FAILS at simulating itself, as it concludes that HHH(DDD)
    will never return, when it does.


    This ChatGPT analysis of its input below
    correctly derives both of our views. I did
    not bias this analysis by telling ChatGPT
    what I expect to see.

    typedef void (*ptr)();
    int HHH(ptr P);

    void DDD()
    {
       HHH(DDD);
       return;
    }

    int main()
    {
       HHH(DDD);
       DDD();
    }

    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) It detects a non-terminating behavior pattern then it aborts its
    simulation and returns 0,
    (b) Its simulated input reaches its simulated "return" statement then
    it returns 1.

    https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c


    Just proves that you have contaminated the learning with false idea
    about programs.


    I made sure that ChatGPT isolates this conversation
    from everything else that I ever said. Besides telling
    ChatGPT about the possibility of a simulating termination
    analyzer (that I have proved does work on some inputs)
    it figured out all the rest on its own without any
    prompting from me.


    You CAN'T totally isolate it. You can tell it to not use what you have
    told it previously (which you did not do), but anything said to the AI,
    has a chance of being recorded and used for future training.

    Just think, you might be the one responsible for providing the lies that future AIs have decided to accept ruining the chance of some future breakthrough.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Sun Jul 27 16:46:24 2025
    From Newsgroup: comp.ai.philosophy

    On 7/27/2025 4:31 PM, Richard Damon wrote:
    On 7/27/25 4:28 PM, olcott wrote:
    On 7/27/2025 2:58 PM, Richard Damon wrote:
    On 7/27/25 9:50 AM, olcott wrote:
    On 7/27/2025 6:11 AM, Richard Damon wrote:
    On 7/26/25 10:43 PM, olcott wrote:
    When HHH(DDD) simulates DDD it also simulates itself
    simulating DDD because DDD calls HHH(DDD).

    But can only do that if HHH is part of its input, or it is not
    simulating its input.

    And, it FAILS at simulating itself, as it concludes that HHH(DDD)
    will never return, when it does.


    This ChatGPT analysis of its input below
    correctly derives both of our views. I did
    not bias this analysis by telling ChatGPT
    what I expect to see.

    typedef void (*ptr)();
    int HHH(ptr P);

    void DDD()
    {
       HHH(DDD);
       return;
    }

    int main()
    {
       HHH(DDD);
       DDD();
    }

    Simulating Termination Analyzer HHH correctly simulates its input
    until:
    (a) It detects a non-terminating behavior pattern then it aborts its
    simulation and returns 0,
    (b) Its simulated input reaches its simulated "return" statement
    then it returns 1.

    https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c


    Just proves that you have contaminated the learning with false idea
    about programs.


    I made sure that ChatGPT isolates this conversation
    from everything else that I ever said. Besides telling
    ChatGPT about the possibility of a simulating termination
    analyzer (that I have proved does work on some inputs)
    it figured out all the rest on its own without any
    prompting from me.


    You CAN'T totally isolate it. You can tell it to not use what you have
    told it previously (which you did not do),

    ChatGPT's "remember prior conversations" feature is turned off:

    My Account
     Settings
      Personalization
       Memory
        Reference saved memories
    This is important because I need to know the
    minimum basis that it needs to understand what
    I said so that I can know that I have no gaps
    in my reasoning.

    but anything said to the AI,
    has a chance of being recorded and used for future training.


    During periodic updates.

    Just think, you might be the one responsible for providing the lies that future AIs have decided to accept ruining the chance of some future breakthrough.

    The above input that I provided has zero falsehoods.
    ChatGPT figured out all of the reasoning from that
    basis.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Sun Jul 27 20:07:39 2025
    From Newsgroup: comp.ai.philosophy

    On 7/27/25 5:46 PM, olcott wrote:
    On 7/27/2025 4:31 PM, Richard Damon wrote:
    On 7/27/25 4:28 PM, olcott wrote:
    On 7/27/2025 2:58 PM, Richard Damon wrote:
    On 7/27/25 9:50 AM, olcott wrote:
    On 7/27/2025 6:11 AM, Richard Damon wrote:
    On 7/26/25 10:43 PM, olcott wrote:
    When HHH(DDD) simulates DDD it also simulates itself
    simulating DDD because DDD calls HHH(DDD).

    But can only do that if HHH is part of its input, or it is not
    simulating its input.

    And, it FAILS at simulating itself, as it concludes that HHH(DDD)
    will never return, when it does.


    This ChatGPT analysis of its input below
    correctly derives both of our views. I did
    not bias this analysis by telling ChatGPT
    what I expect to see.

    typedef void (*ptr)();
    int HHH(ptr P);

    void DDD()
    {
       HHH(DDD);
       return;
    }

    int main()
    {
       HHH(DDD);
       DDD();
    }

    Simulating Termination Analyzer HHH correctly simulates its input
    until:
    (a) It detects a non-terminating behavior pattern then it aborts
    its simulation and returns 0,
    (b) Its simulated input reaches its simulated "return" statement
    then it returns 1.

    https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c


    Just proves that you have contaminated the learning with false idea
    about programs.


    I made sure that ChatGPT isolates this conversation
    from everything else that I ever said. Besides telling
    ChatGPT about the possibility of a simulating termination
    analyzer (that I have proved does work on some inputs)
    it figured out all the rest on its own without any
    prompting from me.


    You CAN'T totally isolate it. You can tell it to not use what you have
    told it previously (which you did not do),

    ChatGPT remember prior conversations
    is turned off

    My Account
     Settings
      Personalization
       Memory
        Reference saved memories
    This is important because I need to know the
    minimum basis that it needs to understand what
    I said so that I can know that I have no gaps
    in my reasoning.

    But that setting isn't perfect.


    but anything said to the AI, has a chance of being recorded and used
    for future training.


    During periodic updates.

    And you have been posting your lies on usenet, which is a source of
    training, for awhile.


    Just think, you might be the one responsible for providing the lies
    that future AIs have decided to accept ruining the chance of some
    future breakthrough.

    The above input that I provided has zero falsehoods.
    ChatGPT figured out all of the reasoning from that
    basis.


    But not full definitions, like the fact that a given program on a given
    input will always do the same thing.

    Or that the program DDD being simulated includes the exact code of the
    HHH it was built on which is the one deciding on it.

    And, AI is fully capable of deducing errors even when given facts.

    Ultimately, the problem is that just because an AI agrees with you,
    doesn't make your statement true.

    Since the description makes HHH fail to obey the basic property that a
    program/function will always act the same given the same input, and
    since a correct simulation BY DEFINITION must demonstrate the same
    behavior as the execution of the input, this just proves that it is in
    error.

    That you yourself believe these errors, just shows that you don't
    understand the basic rules and definitions of the system, and thus your
    ideas are worthless.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Sun Jul 27 19:20:39 2025
    From Newsgroup: comp.ai.philosophy

    On 7/27/2025 7:07 PM, Richard Damon wrote:
    On 7/27/25 5:46 PM, olcott wrote:
    On 7/27/2025 4:31 PM, Richard Damon wrote:
    On 7/27/25 4:28 PM, olcott wrote:
    On 7/27/2025 2:58 PM, Richard Damon wrote:
    On 7/27/25 9:50 AM, olcott wrote:
    On 7/27/2025 6:11 AM, Richard Damon wrote:
    On 7/26/25 10:43 PM, olcott wrote:
    When HHH(DDD) simulates DDD it also simulates itself
    simulating DDD because DDD calls HHH(DDD).

    But can only do that if HHH is part of its input, or it is not
    simulating its input.

    And, it FAILS at simulating itself, as it concludes that HHH(DDD)
    will never return, when it does.


    This ChatGPT analysis of its input below
    correctly derives both of our views. I did
    not bias this analysis by telling ChatGPT
    what I expect to see.

    typedef void (*ptr)();
    int HHH(ptr P);

    void DDD()
    {
       HHH(DDD);
       return;
    }

    int main()
    {
       HHH(DDD);
       DDD();
    }

    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) It detects a non-terminating behavior pattern then it aborts
    its simulation and returns 0,
    (b) Its simulated input reaches its simulated "return" statement
    then it returns 1.

    https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c


    Just proves that you have contaminated the learning with false idea
    about programs.


    I made sure that ChatGPT isolates this conversation
    from everything else that I ever said. Besides telling
    ChatGPT about the possibility of a simulating termination
    analyzer (that I have proved does work on some inputs)
    it figured out all the rest on its own without any
    prompting from me.


    You CAN'T totally isolate it. You can tell it to not use what you
    have told it previously (which you did not do),

    ChatGPT remember prior conversations
    is turned off

    My Account
      Settings
       Personalization
        Memory
         Reference saved memories
    This is important because I need to know the
    minimum basis that it needs to understand what
    I said so that I can know that I have no gaps
    in my reasoning.

    But that setting isn't perfect.


    but anything said to the AI, has a chance of being recorded and used
    for future training.


    During periodic updates.

    And you have been posting your lies on usenet, which is a source of training, for awhile.


    Just think, you might be the one responsible for providing the lies
    that future AIs have decided to accept ruining the chance of some
    future breakthrough.

    The above input that I provided has zero falsehoods.
    ChatGPT figured out all of the reasoning from that
    basis.


    But. not full definitions, like the fact that a given program on a given input will always do the same thing.


    When DDD is emulated by HHH it must emulate
    DDD calling itself in recursive emulation.

    When DDD is emulated by HHH1 it need not emulate
    itself at all.

    Everyone here has pretended to be too fucking stupid
    to see that for three fucking years thus providing
    sufficient evidence that they are all damned liars.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Sun Jul 27 22:48:39 2025
    From Newsgroup: comp.ai.philosophy

    On 7/27/25 8:20 PM, olcott wrote:
    On 7/27/2025 7:07 PM, Richard Damon wrote:
    On 7/27/25 5:46 PM, olcott wrote:
    On 7/27/2025 4:31 PM, Richard Damon wrote:
    On 7/27/25 4:28 PM, olcott wrote:
    On 7/27/2025 2:58 PM, Richard Damon wrote:
    On 7/27/25 9:50 AM, olcott wrote:
    On 7/27/2025 6:11 AM, Richard Damon wrote:
    On 7/26/25 10:43 PM, olcott wrote:
    When HHH(DDD) simulates DDD it also simulates itself
    simulating DDD because DDD calls HHH(DDD).

    But can only do that if HHH is part of its input, or it is not
    simulating its input.

    And, it FAILS at simulating itself, as it concludes that
    HHH(DDD) will never return, when it does.


    This ChatGPT analysis of its input below
    correctly derives both of our views. I did
    not bias this analysis by telling ChatGPT
    what I expect to see.

    typedef void (*ptr)();
    int HHH(ptr P);

    void DDD()
    {
       HHH(DDD);
       return;
    }

    int main()
    {
       HHH(DDD);
       DDD();
    }

    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) It detects a non-terminating behavior pattern then it aborts its
    simulation and returns 0,
    (b) Its simulated input reaches its simulated "return" statement then
    it returns 1.

    https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c


    Just proves that you have contaminated the learning with false
    idea about programs.


    I made sure that ChatGPT isolates this conversation
    from everything else that I ever said. Besides telling
    ChatGPT about the possibility of a simulating termination
    analyzer (that I have proved does work on some inputs)
    it figured out all the rest on its own without any
    prompting from me.


    You CAN'T totally isolate it. You can tell it to not use what you
    have told it previously (which you did not do),

    ChatGPT remember prior conversations
    is turned off

    My Account
      Settings
       Personalization
        Memory
         Reference saved memories
    This is important because I need to know the
    minimum basis that it needs to understand what
    I said so that I can know that I have no gaps
    in my reasoning.

    But that setting isn't perfect.


    but anything said to the AI, has a chance of being recorded and used
    for future training.


    During periodic updates.

    And you have been posting your lies on usenet, which is a source of
    training, for awhile.


    Just think, you might be the one responsible for providing the lies
    that future AIs have decided to accept ruining the chance of some
    future breakthrough.

    The above input that I provided has zero falsehoods.
    ChatGPT figured out all of the reasoning from that
    basis.


    But. not full definitions, like the fact that a given program on a
    given input will always do the same thing.


    When DDD is emulated by HHH it must emulate
    DDD calling itself in recursive emulation.

    When DDD is emulated by HHH1 it need not emulate
    itself at all.

    But "itself" doesn't matter to x86 instructions, as it isn't part of the
    context that they execute in; both simulations start with an identical
    local context, which is all that matters (the contents of the CPU
    registers, and the memory that the operation will access).
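
    A toy sketch of that point (hypothetical types and a placeholder step
    function, not a real x86 emulator): if one simulated step is a pure
    function of the visible machine state, two simulators stepping identical
    bytes from identical state cannot diverge, no matter which program does
    the stepping.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct {
      uint32_t eip, esp, ebp, eax;  /* a few registers, for illustration */
      uint8_t  mem[64];             /* a toy slice of memory             */
    } MachineState;

    /* Placeholder single-step function: the next state depends only on s. */
    MachineState step(MachineState s)
    {
      s.eip += 1;                   /* stands in for "execute one instruction" */
      return s;
    }

    int main()
    {
      MachineState a, b;
      memset(&a, 0, sizeof a);
      memset(&b, 0, sizeof b);      /* identical starting contexts              */
      a = step(a);                  /* "simulator one"                          */
      b = step(b);                  /* "simulator two": same function of state  */
      printf("%s\n", memcmp(&a, &b, sizeof a) == 0 ? "states agree" : "states differ");
      return 0;
    }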

    That term is just a category error, and shows you don't know what you
    are talking about.

    What x86 instruction, properly simulated per the x86 language, does
    something different in those two cases?

    Your failure to answer that question just proves that you are just
    blatantly lying about that, and are just ignorant of what you are
    talking about.

    If you want to make that claim, PROVE that it makes a difference by
    naming the instruction.

    Without that, you are just admitting to the whole world that all you do
    is just lie, and make up the rules of your insane world.


    Everyone here has pretended to be too fucking stupid
    to see that for three fucking years thus providing
    sufficient evidence that they are all damned liars.


    No, you are just proving that you are fucking stupid and an ignorant and
    idiotic pathetic pathological liar that doesn't know what he is talking
    about and just doesn't care.

    This path appears to have doomed you to eternal stupidity, because you
    have stripped yourself of the ability to think by your own
    self-brainwashing.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From wij@wyniijj5@gmail.com to comp.ai.philosophy on Mon Jul 28 10:52:53 2025
    From Newsgroup: comp.ai.philosophy

    On Sun, 2025-07-27 at 19:20 -0500, olcott wrote:
    On 7/27/2025 7:07 PM, Richard Damon wrote:
    On 7/27/25 5:46 PM, olcott wrote:
    On 7/27/2025 4:31 PM, Richard Damon wrote:
    On 7/27/25 4:28 PM, olcott wrote:
    On 7/27/2025 2:58 PM, Richard Damon wrote:
    On 7/27/25 9:50 AM, olcott wrote:
    On 7/27/2025 6:11 AM, Richard Damon wrote:
    On 7/26/25 10:43 PM, olcott wrote:
    When HHH(DDD) simulates DDD it also simulates itself simulating DDD because DDD calls HHH(DDD).

    But can only do that if HHH is part of its input, or it is not simulating its input.

    And, it FAILS at simulating itself, as it concludes that HHH(DDD)
    will never return, when it does.


    This ChatGPT analysis of its input below
    correctly derives both of our views. I did
    not bias this analysis by telling ChatGPT
    what I expect to see.

    typedef void (*ptr)();
    int HHH(ptr P);

    void DDD()
    {
       HHH(DDD);
       return;
    }

    int main()
    {
       HHH(DDD);
       DDD();
    }

    Simulating Termination Analyzer HHH correctly simulates its input
    until:
    (a) It detects a non-terminating behavior pattern then it aborts its simulation and returns 0,
    (b) Its simulated input reaches its simulated "return" statement then it returns 1.

    https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c


    Just proves that you have contaminated the learning with false idea
    about programs.


    I made sure that ChatGPT isolates this conversation
    from everything else that I ever said. Besides telling
    ChatGPT about the possibility of a simulating termination
    analyzer (that I have proved does work on some inputs)
    it figured out all the rest on its own without any
    prompting from me.


    You CAN'T totally isolate it. You can tell it to not use what you
    have told it previously (which you did not do),

    ChatGPT remember prior conversations
    is turned off

    My Account
      Settings
       Personalization
        Memory
         Reference saved memories
    This is important because I need to know the
    minimum basis that it needs to understand what
    I said so that I can know that I have no gaps
    in my reasoning.

    But that setting isn't perfect.


    but anything said to the AI, has a chance of being recorded and used for future training.


    During periodic updates.

    And you have been posting your lies on usenet, which is a source of training, for awhile.


    Just think, you might be the one responsible for providing the lies that future AIs have decided to accept ruining the chance of some future breakthrough.

    The above input that I provided has zero falsehoods.
    ChatGPT figured out all of the reasoning from that
    basis.


    But. not full definitions, like the fact that a given program on a given input will always do the same thing.


    When DDD is emulated by HHH it must emulate
    DDD calling itself in recursive emulation.

    When DDD is emulated by HHH1 it need not emulate
    itself at all.

    Everyone here has pretended to be too fucking stupid
    to see that for three fucking years thus providing
    sufficient evidence that they are all damned liars.
    olcott said: "You have not used any reasoning you only provided the
    ad hominem error of reasoning. That is very lame."
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Sun Jul 27 21:58:05 2025
    From Newsgroup: comp.ai.philosophy

    On 7/27/2025 9:48 PM, Richard Damon wrote:
    On 7/27/25 8:20 PM, olcott wrote:
    On 7/27/2025 7:07 PM, Richard Damon wrote:
    On 7/27/25 5:46 PM, olcott wrote:
    On 7/27/2025 4:31 PM, Richard Damon wrote:
    On 7/27/25 4:28 PM, olcott wrote:
    On 7/27/2025 2:58 PM, Richard Damon wrote:
    On 7/27/25 9:50 AM, olcott wrote:
    On 7/27/2025 6:11 AM, Richard Damon wrote:
    On 7/26/25 10:43 PM, olcott wrote:
    When HHH(DDD) simulates DDD it also simulates itself
    simulating DDD because DDD calls HHH(DDD).

    But can only do that if HHH is part of its input, or it is not
    simulating its input.

    And, it FAILS at simulating itself, as it concludes that
    HHH(DDD) will never return, when it does.


    This ChatGPT analysis of its input below
    correctly derives both of our views. I did
    not bias this analysis by telling ChatGPT
    what I expect to see.

    typedef void (*ptr)();
    int HHH(ptr P);

    void DDD()
    {
       HHH(DDD);
       return;
    }

    int main()
    {
       HHH(DDD);
       DDD();
    }

    Simulating Termination Analyzer HHH correctly simulates its
    input until:
    (a) It detects a non-terminating behavior pattern then it aborts its
    simulation and returns 0,
    (b) Its simulated input reaches its simulated "return" statement then
    it returns 1.

    https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c

    Just proves that you have contaminated the learning with false
    idea about programs.


    I made sure that ChatGPT isolates this conversation
    from everything else that I ever said. Besides telling
    ChatGPT about the possibility of a simulating termination
    analyzer (that I have proved does work on some inputs)
    it figured out all the rest on its own without any
    prompting from me.


    You CAN'T totally isolate it. You can tell it to not use what you
    have told it previously (which you did not do),

    ChatGPT remember prior conversations
    is turned off

    My Account
      Settings
       Personalization
        Memory
         Reference saved memories
    This is important because I need to know the
    minimum basis that it needs to understand what
    I said so that I can know that I have no gaps
    in my reasoning.

    But that setting isn't perfect.


    but anything said to the AI, has a chance of being recorded and
    used for future training.


    During periodic updates.

    And you have been posting your lies on usenet, which is a source of
    training, for awhile.


    Just think, you might be the one responsible for providing the lies
    that future AIs have decided to accept ruining the chance of some
    future breakthrough.

    The above input that I provided has zero falsehoods.
    ChatGPT figured out all of the reasoning from that
    basis.


    But. not full definitions, like the fact that a given program on a
    given input will always do the same thing.


    When DDD is emulated by HHH it must emulate
    DDD calling itself in recursive emulation.

    When DDD is emulated by HHH1 it need not emulate
    itself at all.

    But "itself" doesn't matter to x86 instructions,
    By itself I mean the exact same machine code bytes
    at the exact same machine address.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Fred. Zwarts@F.Zwarts@HetNet.nl to comp.theory,sci.logic,comp.ai.philosophy on Mon Jul 28 10:57:55 2025
    From Newsgroup: comp.ai.philosophy

    On 27.jul.2025 at 01:43, olcott wrote:
    On 7/26/2025 6:35 PM, Richard Damon wrote:
    On 7/26/25 7:08 PM, olcott wrote:
    On 7/26/2025 5:49 PM, olcott wrote:
    On 7/26/2025 2:58 PM, olcott wrote:
    On 7/26/2025 2:52 PM, Mr Flibble wrote:
    On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:

    On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
    In comp.theory olcott <polcott333@gmail.com> wrote:

    The error of all of the halting problem proofs is that they
    require a Turing machine halt decider to report on the behavior
    of a directly executed Turing machine.

    It is common knowledge that no Turing machine decider can take
    another directly executing Turing machine as an input, thus the
    above requirement is not precisely correct.

    When we correct the error of this incorrect requirement it
    becomes a Turing machine decider indirectly reports on the
    behavior of a directly executing Turing machine through the
    proxy of a finite string description of this machine.

    Now I have proven and corrected the error of all of the halting
    problem proofs.

    No you haven't, the subject matter is too far beyond your
    intellectual capacity.


    It only seems to you that I lack understanding because you are so
    sure that I must be wrong that you make sure to totally ignore the
    subtle nuances of meaning that proves I am correct.

    No Turing machine based (at least partial) halt decider can possibly
    *directly* report on the behavior of any directly executing Turing
    machine.  The best that any of them can possibly do is indirectly
    report on this behavior through the proxy of a finite string machine
    description.

    Partial decidability is not a hard problem.

    /Flibble

    My point is that all of the halting problem proofs
    are wrong when they require a Turing machine decider
    H to report on the behavior of machine M on input i
    because machine M is not in the domain of any Turing
    machine decider. Only finite strings such as ⟨M⟩ the
    Turing machine description of machine M are its
    domain.


    Definition of Turing Machine Ĥ
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
       if Ĥ applied to ⟨Ĥ⟩ halts, and        // incorrect requirement
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
       if Ĥ applied to ⟨Ĥ⟩ does not halt.    // incorrect requirement

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...

    The fact that the correctly simulated input
    specifies recursive simulation prevents the
    simulated ⟨Ĥ⟩ from ever reaching its simulated
    final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.

    This is not contradicted by the fact that
    Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
    the domain of every Turing machine computed function.


    In the atypical case where the behavior of the simulation
    of an input to a potential halt decider disagrees with the
    behavior of the direct execution of the underlying machine
    (because this input calls this same simulating decider) it
    is the behavior of the input that rules because deciders
    compute the mapping for their inputs.


    Nope, just more of your lies.

    The behavior of an input to a halt decider is DEFINED in all cases to
    be the behavior of the machine the input represents,

    Yet I have conclusively proven otherwise and
    you are too stupid to understand the proof.

    That was not a proof, but an assumption with a huge mistake.


    You are so stupid that you think you can get
    away with disagreeing with the x86 language.

    The x86 language shows that the input specifies a halting program.
    If you are unable to see that, you need to study the x86 language
    somewhat more.


    _DDD()
    [00002192] 55         push ebp
    [00002193] 8bec       mov ebp,esp
    [00002195] 6892210000 push 00002192  // push DDD
    [0000219a] e833f4ffff call 000015d2  // call HHH
    [0000219f] 83c404     add esp,+04
    [000021a2] 5d         pop ebp
    [000021a3] c3         ret
    Size in bytes:(0018) [000021a3]

    You have been told many times that these 18 bytes do not specify the
    full input. In fact, they are the least interesting part of the input.
    DDD is not needed:

    int main() {
      return HHH(main);
    }

    Here there is no DDD, but you told us that also in this case HHH produces a
    false negative by halting and reporting that it does not halt.
    The most interesting part of the input is HHH itself.
    It is clear that HHH produces many false negatives when its own code is
    part of the input.


    DDD simulated by HHH according to the rules of the
    x86 language does not fucking halt you fucking moron.
    If any definition says otherwise then this definition
    is fucked up.


    I see you have no counter arguments, except swearing and using claims
    without any evidence.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Fred. Zwarts@F.Zwarts@HetNet.nl to comp.theory,sci.logic,comp.ai.philosophy on Mon Jul 28 11:13:34 2025
    From Newsgroup: comp.ai.philosophy

    On 27.jul.2025 at 01:28, olcott wrote:
    On 7/26/2025 5:49 PM, olcott wrote:
    On 7/26/2025 2:58 PM, olcott wrote:
    On 7/26/2025 2:52 PM, Mr Flibble wrote:
    On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:

    On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
    In comp.theory olcott <polcott333@gmail.com> wrote:

    The error of all of the halting problem proofs is that they
    require a Turing machine halt decider to report on the behavior
    of a directly executed Turing machine.

    It is common knowledge that no Turing machine decider can take
    another directly executing Turing machine as an input, thus the
    above requirement is not precisely correct.

    When we correct the error of this incorrect requirement it
    becomes a Turing machine decider indirectly reports on the
    behavior of a directly executing Turing machine through the
    proxy of a finite string description of this machine.

    Now I have proven and corrected the error of all of the halting
    problem proofs.

    No you haven't, the subject matter is too far beyond your
    intellectual capacity.


    It only seems to you that I lack understanding because you are so
    sure that I must be wrong that you make sure to totally ignore the
    subtle nuances of meaning that proves I am correct.

    No Turing machine based (at least partial) halt decider can possibly
    *directly* report on the behavior of any directly executing Turing
    machine.  The best that any of them can possibly do is indirectly
    report on this behavior through the proxy of a finite string machine
    description.

    Partial decidability is not a hard problem.

    /Flibble

    My point is that all of the halting problem proofs
    are wrong when they require a Turing machine decider
    H to report on the behavior of machine M on input i
    because machine M is not in the domain of any Turing
    machine decider. Only finite strings such as ⟨M⟩ the
    Turing machine description of machine M are its
    domain.
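
    For reference, the requirement that the standard proofs actually state
    (this is the textbook definition, not a quote from either poster)
    already has exactly that domain of finite strings; what it fixes is the
    value to be computed:

       H(⟨M⟩, i) = 1  if M halts on input i
       H(⟨M⟩, i) = 0  if M does not halt on input i

    So H is only ever handed the finite strings ⟨M⟩ and i; the disagreement
    in this thread is over whether the value H must return may differ from
    the behavior of M on i.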


    Definition of Turing Machine Ĥ
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
       if Ĥ applied to ⟨Ĥ⟩ halts, and        // incorrect requirement
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
       if Ĥ applied to ⟨Ĥ⟩ does not halt.    // incorrect requirement
    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...

    The fact that the correctly simulated input
    specifies recursive simulation prevents the
    simulated ⟨Ĥ⟩ from ever reaching its simulated
    final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.

    This is not contradicted by the fact that
    Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
    the domain of every Turing machine computed function.


    In the atypical case where the behavior of the simulation
    of an input to a potential halt decider disagrees with the
    behavior of the direct execution of the underlying machine
    (because this input calls this same simulating decider) it
    is the behavior of the input that rules because deciders
    compute the mapping *FROM* their inputs.

    But the input specifies halting behaviour, and the decider is unable to
    see that. That the decider is blind to something does not mean that it
    does not exist. That is your huge mistake.
    That is behaviour of the simulator, not that of the program being
    simulated.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Mon Jul 28 07:38:48 2025
    From Newsgroup: comp.ai.philosophy

    On 7/27/25 10:58 PM, olcott wrote:
    On 7/27/2025 9:48 PM, Richard Damon wrote:
    On 7/27/25 8:20 PM, olcott wrote:
    On 7/27/2025 7:07 PM, Richard Damon wrote:
    On 7/27/25 5:46 PM, olcott wrote:
    On 7/27/2025 4:31 PM, Richard Damon wrote:
    On 7/27/25 4:28 PM, olcott wrote:
    On 7/27/2025 2:58 PM, Richard Damon wrote:
    On 7/27/25 9:50 AM, olcott wrote:
    On 7/27/2025 6:11 AM, Richard Damon wrote:
    On 7/26/25 10:43 PM, olcott wrote:>>
    When HHH(DDD) simulates DDD it also simulates itself
    simulating DDD because DDD calls HHH(DDD).

    But can only do that if HHH is part of its input, or it is not
    simulating its input.

    And, it FAILS at simulating itself, as it concludes that
    HHH(DDD) will never return, when it does.


    This ChatGPT analysis of its input below
    correctly derives both of our views. I did
    not bias this analysis by telling ChatGPT
    what I expect to see.

    typedef void (*ptr)();
    int HHH(ptr P);

    void DDD()
    {
       HHH(DDD);
       return;
    }

    int main()
    {
       HHH(DDD);
       DDD();
    }

    Simulating Termination Analyzer HHH correctly simulates its input
    until:
    (a) It detects a non-terminating behavior pattern then it
    aborts its simulation and returns 0,
    (b) Its simulated input reaches its simulated "return"
    statement then it returns 1.

    https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c
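
    A minimal, self-contained sketch of the kind of analyzer described in
    (a) and (b) above. This is not the Halt7.c HHH: the re-entry flag and
    the setjmp-based abort are invented here solely so that the sketch
    compiles and reproduces the disputed behavior.

    #include <setjmp.h>
    #include <stdio.h>

    typedef void (*ptr)();

    static jmp_buf abort_point;  /* where the outer HHH resumes after an abort   */
    static int simulating = 0;   /* nonzero while an HHH "simulation" is running */

    /* Toy analyzer: while "simulating" P (approximated here by running it),
       any nested activation of HHH is taken as the non-terminating
       recursive-simulation pattern; the whole simulation is then aborted
       and 0 ("does not halt") is returned. */
    int HHH(ptr P)
    {
        if (simulating)              /* P has invoked HHH again              */
            longjmp(abort_point, 1); /* abort back to the outer HHH          */

        if (setjmp(abort_point) != 0) {
            simulating = 0;
            return 0;                /* (a) pattern detected: report non-halting */
        }
        simulating = 1;
        P();                         /* direct execution stands in for simulation */
        simulating = 0;
        return 1;                    /* (b) simulated input reached its "return"  */
    }

    void DDD()
    {
       HHH(DDD);
       return;
    }

    int main()
    {
       printf("HHH(DDD) = %d\n", HHH(DDD)); /* prints 0                        */
       DDD();                               /* returns, because its HHH call   */
       printf("DDD() halted\n");            /* aborts and hands it 0           */
       return 0;
    }

    Both observations being argued over show up in this toy: the analyzer
    reports 0 for DDD, and a direct call to DDD() nonetheless returns.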

    Just proves that you have contaminated the learning with false ideas
    about programs.


    I made sure that ChatGPT isolates this conversation
    from everything else that I ever said. Besides telling
    ChatGPT about the possibility of a simulating termination
    analyzer (that I have proved does work on some inputs)
    it figured out all the rest on its own without any
    prompting from me.


    You CAN'T totally isolate it. You can tell it to not use what you have
    told it previously (which you did not do),

    ChatGPT's "remember prior conversations" setting
    is turned off:

    My Account
      Settings
       Personalization
        Memory
         Reference saved memories
    This is important because I need to know the
    minimum basis that it needs to understand what
    I said so that I can know that I have no gaps
    in my reasoning.

    But that setting isn't perfect.


    but anything said to the AI, has a chance of being recorded and
    used for future training.


    During periodic updates.

    And you have been posting your lies on usenet, which is a source of
    training, for awhile.


    Just think, you might be the one responsible for providing the
    lies that future AIs have decided to accept, ruining the chance of
    some future breakthrough.

    The above input that I provided has zero falsehoods.
    ChatGPT figured out all of the reasoning from that
    basis.


    But. not full definitions, like the fact that a given program on a
    given input will always do the same thing.


    When DDD is emulated by HHH it must emulate
    DDD calling itself in recursive emulation.

    When DDD is emulated by HHH1 it need not emulate
    itself at all.
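
    For concreteness, in terms of the toy analyzer sketched earlier, an
    HHH1 that does no pattern matching at all can be approximated as below.
    This HHH1 is a stand-in invented for the sketch, not the HHH1 in
    Halt7.c.

    typedef void (*ptr)();
    int HHH(ptr P);   /* the aborting toy analyzer sketched earlier */
    void DDD();

    /* HHH1: no abort logic; it just runs its input and reports completion,
       so it only returns at all if its input halts. */
    int HHH1(ptr P)
    {
        P();
        return 1;     /* P reached its "return" statement */
    }

    With the toy HHH above, HHH1(DDD) returns 1: the DDD it runs calls
    HHH(DDD), that call aborts its own simulation and returns 0, and DDD
    then reaches its "return", while HHH(DDD) itself still reports 0.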

    But "itself" doesn't matter to x86 instructions,
    By itself I mean the exact same machine code bytes
    at the exact same machine address.


    Which doesn't affect the behavior of those bytes.

    So the "itself" is just irrelevant.

    Your failure to understand that just shows your stupidity.

    If you want to disagree, show what the difference does in an actual
    correct simulation per the x86 rules.

    Your failure just shows you know you are lying.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Mon Jul 28 07:34:48 2025
    From Newsgroup: comp.ai.philosophy

    On 7/28/2025 6:38 AM, Richard Damon wrote:
    On 7/27/25 10:58 PM, olcott wrote:
    On 7/27/2025 9:48 PM, Richard Damon wrote:
    On 7/27/25 8:20 PM, olcott wrote:
    On 7/27/2025 7:07 PM, Richard Damon wrote:
    On 7/27/25 5:46 PM, olcott wrote:
    On 7/27/2025 4:31 PM, Richard Damon wrote:
    On 7/27/25 4:28 PM, olcott wrote:
    On 7/27/2025 2:58 PM, Richard Damon wrote:
    On 7/27/25 9:50 AM, olcott wrote:
    On 7/27/2025 6:11 AM, Richard Damon wrote:
    On 7/26/25 10:43 PM, olcott wrote:>>
    When HHH(DDD) simulates DDD it also simulates itself
    simulating DDD because DDD calls HHH(DDD).

    But can only do that if HHH is part of its input, or it is >>>>>>>>>>> not simulating its input.

    And, it FAILS at simulating itself, as it concludes that >>>>>>>>>>> HHH(DDD) will never return, when it does.


    This ChatGPT analysis of its input below
    correctly derives both of our views. I did
    not bias this analysis by telling ChatGPT
    what I expect to see.

    typedef void (*ptr)();
    int HHH(ptr P);

    void DDD()
    {
       HHH(DDD);
       return;
    }

    int main()
    {
       HHH(DDD);
       DDD();
    }

    Simulating Termination Analyzer HHH correctly simulates its >>>>>>>>>> input until:
    (a) It detects a non-terminating behavior pattern then it >>>>>>>>>> aborts its simulation and returns 0,
    (b) Its simulated input reaches its simulated "return"
    statement then it returns 1.

    https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c >>>>>>>>>>

    Just proves that you have contaminated the learning with false >>>>>>>>> idea about programs.


    I made sure that ChatGPT isolates this conversation
    from everything else that I ever said. Besides telling
    ChatGPT about the possibility of a simulating termination
    analyzer (that I have proved does work on some inputs)
    it figured out all the rest on its own without any
    prompting from me.


    You CAN'T totally isolate it. You can tell it to not use what you >>>>>>> have told it previously (which you did not do),

    ChatGPT remember prior conversations
    is turned off

    My Account
      Settings
       Personalization
        Memory
         Reference saved memories
    This is important because I need to know the
    minimum basis that it needs to understand what
    I said so that I can know that I have no gaps
    in my reasoning.

    But that setting isn't perfect.


    but anything said to the AI, has a chance of being recorded and >>>>>>> used for future training.


    During periodic updates.

    And you have been posting your lies on usenet, which is a source of >>>>> training, for awhile.


    Just think, you might be the one responsible for providing the
    lies that future AIs have decided to accept ruining the chance of >>>>>>> some future breakthrough.

    The above input that I provided has zero falsehoods.
    ChatGPT figured out all of the reasoning from that
    basis.


    But. not full definitions, like the fact that a given program on a
    given input will always do the same thing.


    When DDD is emulated by HHH it must emulate
    DDD calling itself in recursive emulation.

    When DDD is emulated by HHH1 it need not emulate
    itself at all.

    But "itself" doesn't matter to x86 instructions,
    By itself I mean the exact same machine code bytes
    at the exact same machine address.


    Which doesn't affect the behavior of those bytes.


    void DDD()
    {
    HHH(DDD);
    return;
    }

    That you are too stupid to understand that DDD simulated
    by HHH does call HHH in recursive emulation even after
    I have provided fully operational code of DDD calling
    HHH(DDD) in recursive emulation *IS NOT A REBUTTAL*

    https://github.com/plolcott/x86utm/blob/master/Halt7.c
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Mon Jul 28 08:22:46 2025
    From Newsgroup: comp.ai.philosophy

    On 7/28/2025 3:57 AM, Fred. Zwarts wrote:
    Op 27.jul.2025 om 01:43 schreef olcott:
    On 7/26/2025 6:35 PM, Richard Damon wrote:

    The behavior of an input to a halt decider is DEFINED in all cases to
    be the behavior of the machine the input represents,

    Yet I have conclusively proven otherwise and
    you are too stupid to understand the proof.

    That was not a proof, but an assumption with a huge mistake.


    https://www.researchgate.net/publication/394042683_ChatGPT_analyzes_HHHDDD

    ChatGPT agrees that HHH(DDD)==0 is correct even though
    DDD() halts.

    Saying that H is required to report on the behavior of
    machine M is a category error.

    Turing machines cannot directly report on the behavior
    of other Turing machines they can at best indirectly
    report on the behavior of Turing machines through the
    proxy of finite string machine descriptions such as ⟨M⟩.

    Thus the behavior specified by the input finite string
    overrules and supersedes the behavior of the direct
    execution.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Mon Jul 28 08:36:37 2025
    From Newsgroup: comp.ai.philosophy

    On 7/28/2025 4:13 AM, Fred. Zwarts wrote:
    Op 27.jul.2025 om 01:28 schreef olcott:
    On 7/26/2025 5:49 PM, olcott wrote:
    On 7/26/2025 2:58 PM, olcott wrote:
    On 7/26/2025 2:52 PM, Mr Flibble wrote:
    On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:

    On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
    In comp.theory olcott <polcott333@gmail.com> wrote:

    The error of all of the halting problem proofs is that they
    require a
    Turing machine halt decider to report on the behavior of a directly >>>>>>>> executed Turing machine.

    It is common knowledge that no Turing machine decider can take >>>>>>>> another
    directly executing Turing machine as an input, thus the above
    requirement is not precisely correct.

    When we correct the error of this incorrect requirement it
    becomes a
    Turing machine decider indirectly reports on the behavior of a >>>>>>>> directly executing Turing machine through the proxy of a finite >>>>>>>> string
    description of this machine.

    Now I have proven and corrected the error of all of the halting >>>>>>>> problem proofs.

    No you haven't, the subject matter is too far beyond your
    intellectual
    capacity.


    It only seems to you that I lack understanding because you are so >>>>>> sure
    that I must be wrong that you make sure to totally ignore the subtle >>>>>> nuances of meaning that proves I am correct.

    No Turing machine based (at least partial) halt decider can possibly >>>>>> *directly* report on the behavior of any directly executing Turing >>>>>> machine.  The best that any of them can possibly do is indirectly >>>>>> report
    on this behavior through the proxy of a finite string machine
    description.

    Partial decidability is not a hard problem.

    /Flibble

    My point is that all of the halting problem proofs
    are wrong when they require a Turing machine decider
    H to report on the behavior of machine M on input i
    because machine M is not in the domain of any Turing
    machine decider. Only finite strings such as ⟨M⟩ the
    Turing machine description of machine M are its
    domain.


    Definition of Turing Machine Ĥ
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
       if Ĥ applied to ⟨Ĥ⟩ halts, and        // incorrect requirement
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
       if Ĥ applied to ⟨Ĥ⟩ does not halt.    // incorrect requirement

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...

    The fact that the correctly simulated input
    specifies recursive simulation prevents the
    simulated ⟨Ĥ⟩ from ever reaching its simulated
    final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.

    This is not contradicted by the fact that
    Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
    the domain of every Turing machine computed function.


    In the atypical case where the behavior of the simulation
    of an input to a potential halt decider disagrees with the
    behavior of the direct execution of the underlying machine
    (because this input calls this same simulating decider) it
    is the behavior of the input that rules because deciders
    compute the mapping *FROM* their inputs.

    But the input specifies halting behaviour,
    It never was the actual input that specifies non-halting
    behavior. It was the non-input direct execution that
    is not in the domain of any Turing machine based halt
    decider.

    The behavior that the input specifies is determined
    by ⟨Ĥ⟩ ⟨Ĥ⟩ simulated by embedded_H.

    Saying that H is required to report on the behavior of
    machine M is a category error.

    Turing machines cannot directly report on the behavior
    of other Turing machines they can at best indirectly
    report on the behavior of Turing machines through the
    proxy of finite string machine descriptions such as ⟨M⟩.

    Thus the behavior specified by the input finite string
    overrules and supersedes the behavior of the direct
    execution.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Mon Jul 28 19:26:11 2025
    From Newsgroup: comp.ai.philosophy

    On 7/28/25 8:34 AM, olcott wrote:
    On 7/28/2025 6:38 AM, Richard Damon wrote:
    On 7/27/25 10:58 PM, olcott wrote:
    On 7/27/2025 9:48 PM, Richard Damon wrote:
    On 7/27/25 8:20 PM, olcott wrote:
    On 7/27/2025 7:07 PM, Richard Damon wrote:
    On 7/27/25 5:46 PM, olcott wrote:
    On 7/27/2025 4:31 PM, Richard Damon wrote:
    On 7/27/25 4:28 PM, olcott wrote:
    On 7/27/2025 2:58 PM, Richard Damon wrote:
    On 7/27/25 9:50 AM, olcott wrote:
    On 7/27/2025 6:11 AM, Richard Damon wrote:
    On 7/26/25 10:43 PM, olcott wrote:>>
    When HHH(DDD) simulates DDD it also simulates itself >>>>>>>>>>>>> simulating DDD because DDD calls HHH(DDD).

    But can only do that if HHH is part of its input, or it is >>>>>>>>>>>> not simulating its input.

    And, it FAILS at simulating itself, as it concludes that >>>>>>>>>>>> HHH(DDD) will never return, when it does.


    This ChatGPT analysis of its input below
    correctly derives both of our views. I did
    not bias this analysis by telling ChatGPT
    what I expect to see.

    typedef void (*ptr)();
    int HHH(ptr P);

    void DDD()
    {
       HHH(DDD);
       return;
    }

    int main()
    {
       HHH(DDD);
       DDD();
    }

    Simulating Termination Analyzer HHH correctly simulates its >>>>>>>>>>> input until:
    (a) It detects a non-terminating behavior pattern then it >>>>>>>>>>> aborts its simulation and returns 0,
    (b) Its simulated input reaches its simulated "return"
    statement then it returns 1.

    https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c >>>>>>>>>>>

    Just proves that you have contaminated the learning with false >>>>>>>>>> idea about programs.


    I made sure that ChatGPT isolates this conversation
    from everything else that I ever said. Besides telling
    ChatGPT about the possibility of a simulating termination
    analyzer (that I have proved does work on some inputs)
    it figured out all the rest on its own without any
    prompting from me.


    You CAN'T totally isolate it. You can tell it to not use what >>>>>>>> you have told it previously (which you did not do),

    ChatGPT remember prior conversations
    is turned off

    My Account
      Settings
       Personalization
        Memory
         Reference saved memories
    This is important because I need to know the
    minimum basis that it needs to understand what
    I said so that I can know that I have no gaps
    in my reasoning.

    But that setting isn't perfect.


    but anything said to the AI, has a chance of being recorded and >>>>>>>> used for future training.


    During periodic updates.

    And you have been posting your lies on usenet, which is a source
    of training, for awhile.


    Just think, you might be the one responsible for providing the >>>>>>>> lies that future AIs have decided to accept ruining the chance >>>>>>>> of some future breakthrough.

    The above input that I provided has zero falsehoods.
    ChatGPT figured out all of the reasoning from that
    basis.


    But. not full definitions, like the fact that a given program on a >>>>>> given input will always do the same thing.


    When DDD is emulated by HHH it must emulate
    DDD calling itself in recursive emulation.

    When DDD is emulated by HHH1 it need not emulate
    itself at all.

    But "itself" doesn't matter to x86 instructions,
    By itself I mean the exact same machine code bytes
    at the exact same machine address.


    Which doesn't affect the behavior of those bytes.


    void DDD()
    {
      HHH(DDD);
      return;
    }

    That you are too stupid to understand that DDD simulated
    by HHH does call HHH in recursive emulation even after
    I have provided fully operational code of DDD calling
    HHH(DDD) in recursive emulation *IS NOT A REBUTTAL*

    https://github.com/plolcott/x86utm/blob/master/Halt7.c


    What you are too stupid to understand is that while the *PROGRAM* HHH
    does the specific actions it is defined to do, when it simulates the
    input that represents the *PROGRAM* DDD, which by definition includes
    the code of the HHH that it is built on, that simulated input will not
    reach the final state.

    YOUR HHH doesn't seem to be a program, as you talk about it doing
    different things (sometimes correctly simulating, which requires not
    aborting, and sometimes deciding, which requires aborting), and it is
    looking at an input DDD that isn't actually a program, as it doesn't
    contain the code of the specific HHH it is built on, and thus is really
    a bunch of different inputs, one for each HHH, because it gets paired
    with different versions of that HHH that do different things.

    THAT HHH, can't correctly simulate its input, since the input isn't a
    program and doesn't have a specific behavior. Your argument becomes a
    category error as your argument isn't based on fixed programs.

    You point to your Halt7.c, but then you ignore the HHH that is there
    when you talk about HHH. The HHH in Halt7.c, and the DDD in it, *ARE*
    programs, with specific behavior.

    THAT HHH aborts its simulation and does not do a correct simulation.

    THAT DDD, calls that HHH, which returns 0 to it, so it halts.

    THAT makes your HHH wrong.

    When you describe your system as anything other than that, it is just a
    lie.

    When you talk about the infinite set of HHHs, that is a LIE, as there
    isn't an infinite set of HHHs in Halt7.c.

    When you talk about HHH doing a correct simulation, that is a LIE, as
    HHH aborts its simulation, thus NOT making it correct per the
    term-of-art.

    Your problem is you just don't know what you are talking about, because
    you chose to make yourself ignorant, and then ignore it when people try
    to teach you the actual meaning of the words.

    You think you can change the meaning of words, which is just a LIE, and
    that is what makes you a pathological liar, because you don't think
    lying by using wrong meanings is wrong, because you don't understand
    what truth is.

    Sorry, you are just showing your true nature, that of an ignorant
    pathetic pathological lying idiot that doesn't care about the real
    meaning of the words he uses, just that you want to try to make a
    convincing lie to get some people to fall for your wrong ideas.

    That is going to take you to that lake of fire, as that is what it was
    talking about, those that live a life based on lies, can not be with
    God, but will forever be outside.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Mon Jul 28 18:44:28 2025
    From Newsgroup: comp.ai.philosophy

    On 7/28/2025 6:26 PM, Richard Damon wrote:
    On 7/28/25 8:34 AM, olcott wrote:

    void DDD()
    {
       HHH(DDD);
       return;
    }

    That you are too stupid to understand that DDD simulated
    by HHH does call HHH in recursive emulation even after
    I have provided fully operational code of DDD calling
    HHH(DDD) in recursive emulation *IS NOT A REBUTTAL*

    https://github.com/plolcott/x86utm/blob/master/Halt7.c


    What you are too stupid to understand is that while the *PROGRAM* HHH,
    which does the specific actions it is defined to, when it simulates the input that represents the *PROGRAM* DDD, which by definition includes
    the code of the HHH that it is built on, that will not reach the final state.


    HHH correctly predicts that DDD correctly simulated
    by HHH cannot possibly reach its simulated "return"
    statement final halt state. This is because DDD does
    call HHH(DDD) in recursive simulation.

    *Within those exact words I am exactly correct*
    Trying to change those *EXACT WORDS* to show that
    I am incorrect *IS CHEATING*

    After we have mutual agreement *ON THOSE EXACT WORDS*
    thenn (then and only then) we can begin discussing
    whether or not those words are relevant.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Mon Jul 28 18:58:09 2025
    From Newsgroup: comp.ai.philosophy

    On 7/28/2025 6:49 PM, Richard Damon wrote:
    On 7/28/25 7:20 PM, olcott wrote:
    On 7/28/2025 5:57 PM, Richard Damon wrote:
    On 7/28/25 9:54 AM, olcott wrote:
    On 7/28/2025 8:21 AM, joes wrote:
    Am Mon, 28 Jul 2025 07:11:11 -0500 schrieb olcott:
    On 7/28/2025 2:30 AM, joes wrote:
    Am Sun, 27 Jul 2025 21:58:05 -0500 schrieb olcott:
    On 7/27/2025 9:48 PM, Richard Damon wrote:
    On 7/27/25 8:20 PM, olcott wrote:

    When DDD is emulated by HHH1 it need not emulate itself at all.
    But "itself" doesn't matter to x86 instructions,
    By itself I mean the exact same machine code bytes at the exact same
    machine address.
    Yeah, so when you change HHH to abort later, you also change DDD.
    HHH is never changed.

    It is changed in the hypothetical unaborted simulation. HHH is
    reporting on UTM(HHH', DDD) where HHH' calls UTM(DDD), and not on the
    halting DDD, and definitely not on HHH(DDD), itself.


    All halt deciders are required to predict the behavior
    of their input. HHH does correctly predict that DDD correctly
    simulated by HHH cannot possibly reach its own simulated
    "return" instruction final halt state.


    How is it a "correct prediction" if it sees something different than
    what that DDD does.


    What DDD does is keep calling HHH(DDD) in recursive
    simulation until HHH kills this whole process.

    But the behavior of the program continues past that (something you don't seem to understand) and that behavior will also have its HHH terminate
    the DDD it is simulating and return 0 to DDD and then Halt.

    Your problem is you don't understand that the simulating HHH doesn't
    define the behavior of DDD, it is the execution of DDD that defines what
    a correct simulation of it is.


    Remember, to have simulated that DDD, it must have included the code
    of the HHH that it was based on, which is the HHH that made the
    prediction, and thus returns 0, so DDD will halt.


    We are not asking: Does DDD() halt.
    That is (as it turns out) an incorrect question.

    No, that is EXACTLY the question.

    I guess you are just admitting that you whole world is based on LYING
    about what things are supposed to be.


    Turing machines cannot directly report on the behavior
    of other Turing machines they can at best indirectly
    report on the behavior of Turing machines through the
    proxy of finite string machine descriptions such as ⟨M⟩.

    Right, and HHH was given the equivalent of ⟨M⟩ by being given the code
    of *ALL* of DDD.

    I guess you don't understand that fact, even though you CLAIM the input
    is the proper representation of DDD.


    Thus the behavior specified by the input finite string
    overrules and supersedes the behavior of the direct
    execution.

    No, it is DEFINED to be the behavior of the direct execution of the
    program it represents.


    *That has always been the fatal flaw of all of the proofs*
    We could equally define the area of a square circle
    as its radius multiplied by the length of one of its sides.

    It never has been that DDD simulated by HHH is incorrect
    because it does not agree with what people expect to see.

    It has always been that it is correct because it matches
    the semantics that the code specifies.

    DDD simulated by HHH specifies that DDD keeps calling
    HHH in recursive simulation until HHH kills the whole
    process of DDD.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Mon Jul 28 21:56:16 2025
    From Newsgroup: comp.ai.philosophy

    On 7/28/25 7:58 PM, olcott wrote:
    On 7/28/2025 6:49 PM, Richard Damon wrote:
    On 7/28/25 7:20 PM, olcott wrote:
    On 7/28/2025 5:57 PM, Richard Damon wrote:
    On 7/28/25 9:54 AM, olcott wrote:
    On 7/28/2025 8:21 AM, joes wrote:
    Am Mon, 28 Jul 2025 07:11:11 -0500 schrieb olcott:
    On 7/28/2025 2:30 AM, joes wrote:
    Am Sun, 27 Jul 2025 21:58:05 -0500 schrieb olcott:
    On 7/27/2025 9:48 PM, Richard Damon wrote:
    On 7/27/25 8:20 PM, olcott wrote:

    When DDD is emulated by HHH1 it need not emulate itself at all. >>>>>>>>>> But "itself" doesn't matter to x86 instructions,
    By itself I mean the exact same machine code bytes at the exact >>>>>>>>> same
    machine address.
    Yeah, so when you change HHH to abort later, you also change DDD. >>>>>>> HHH is never changed.

    It is changed in the hypothetical unaborted simulation. HHH is
    reporting
    on UTM(HHH', DDD) where HHH' calls UTM(DDD), and not on the
    halting DDD,
    and definitely not on HHH(DDD), itself.


    All halt deciders are required to predict the behavior
    of their input. HHH does correctly predict that DDD correctly
    simulated by HHH cannot possibly reach its own simulated
    "return" instruction final halt state.


    How is it a "correct prediction" if it sees something different than
    what that DDD does.


    What DDD does is keep calling HHH(DDD) in recursive
    simulation until HHH kills this whole process.

    But the behavior of the program continues past that (something you
    don't seem to understand) and that behavior will also have its HHH
    terminate the DDD it is simulating and return 0 to DDD and then Halt.

    Your problem is you don't understand that the simulating HHH doesn't
    define the behavior of DDD, it is the execution of DDD that defines
    what a correct simulation of it is.


    Remember, to have simulated that DDD, it must have include the code
    of the HHH that it was based on, which is the HHH that made the
    prediction, and thus returns 0, so DDD will halt.


    We are not asking: Does DDD() halt.
    That is (as it turns out) an incorrect question.

    No, that is EXACTLY the question.

    I guess you are just admitting that you whole world is based on LYING
    about what things are supposed to be.


    Turing machines cannot directly report on the behavior
    of other Turing machines they can at best indirectly
    report on the behavior of Turing machines through the
    proxy of finite string machine descriptions such as ⟨M⟩.

    Right, and HHH was given the equivalenet of (M) by being given the
    code of *ALL* of DDD

    I guess you don't understand that fact, even though you CLAIM the
    input is the proper representation of DDD.


    Thus the behavior specified by the input finite string
    overrules and supersedes the behavior of the direct
    execution.

    No, it is DEFINED to be the behavior of the direct execution of the
    program it represent.


    *That has always been the fatal flaw of all of the proofs*

    No, your failure to follow the rules is what makes you just a liar.

    We could equally define the area of a square circle
    as its radius multiplied by the length of one of its sides.

    No, because "area" has a specific definition. The formula to compute it
    is NOT its definition, but something proved from the definition, and
    other axioms.

    Of course to claim a statement not derived from definitions and axioms
    is an error.


    It never has been that DDD simulated by HHH is incorrect
    because it does not agree with what people expect to see.

    Sure it has, as "Correct" has a definition, as does "Simulation".


    It has always been that it is correct because it matches
    the semantics that the code specifies.

    Right, *ALL* the code, not a partial simulation.

    Note, that code includes the code of the SPECIFIC HHH that DDD was built on.


    DDD simulated by HHH specifies that DDD keeps calling
    HHH in recursive simulation until HHH kills the whole
    process of DDD.


    Nope, By that definition, HHH just keeps simulating and can never abort
    to return an answer.

    The two versions of HHH are defined by the proof to be the same, and
    thus act the same.

    It seems you think programs are not deterministic, showing your
    ignorance of the topic.

    Sorry, you are just proving your stupidity and refusal to follow rules.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Mon Jul 28 22:00:00 2025
    From Newsgroup: comp.ai.philosophy

    On 7/28/25 7:44 PM, olcott wrote:
    On 7/28/2025 6:26 PM, Richard Damon wrote:
    On 7/28/25 8:34 AM, olcott wrote:

    void DDD()
    {
       HHH(DDD);
       return;
    }

    That you are too stupid to understand that DDD simulated
    by HHH does call HHH in recursive emulation even after
    I have provided fully operational code of DDD calling
    HHH(DDD) in recursive emulation *IS NOT A REBUTTAL*

    https://github.com/plolcott/x86utm/blob/master/Halt7.c


    What you are too stupid to understand is that while the *PROGRAM* HHH,
    which does the specific actions it is defined to, when it simulates
    the input that represents the *PROGRAM* DDD, which by definition
    includes the code of the HHH that it is built on, that will not reach
    the final state.


    HHH correctly predicts that DDD correctly simulated
    by HHH cannot possibly reach its simulated "return"
    statement final halt state. This is because DDD does
    call HHH(DDD) in recursive simulation.

    Can't do that, as HHH doesn't correctly simulate its input, since
    correct simulation requires being complete.

    It seems you think answering 1 question on a 100 question test and
    turning it in could earn you a 100% on the test.


    *Within those exact words I am exactly correct*
    Trying to change those *EXACT WORDS* to show that
    I am incorrect *IS CHEATING*

    Not when your intended meaning of the words is incorrect.

    That just shows that you think lying is proper logic.


    After we have mutual agreement *ON THOSE EXACT WORDS*
    thenn (then and only then) we can begin discussing
    whether or not those words are relevant.


    Since your words are self-contradictory and based on a category error,
    that is unlikely.

    Note, your just repeating the error shows that you are either so stupid
    you can not learn from your mistakes, or don't care about the truth, but
    just like repeating your lies.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Mon Jul 28 21:32:07 2025
    From Newsgroup: comp.ai.philosophy

    On 7/28/2025 9:00 PM, Richard Damon wrote:
    On 7/28/25 7:44 PM, olcott wrote:
    On 7/28/2025 6:26 PM, Richard Damon wrote:
    On 7/28/25 8:34 AM, olcott wrote:

    void DDD()
    {
       HHH(DDD);
       return;
    }

    That you are too stupid to understand that DDD simulated
    by HHH does call HHH in recursive emulation even after
    I have provided fully operational code of DDD calling
    HHH(DDD) in recursive emulation *IS NOT A REBUTTAL*

    https://github.com/plolcott/x86utm/blob/master/Halt7.c


    What you are too stupid to understand is that while the *PROGRAM*
    HHH, which does the specific actions it is defined to, when it
    simulates the input that represents the *PROGRAM* DDD, which by
    definition includes the code of the HHH that it is built on, that
    will not reach the final state.


    HHH correctly predicts that DDD correctly simulated
    by HHH cannot possibly reach its simulated "return"
    statement final halt state. This is because DDD does
    call HHH(DDD) in recursive simulation.

    Can't do that, as HHH doesn't correct simulate its input, since correct simulation requires being complete.


    Never heard of mathematical induction?
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Mon Jul 28 22:36:45 2025
    From Newsgroup: comp.ai.philosophy

    On 7/28/25 10:32 PM, olcott wrote:
    On 7/28/2025 9:00 PM, Richard Damon wrote:
    On 7/28/25 7:44 PM, olcott wrote:
    On 7/28/2025 6:26 PM, Richard Damon wrote:
    On 7/28/25 8:34 AM, olcott wrote:

    void DDD()
    {
       HHH(DDD);
       return;
    }

    That you are too stupid to understand that DDD simulated
    by HHH does call HHH in recursive emulation even after
    I have provided fully operational code of DDD calling
    HHH(DDD) in recursive emulation *IS NOT A REBUTTAL*

    https://github.com/plolcott/x86utm/blob/master/Halt7.c


    What you are too stupid to understand is that while the *PROGRAM*
    HHH, which does the specific actions it is defined to, when it
    simulates the input that represents the *PROGRAM* DDD, which by
    definition includes the code of the HHH that it is built on, that
    will not reach the final state.


    HHH correctly predicts that DDD correctly simulated
    by HHH cannot possibly reach its simulated "return"
    statement final halt state. This is because DDD does
    call HHH(DDD) in recursive simulation.

    Can't do that, as HHH doesn't correct simulate its input, since
    correct simulation requires being complete.


    Never heard of mathematical induction?



    You don't have a valid induction. The problem is every version of HHH
    gets a different version of DDD, so you can't build the induction, as
    the n and n+1 steps don't relate.

    If you don't include HHH in DDD, you can't simulate it past the call
    HHH instruction, as by definition, programs in Computability Theory can
    only look at data from their input, and not other global data, unless
    defined as FIXED CONSTANTS, and if HHH is a fixed constant, you can't
    make your induction.
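
    A small sketch of what that means operationally (the struct and its
    field names are invented for illustration and are not claimed to exist
    in Halt7.c): to simulate DDD past its call instruction, the simulator
    must be handed the code of the specific HHH as well.

    #include <stddef.h>

    /* Hypothetical packaging of "the input": the 18 bytes of _DDD alone do
       not let a simulator step past "call HHH"; the bytes of the specific
       HHH that this DDD calls have to be part of the input too. */
    struct program_description {
        const unsigned char *ddd_bytes;   /* the _DDD listing shown earlier */
        size_t               ddd_len;
        const unsigned char *hhh_bytes;   /* the HHH that this DDD calls    */
        size_t               hhh_len;
    };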

    In other words, your "induction" is just a lie you have indoctrinated
    yourself to believe in.

    All your induction does is prove that none of your HHH can prove that
    their input halts. That doesn't prove that it doesn't.

    And, in fact, we can prove that all the HHH that abort and return 0 do
    create a DDD that halts.

    Sorry, your lies are just exposed to the light of truth.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Mon Jul 28 21:57:32 2025
    From Newsgroup: comp.ai.philosophy

    On 7/28/2025 9:36 PM, Richard Damon wrote:
    On 7/28/25 10:32 PM, olcott wrote:
    On 7/28/2025 9:00 PM, Richard Damon wrote:
    On 7/28/25 7:44 PM, olcott wrote:
    On 7/28/2025 6:26 PM, Richard Damon wrote:
    On 7/28/25 8:34 AM, olcott wrote:

    void DDD()
    {
       HHH(DDD);
       return;
    }

    That you are too stupid to understand that DDD simulated
    by HHH does call HHH in recursive emulation even after
    I have provided fully operational code of DDD calling
    HHH(DDD) in recursive emulation *IS NOT A REBUTTAL*

    https://github.com/plolcott/x86utm/blob/master/Halt7.c


    What you are too stupid to understand is that while the *PROGRAM*
    HHH, which does the specific actions it is defined to, when it
    simulates the input that represents the *PROGRAM* DDD, which by
    definition includes the code of the HHH that it is built on, that
    will not reach the final state.


    HHH correctly predicts that DDD correctly simulated
    by HHH cannot possibly reach its simulated "return"
    statement final halt state. This is because DDD does
    call HHH(DDD) in recursive simulation.

    Can't do that, as HHH doesn't correct simulate its input, since
    correct simulation requires being complete.


    Never heard of mathematical induction?



    You don't have a valid induction. The problem is every version of HHH
    gets a different version of DDD, so you can't build the induction, as
    the n and n+1 steps don't relate.


    The only difference between the elements of the infinite
    set of HHH/DDD pairs, where HHH emulates N instructions
    of DDD, cannot possibly have any effect on whether this
    DDD instance reaches its "return" instruction final
    halt state *AND YOU HAVE ALWAYS KNOWN THAT*
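
    Spelled out, the induction being appealed to would have to take roughly
    this shape (a reconstruction of the claim, not something either poster
    wrote):

       P(N): the first N instructions of DDD emulated by HHH do not
             include DDD's final "return" instruction.

       Base case:  P(1) holds.
       Step:       P(N) implies P(N+1).
       Conclusion: P(N) holds for every N.

    The disagreement above is over whether P(N), quantified across a family
    of different HHH/DDD pairs, transfers to the single HHH/DDD pair that
    is actually in Halt7.c.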
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Fred. Zwarts@F.Zwarts@HetNet.nl to comp.theory,sci.logic,comp.ai.philosophy on Tue Jul 29 10:03:07 2025
    From Newsgroup: comp.ai.philosophy

    Op 28.jul.2025 om 15:36 schreef olcott:
    On 7/28/2025 4:13 AM, Fred. Zwarts wrote:
    Op 27.jul.2025 om 01:28 schreef olcott:
    On 7/26/2025 5:49 PM, olcott wrote:
    On 7/26/2025 2:58 PM, olcott wrote:
    On 7/26/2025 2:52 PM, Mr Flibble wrote:
    On Sat, 26 Jul 2025 14:26:27 -0500, olcott wrote:

    On 7/26/2025 1:30 PM, Alan Mackenzie wrote:
    In comp.theory olcott <polcott333@gmail.com> wrote:

    The error of all of the halting problem proofs is that they >>>>>>>>> require a
    Turing machine halt decider to report on the behavior of a
    directly
    executed Turing machine.

    It is common knowledge that no Turing machine decider can take >>>>>>>>> another
    directly executing Turing machine as an input, thus the above >>>>>>>>> requirement is not precisely correct.

    When we correct the error of this incorrect requirement it
    becomes a
    Turing machine decider indirectly reports on the behavior of a >>>>>>>>> directly executing Turing machine through the proxy of a finite >>>>>>>>> string
    description of this machine.

    Now I have proven and corrected the error of all of the halting >>>>>>>>> problem proofs.

    No you haven't, the subject matter is too far beyond your
    intellectual
    capacity.


    It only seems to you that I lack understanding because you are so >>>>>>> sure
    that I must be wrong that you make sure to totally ignore the subtle >>>>>>> nuances of meaning that proves I am correct.

    No Turing machine based (at least partial) halt decider can possibly >>>>>>> *directly* report on the behavior of any directly executing Turing >>>>>>> machine.  The best that any of them can possibly do is indirectly >>>>>>> report
    on this behavior through the proxy of a finite string machine
    description.

    Partial decidability is not a hard problem.

    /Flibble

    My point is that all of the halting problem proofs
    are wrong when they require a Turing machine decider
    H to report on the behavior of machine M on input i
    because machine M is not in the domain of any Turing
    machine decider. Only finite strings such as ⟨M⟩ the
    Turing machine description of machine M are its
    domain.


    Definition of Turing Machine Ĥ
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞,
       if Ĥ applied to ⟨Ĥ⟩ halts, and        // incorrect requirement
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
       if Ĥ applied to ⟨Ĥ⟩ does not halt.    // incorrect requirement

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩ ...

    The fact that the correctly simulated input
    specifies recursive simulation prevents the
    simulated ⟨Ĥ⟩ from ever reaching its simulated
    final halt state of ⟨Ĥ.qn⟩, thus specifies non-termination.

    This is not contradicted by the fact that
    Ĥ applied to ⟨Ĥ⟩ halts because Ĥ is outside of
    the domain of every Turing machine computed function.


    In the atypical case where the behavior of the simulation
    of an input to a potential halt decider disagrees with the
    behavior of the direct execution of the underlying machine
    (because this input calls this same simulating decider) it
    is the behavior of the input that rules because deciders
    compute the mapping *FROM* their inputs.

    But the input specifies halting behaviour,
    It never was the actual input that specifies non-halting
    behavior.

    Indeed. But HHH must decide on the actual input, which specifies
    halting behaviour, not on some other hypothetical input that specifies
    non-halting behaviour.
    Sum(2,3) must calculate the sum of the actual input, not of
    hypothetical other inputs.
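
    Rendered as code, the analogy is just this (trivial, and only
    illustrative):

    int Sum(int x, int y) { return x + y; }

    Sum(2,3) is required to return 5, the sum of the arguments it was
    actually given; what Sum would return for some other pair of arguments
    is irrelevant to whether Sum(2,3) is correct.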
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Fred. Zwarts@F.Zwarts@HetNet.nl to comp.theory,sci.logic,comp.ai.philosophy on Tue Jul 29 10:04:31 2025
    From Newsgroup: comp.ai.philosophy

    Op 28.jul.2025 om 15:22 schreef olcott:
    On 7/28/2025 3:57 AM, Fred. Zwarts wrote:
    Op 27.jul.2025 om 01:43 schreef olcott:
    On 7/26/2025 6:35 PM, Richard Damon wrote:

    The behavior of an input to a halt decider is DEFINED in all cases
    to be the behavior of the machine the input represents,

    Yet I have conclusively proven otherwise and
    you are too stupid to understand the proof.

    That was not a proof, but an assumption with a huge mistake.


    https://www.researchgate.net/publication/394042683_ChatGPT_analyzes_HHHDDD

    ChatGPT agrees that HHH(DDD)==0 is correct even though
    DDD() halts.

    With your biased input the chat-box can say anything, without any
    relevance.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Fred. Zwarts@F.Zwarts@HetNet.nl to comp.theory,sci.logic,comp.ai.philosophy on Tue Jul 29 10:11:33 2025
    From Newsgroup: comp.ai.philosophy

    Op 29.jul.2025 om 04:57 schreef olcott:
    On 7/28/2025 9:36 PM, Richard Damon wrote:
    On 7/28/25 10:32 PM, olcott wrote:
    On 7/28/2025 9:00 PM, Richard Damon wrote:
    On 7/28/25 7:44 PM, olcott wrote:
    On 7/28/2025 6:26 PM, Richard Damon wrote:
    On 7/28/25 8:34 AM, olcott wrote:

    void DDD()
    {
       HHH(DDD);
       return;
    }

    That you are too stupid to understand that DDD simulated
    by HHH does call HHH in recursive emulation even after
    I have provided fully operational code of DDD calling
    HHH(DDD) in recursive emulation *IS NOT A REBUTTAL*

    https://github.com/plolcott/x86utm/blob/master/Halt7.c


    What you are too stupid to understand is that while the *PROGRAM* >>>>>> HHH, which does the specific actions it is defined to, when it
    simulates the input that represents the *PROGRAM* DDD, which by
    definition includes the code of the HHH that it is built on, that >>>>>> will not reach the final state.


    HHH correctly predicts that DDD correctly simulated
    by HHH cannot possibly reach its simulated "return"
    statement final halt state. This is because DDD does
    call HHH(DDD) in recursive simulation.

    Can't do that, as HHH doesn't correct simulate its input, since
    correct simulation requires being complete.


    Never heard of mathematical induction?



    You don't have a valid induction. The problem is every version of HHH
    gets a different version of DDD, so you can't build the induction, as
    the n and n+1 steps don't relate.


    The only difference in the elements of the infinite
    set of HHH/DDD pairs where HHH emulates N instructions
    of DDD cannot possibly have any effect on whether this
    DDD instance reaches its "return" instruction final
    halt state *AND YOU HAVE ALWAYS KNOWN THAT*


    As usual, irrelevant claims that do not make a rebuttal.
    Other world-class simulators prove that only one more cycle is needed
    to reach the final halt state for this input. This proves that HHH
    fails to recognise that there is only a *finite* recursion.
    That other simulators also fail with other inputs is not relevant for
    this input, the input with your DDD and your HHH.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Tue Jul 29 07:23:18 2025
    From Newsgroup: comp.ai.philosophy

    On 7/28/25 10:57 PM, olcott wrote:
    On 7/28/2025 9:36 PM, Richard Damon wrote:
    On 7/28/25 10:32 PM, olcott wrote:
    On 7/28/2025 9:00 PM, Richard Damon wrote:
    On 7/28/25 7:44 PM, olcott wrote:
    On 7/28/2025 6:26 PM, Richard Damon wrote:
    On 7/28/25 8:34 AM, olcott wrote:

    void DDD()
    {
       HHH(DDD);
       return;
    }

    That you are too stupid to understand that DDD simulated
    by HHH does call HHH in recursive emulation even after
    I have provided fully operational code of DDD calling
    HHH(DDD) in recursive emulation *IS NOT A REBUTTAL*

    https://github.com/plolcott/x86utm/blob/master/Halt7.c


    What you are too stupid to understand is that while the *PROGRAM* >>>>>> HHH, which does the specific actions it is defined to, when it
    simulates the input that represents the *PROGRAM* DDD, which by
    definition includes the code of the HHH that it is built on, that >>>>>> will not reach the final state.


    HHH correctly predicts that DDD correctly simulated
    by HHH cannot possibly reach its simulated "return"
    statement final halt state. This is because DDD does
    call HHH(DDD) in recursive simulation.

    Can't do that, as HHH doesn't correct simulate its input, since
    correct simulation requires being complete.


    Never heard of mathematical induction?



    You don't have a valid induction. The problem is every version of HHH
    gets a different version of DDD, so you can't build the induction, as
    the n and n+1 steps don't relate.


    The only difference in the elements of the infinite
    set of HHH/DDD pairs where HHH emulates N instructions
    of DDD cannot possibly have any effect on whether this
    DDD instance reaches its "return" instruction final
    halt state *AND YOU HAVE ALWAYS KNOWN THAT*



    Which *IS* a difference, and thus the DDDs are different.

    Try to write those different HHH using IDENTICAL x86 code (including the
    data they refer to)

    You "logic" is that close to the same is just good enough to be
    considered the same.

    Note, it isn't "this DDD instance" as NONE of the other HHH are given
    *THIS* DDD, which to be emulatable at all, needs to include the code of
    the SPECIFIC HHH that it calls.

    Your "logic" just wants to ignore these sorts of problems, because they
    get in the way of you telling your liez.

    Sorry, all you are doing is proving that you are just a liar.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mr Flibble@flibble@red-dwarf.jmc.corp to comp.theory,sci.logic,comp.ai.philosophy on Tue Jul 29 15:44:03 2025
    From Newsgroup: comp.ai.philosophy

    On Mon, 28 Jul 2025 21:56:16 -0400, Richard Damon wrote:

    On 7/28/25 7:58 PM, olcott wrote:
    On 7/28/2025 6:49 PM, Richard Damon wrote:
    On 7/28/25 7:20 PM, olcott wrote:
    On 7/28/2025 5:57 PM, Richard Damon wrote:
    On 7/28/25 9:54 AM, olcott wrote:
    On 7/28/2025 8:21 AM, joes wrote:
    Am Mon, 28 Jul 2025 07:11:11 -0500 schrieb olcott:
    On 7/28/2025 2:30 AM, joes wrote:
    Am Sun, 27 Jul 2025 21:58:05 -0500 schrieb olcott:
    On 7/27/2025 9:48 PM, Richard Damon wrote:
    On 7/27/25 8:20 PM, olcott wrote:

    When DDD is emulated by HHH1 it need not emulate itself at >>>>>>>>>>>> all.
    But "itself" doesn't matter to x86 instructions,
    By itself I mean the exact same machine code bytes at the exact >>>>>>>>>> same machine address.
    Yeah, so when you change HHH to abort later, you also change >>>>>>>>> DDD.
    HHH is never changed.

    It is changed in the hypothetical unaborted simulation. HHH is
    reporting on UTM(HHH', DDD) where HHH' calls UTM(DDD), and not on >>>>>>> the halting DDD,
    and definitely not on HHH(DDD), itself.


    All halt deciders are required to predict the behavior of their
    input. HHH does correctly predict that DDD correctly simulated by
    HHH cannot possibly reach its own simulated "return" instruction
    final halt state.


    How is it a "correct prediction" if it sees something different than >>>>> what that DDD does.


    What DDD does is keep calling HHH(DDD) in recursive simulation until
    HHH kills this whole process.

    But the behavior of the program continues past that (something you
    don't seem to understand) and that behavior will also have its HHH
    terminate the DDD it is simulating and return 0 to DDD and then Halt.

    Your problem is you don't understand that the simulating HHH doesn't
    define the behavior of DDD, it is the execution of DDD that defines
    what a correct simulation of it is.


    Remember, to have simulated that DDD, it must have include the code
    of the HHH that it was based on, which is the HHH that made the
    prediction, and thus returns 0, so DDD will halt.


    We are not asking: Does DDD() halt.
    That is (as it turns out) an incorrect question.

    No, that is EXACTLY the question.

    I guess you are just admitting that you whole world is based on LYING
    about what things are supposed to be.


    Turing machines cannot directly report on the behavior of other
    Turing machines they can at best indirectly report on the behavior of
    Turing machines through the proxy of finite string machine
    descriptions such as ⟨M⟩.

    Right, and HHH was given the equivalenet of (M) by being given the
    code of *ALL* of DDD

    I guess you don't understand that fact, even though you CLAIM the
    input is the proper representation of DDD.


    Thus the behavior specified by the input finite string overrules and
    supersedes the behavior of the direct execution.

    No, it is DEFINED to be the behavior of the direct execution of the
    program it represent.


    *That has always been the fatal flaw of all of the proofs*

    No, your failure to follow the rules is what makes you just a liar.

    Yet another ad hominem attack!

    /Flibble
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Tue Jul 29 11:53:24 2025
    From Newsgroup: comp.ai.philosophy

    On 7/28/2025 8:56 PM, Richard Damon wrote:
    On 7/28/25 7:58 PM, olcott wrote:
    On 7/28/2025 6:49 PM, Richard Damon wrote:
    On 7/28/25 7:20 PM, olcott wrote:
    On 7/28/2025 5:57 PM, Richard Damon wrote:
    On 7/28/25 9:54 AM, olcott wrote:
    On 7/28/2025 8:21 AM, joes wrote:
    Am Mon, 28 Jul 2025 07:11:11 -0500 schrieb olcott:
    On 7/28/2025 2:30 AM, joes wrote:
    Am Sun, 27 Jul 2025 21:58:05 -0500 schrieb olcott:
    On 7/27/2025 9:48 PM, Richard Damon wrote:
    On 7/27/25 8:20 PM, olcott wrote:

    When DDD is emulated by HHH1 it need not emulate itself at all. >>>>>>>>>>> But "itself" doesn't matter to x86 instructions,
    By itself I mean the exact same machine code bytes at the >>>>>>>>>> exact same
    machine address.
    Yeah, so when you change HHH to abort later, you also change DDD. >>>>>>>> HHH is never changed.

    It is changed in the hypothetical unaborted simulation. HHH is
    reporting
    on UTM(HHH', DDD) where HHH' calls UTM(DDD), and not on the
    halting DDD,
    and definitely not on HHH(DDD), itself.


    All halt deciders are required to predict the behavior
    of their input. HHH does correctly predict that DDD correctly
    simulated by HHH cannot possibly reach its own simulated
    "return" instruction final halt state.


    How is it a "correct prediction" if it sees something different
    than what that DDD does.


    What DDD does is keep calling HHH(DDD) in recursive
    simulation until HHH kills this whole process.

    But the behavior of the program continues past that (something you
    don't seem to understand) and that behavior will also have its HHH
    terminate the DDD it is simulating and return 0 to DDD and then Halt.

    Your problem is you don't understand that the simulating HHH doesn't
    define the behavior of DDD, it is the execution of DDD that defines
    what a correct simulation of it is.


    Remember, to have simulated that DDD, it must have include the code >>>>> of the HHH that it was based on, which is the HHH that made the
    prediction, and thus returns 0, so DDD will halt.


    We are not asking: Does DDD() halt.
    That is (as it turns out) an incorrect question.

    No, that is EXACTLY the question.

    I guess you are just admitting that your whole world is based on LYING
    about what things are supposed to be.


    Turing machines cannot directly report on the behavior
    of other Turing machines; they can at best indirectly
    report on the behavior of Turing machines through the
    proxy of finite string machine descriptions such as ⟨M⟩.

    Right, and HHH was given the equivalent of (M) by being given the
    code of *ALL* of DDD

    I guess you don't understand that fact, even though you CLAIM the
    input is the proper representation of DDD.


    Thus the behavior specified by the input finite string
    overrules and supersedes the behavior of the direct
    execution.

    No, it is DEFINED to be the behavior of the direct execution of the
    program it represents.


    *That has always been the fatal flaw of all of the proofs*

    No, your failure to follow the rules is what makes you just a liar.


    _DDD()
    [00002192] 55         push ebp
    [00002193] 8bec       mov ebp,esp
    [00002195] 6892210000 push 00002192  // push DDD
    [0000219a] e833f4ffff call 000015d2  // call HHH
    [0000219f] 83c404     add esp,+04
    [000021a2] 5d         pop ebp
    [000021a3] c3         ret
    Size in bytes:(0018) [000021a3]

    When the above code is in the same memory space as HHH
    such that DDD calls HHH(DDD) and then HHH does emulate
    itself emulating DDD then this does specify recursive
    emulation.

    Anyone or anything that disagrees would be disagreeing
    with the definition of the x86 language.
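
    Below is a minimal C sketch of the call pattern in the listing above.
    It assumes a much-simplified HHH that "simulates" its input by running
    it and that treats re-entry with the same input as its non-halting
    signal; the HHH discussed in this thread is an x86 emulator, so the
    flag used here only stands in for its abort test.

    #include <stdio.h>

    typedef void (*func)(void);

    static int simulating_DDD = 0;     /* already inside a simulation of DDD? */

    int HHH(func p)                    /* 0 = "input would not halt"          */
    {
        if (simulating_DDD)            /* DDD has called HHH(DDD) again       */
            return 0;
        simulating_DDD = 1;
        p();                           /* "simulate" the input by running it  */
        simulating_DDD = 0;
        return 1;
    }

    void DDD(void)
    {
        HHH(DDD);                      /* push DDD; call HHH                  */
    }

    int main(void)
    {
        printf("HHH(DDD) = %d\n", HHH(DDD));
        return 0;
    }

    In this simplification the nested HHH that DDD calls returns 0, while
    the outermost HHH(DDD) returns 1 and the whole program halts; whether
    the emulator-based HHH should report the same way is exactly what the
    posters dispute.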
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Tue Jul 29 18:35:50 2025
    From Newsgroup: comp.ai.philosophy

    On 7/29/25 11:44 AM, Mr Flibble wrote:
    On Mon, 28 Jul 2025 21:56:16 -0400, Richard Damon wrote:

    On 7/28/25 7:58 PM, olcott wrote:
    On 7/28/2025 6:49 PM, Richard Damon wrote:
    On 7/28/25 7:20 PM, olcott wrote:
    On 7/28/2025 5:57 PM, Richard Damon wrote:
    On 7/28/25 9:54 AM, olcott wrote:
    On 7/28/2025 8:21 AM, joes wrote:
    Am Mon, 28 Jul 2025 07:11:11 -0500 schrieb olcott:
    On 7/28/2025 2:30 AM, joes wrote:
    Am Sun, 27 Jul 2025 21:58:05 -0500 schrieb olcott:
    On 7/27/2025 9:48 PM, Richard Damon wrote:
    On 7/27/25 8:20 PM, olcott wrote:

    When DDD is emulated by HHH1 it need not emulate itself at all.
    But "itself" doesn't matter to x86 instructions,
    By itself I mean the exact same machine code bytes at the exact
    same machine address.
    Yeah, so when you change HHH to abort later, you also change DDD.
    HHH is never changed.

    It is changed in the hypothetical unaborted simulation. HHH is
    reporting on UTM(HHH', DDD) where HHH' calls UTM(DDD), and not on
    the halting DDD,
    and definitely not on HHH(DDD), itself.


    All halt deciders are required to predict the behavior of their
    input. HHH does correctly predict that DDD correctly simulated by
    HHH cannot possibly reach its own simulated "return" instruction
    final halt state.


    How is it a "correct prediction" if it sees something different than
    what that DDD does.


    What DDD does is keep calling HHH(DDD) in recursive simulation until
    HHH kills this whole process.

    But the behavior of the program continues past that (something you
    don't seem to understand) and that behavior will also have its HHH
    terminate the DDD it is simulating and return 0 to DDD and then Halt.

    Your problem is you don't understand that the simulating HHH doesn't
    define the behavior of DDD, it is the execution of DDD that defines
    what a correct simulation of it is.


    Remember, to have simulated that DDD, it must have included the code
    of the HHH that it was based on, which is the HHH that made the
    prediction, and thus returns 0, so DDD will halt.


    We are not asking: Does DDD() halt.
    That is (as it turns out) an incorrect question.

    No, that is EXACTLY the question.

    I guess you are just admitting that your whole world is based on LYING
    about what things are supposed to be.


    Turing machines cannot directly report on the behavior of other
    Turing machines; they can at best indirectly report on the behavior of
    Turing machines through the proxy of finite string machine
    descriptions such as ⟨M⟩.

    Right, and HHH was given the equivalent of (M) by being given the
    code of *ALL* of DDD

    I guess you don't understand that fact, even though you CLAIM the
    input is the proper representation of DDD.


    Thus the behavior specified by the input finite string overrules and
    supersedes the behavior of the direct execution.

    No, it is DEFINED to be the behavior of the direct execution of the
    program it represents.


    *That has always been the fatal flaw of all of the proofs*

    No, your failure to follow the rules is what makes you just a liar.

    Yet another ad hominem attack!

    /Flibble

    Nope, just a proof statement.

    Note, it doesn't say his statement is wrong because of something about
    himself. It just applies the definition that claiming to follow the
    rules, and not doing so, is just a lie, and he who lies habitually is a
    liar.

    I guess your problem is you are as bad at the meaning of words as Olcott.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Tue Jul 29 18:37:13 2025
    From Newsgroup: comp.ai.philosophy

    On 7/29/25 12:53 PM, olcott wrote:
    On 7/28/2025 8:56 PM, Richard Damon wrote:
    On 7/28/25 7:58 PM, olcott wrote:
    On 7/28/2025 6:49 PM, Richard Damon wrote:
    On 7/28/25 7:20 PM, olcott wrote:
    On 7/28/2025 5:57 PM, Richard Damon wrote:
    On 7/28/25 9:54 AM, olcott wrote:
    On 7/28/2025 8:21 AM, joes wrote:
    Am Mon, 28 Jul 2025 07:11:11 -0500 schrieb olcott:
    On 7/28/2025 2:30 AM, joes wrote:
    Am Sun, 27 Jul 2025 21:58:05 -0500 schrieb olcott:
    On 7/27/2025 9:48 PM, Richard Damon wrote:
    On 7/27/25 8:20 PM, olcott wrote:

    When DDD is emulated by HHH1 it need not emulate itself at all.
    But "itself" doesn't matter to x86 instructions,
    By itself I mean the exact same machine code bytes at the exact same
    machine address.
    Yeah, so when you change HHH to abort later, you also change DDD.
    HHH is never changed.

    It is changed in the hypothetical unaborted simulation. HHH is
    reporting
    on UTM(HHH', DDD) where HHH' calls UTM(DDD), and not on the
    halting DDD,
    and definitely not on HHH(DDD), itself.


    All halt deciders are required to predict the behavior
    of their input. HHH does correctly predict that DDD correctly
    simulated by HHH cannot possibly reach its own simulated
    "return" instruction final halt state.


    How is it a "correct prediction" if it sees something different
    than what that DDD does.


    What DDD does is keep calling HHH(DDD) in recursive
    simulation until HHH kills this whole process.

    But the behavior of the program continues past that (something you
    don't seem to understand) and that behavior will also have its HHH
    terminate the DDD it is simulating and return 0 to DDD and then Halt.

    Your problem is you don't understand that the simulating HHH doesn't
    define the behavior of DDD, it is the execution of DDD that defines
    what a correct simulation of it is.


    Remember, to have simulated that DDD, it must have included the
    code of the HHH that it was based on, which is the HHH that made
    the prediction, and thus returns 0, so DDD will halt.


    We are not asking: Does DDD() halt.
    That is (as it turns out) an incorrect question.

    No, that is EXACTLY the question.

    I guess you are just admitting that your whole world is based on
    LYING about what things are supposed to be.


    Turing machines cannot directly report on the behavior
    of other Turing machines; they can at best indirectly
    report on the behavior of Turing machines through the
    proxy of finite string machine descriptions such as ⟨M⟩.

    Right, and HHH was given the equivalent of (M) by being given the
    code of *ALL* of DDD

    I guess you don't understand that fact, even though you CLAIM the
    input is the proper representation of DDD.


    Thus the behavior specified by the input finite string
    overrules and supersedes the behavior of the direct
    execution.

    No, it is DEFINED to be the behavior of the direct execution of the
    program it represents.


    *That has always been the fatal flaw of all of the proofs*

    No, your failure to follow the rules is what makes you just a liar.


    _DDD()
    [00002192] 55         push ebp
    [00002193] 8bec       mov ebp,esp
    [00002195] 6892210000 push 00002192  // push DDD
    [0000219a] e833f4ffff call 000015d2  // call HHH
    [0000219f] 83c404     add esp,+04
    [000021a2] 5d         pop ebp
    [000021a3] c3         ret
    Size in bytes:(0018) [000021a3]

    When the above code is in the same memory space as HHH
    such that DDD calls HHH(DDD) and then HHH does emulate
    itself emulating DDD then this does specify recursive
    emulation.

    Anyone or anything that disagrees would be disagreeing
    with the definition of the x86 language.


    So, if HHH accesses that memory, it becomes part of the input.

    All you are doing is establishing that you just don't know what you are talking about.

    Sorry, your ignorance does not make your claim true, it just makes you
    a pathological liar.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Tue Jul 29 20:24:40 2025
    From Newsgroup: comp.ai.philosophy

    On 7/29/2025 5:37 PM, Richard Damon wrote:
    On 7/29/25 12:53 PM, olcott wrote:
    On 7/28/2025 8:56 PM, Richard Damon wrote:
    On 7/28/25 7:58 PM, olcott wrote:
    On 7/28/2025 6:49 PM, Richard Damon wrote:
    On 7/28/25 7:20 PM, olcott wrote:
    On 7/28/2025 5:57 PM, Richard Damon wrote:
    On 7/28/25 9:54 AM, olcott wrote:
    On 7/28/2025 8:21 AM, joes wrote:
    Am Mon, 28 Jul 2025 07:11:11 -0500 schrieb olcott:
    On 7/28/2025 2:30 AM, joes wrote:
    Am Sun, 27 Jul 2025 21:58:05 -0500 schrieb olcott:
    On 7/27/2025 9:48 PM, Richard Damon wrote:
    On 7/27/25 8:20 PM, olcott wrote:

    When DDD is emulated by HHH1 it need not emulate itself at all.
    But "itself" doesn't matter to x86 instructions,
    By itself I mean the exact same machine code bytes at the exact same
    machine address.
    Yeah, so when you change HHH to abort later, you also change DDD.
    HHH is never changed.

    It is changed in the hypothetical unaborted simulation. HHH is
    reporting
    on UTM(HHH', DDD) where HHH' calls UTM(DDD), and not on the
    halting DDD,
    and definitely not on HHH(DDD), itself.


    All halt deciders are required to predict the behavior
    of their input. HHH does correctly predict that DDD correctly
    simulated by HHH cannot possibly reach its own simulated
    "return" instruction final halt state.


    How is it a "correct prediction" if it sees something different
    than what that DDD does.


    What DDD does is keep calling HHH(DDD) in recursive
    simulation until HHH kills this whole process.

    But the behavior of the program continues past that (something you
    don't seem to understand) and that behavior will also have its HHH
    terminate the DDD it is simulating and return 0 to DDD and then Halt.

    Your problem is you don't understand that the simulating HHH
    doesn't define the behavior of DDD, it is the execution of DDD that
    defines what a correct simulation of it is.


    Remember, to have simulated that DDD, it must have included the
    code of the HHH that it was based on, which is the HHH that made
    the prediction, and thus returns 0, so DDD will halt.


    We are not asking: Does DDD() halt.
    That is (as it turns out) an incorrect question.

    No, that is EXACTLY the question.

    I guess you are just admitting that your whole world is based on
    LYING about what things are supposed to be.


    Turing machines cannot directly report on the behavior
    of other Turing machines; they can at best indirectly
    report on the behavior of Turing machines through the
    proxy of finite string machine descriptions such as ⟨M⟩.

    Right, and HHH was given the equivalent of (M) by being given the
    code of *ALL* of DDD

    I guess you don't understand that fact, even though you CLAIM the
    input is the proper representation of DDD.


    Thus the behavior specified by the input finite string
    overrules and supersedes the behavior of the direct
    execution.

    No, it is DEFINED to be the behavior of the direct execution of the
    program it represents.


    *That has always been the fatal flaw of all of the proofs*

    No, your failure to follow the rules is what makes you just a liar.


    _DDD()
    [00002192] 55         push ebp
    [00002193] 8bec       mov ebp,esp
    [00002195] 6892210000 push 00002192  // push DDD
    [0000219a] e833f4ffff call 000015d2  // call HHH
    [0000219f] 83c404     add esp,+04
    [000021a2] 5d         pop ebp
    [000021a3] c3         ret
    Size in bytes:(0018) [000021a3]

    When the above code is in the same memory space as HHH
    such that DDD calls HHH(DDD) and then HHH does emulate
    itself emulating DDD then this does specify recursive
    emulation.

    Anyone or anything that disagrees would be disagreeing
    with the definition of the x86 language.


    So, if HHH accesses that memory, it becomes part of the input.


    It becomes part of the input in the sense that the
    correct simulation of the input to HHH(DDD) is not
    the same as the correct simulation of the input to
    HHH1(DDD) because DDD only calls HHH(DDD) and does
    not call HHH1(DDD).

    DDD correctly simulated by HHH cannot possibly
    halt thus HHH(DDD)==0 is correct.

    DDD correctly simulated by HHH1 does halt thus
    HHH1(DDD)==1 is correct.
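
    A variation of the earlier simplified sketch with a second simulator
    HHH1: both have the same body, but DDD names HHH in its call, so only
    HHH is ever re-entered.  The names and the flag mechanism here are
    assumptions for illustration, not the thread's actual emulator code.

    #include <stdio.h>

    typedef void (*func)(void);

    static int in_HHH = 0, in_HHH1 = 0;

    int HHH(func p)
    {
        if (in_HHH) return 0;          /* re-entered: report "would not halt" */
        in_HHH = 1;
        p();                           /* "simulate" p by running it          */
        in_HHH = 0;
        return 1;
    }

    int HHH1(func p)                   /* same body, different identity       */
    {
        if (in_HHH1) return 0;
        in_HHH1 = 1;
        p();                           /* DDD calls HHH in here, never HHH1   */
        in_HHH1 = 0;
        return 1;
    }

    void DDD(void)
    {
        HHH(DDD);                      /* the call is to HHH specifically     */
    }

    int main(void)
    {
        printf("HHH1(DDD) = %d\n", HHH1(DDD));  /* 1: the simulated DDD returns */
        printf("HHH(DDD)  = %d\n", HHH(DDD));   /* also 1 at the top level here */
        return 0;
    }

    In this simplification only the HHH that DDD itself calls returns 0;
    whether the outermost HHH(DDD) should also report 0, as claimed above,
    is the point in dispute.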
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Tue Jul 29 21:51:45 2025
    From Newsgroup: comp.ai.philosophy

    On 7/29/25 9:24 PM, olcott wrote:
    On 7/29/2025 5:37 PM, Richard Damon wrote:
    On 7/29/25 12:53 PM, olcott wrote:
    On 7/28/2025 8:56 PM, Richard Damon wrote:
    On 7/28/25 7:58 PM, olcott wrote:
    On 7/28/2025 6:49 PM, Richard Damon wrote:
    On 7/28/25 7:20 PM, olcott wrote:
    On 7/28/2025 5:57 PM, Richard Damon wrote:
    On 7/28/25 9:54 AM, olcott wrote:
    On 7/28/2025 8:21 AM, joes wrote:
    Am Mon, 28 Jul 2025 07:11:11 -0500 schrieb olcott:
    On 7/28/2025 2:30 AM, joes wrote:
    Am Sun, 27 Jul 2025 21:58:05 -0500 schrieb olcott:
    On 7/27/2025 9:48 PM, Richard Damon wrote:
    On 7/27/25 8:20 PM, olcott wrote:

    When DDD is emulated by HHH1 it need not emulate itself at all.
    But "itself" doesn't matter to x86 instructions,
    By itself I mean the exact same machine code bytes at the exact same
    machine address.
    Yeah, so when you change HHH to abort later, you also change DDD.
    HHH is never changed.

    It is changed in the hypothetical unaborted simulation. HHH is
    reporting
    on UTM(HHH', DDD) where HHH' calls UTM(DDD), and not on the
    halting DDD,
    and definitely not on HHH(DDD), itself.


    All halt deciders are required to predict the behavior
    of their input. HHH does correctly predict that DDD correctly
    simulated by HHH cannot possibly reach its own simulated
    "return" instruction final halt state.


    How is it a "correct prediction" if it sees something different
    than what that DDD does.


    What DDD does is keep calling HHH(DDD) in recursive
    simulation until HHH kills this whole process.

    But the behavior of the program continues past that (something you
    don't seem to understand) and that behavior will also have its HHH
    terminate the DDD it is simulating and return 0 to DDD and then Halt.
    Your problem is you don't understand that the simulating HHH
    doesn't define the behavior of DDD, it is the execution of DDD
    that defines what a correct simulation of it is.


    Remember, to have simulated that DDD, it must have included the
    code of the HHH that it was based on, which is the HHH that made
    the prediction, and thus returns 0, so DDD will halt.


    We are not asking: Does DDD() halt.
    That is (as it turns out) an incorrect question.

    No, that is EXACTLY the question.

    I guess you are just admitting that your whole world is based on
    LYING about what things are supposed to be.


    Turing machines cannot directly report on the behavior
    of other Turing machines; they can at best indirectly
    report on the behavior of Turing machines through the
    proxy of finite string machine descriptions such as ⟨M⟩.

    Right, and HHH was given the equivalent of (M) by being given the
    code of *ALL* of DDD

    I guess you don't understand that fact, even though you CLAIM the
    input is the proper representation of DDD.


    Thus the behavior specified by the input finite string
    overrules and supersedes the behavior of the direct
    execution.

    No, it is DEFINED to be the behavior of the direct execution of
    the program it represents.


    *That has always been the fatal flaw of all of the proofs*

    No, your failure to follow the rules is what makes you just a liar.


    _DDD()
    [00002192] 55         push ebp
    [00002193] 8bec       mov ebp,esp
    [00002195] 6892210000 push 00002192  // push DDD
    [0000219a] e833f4ffff call 000015d2  // call HHH
    [0000219f] 83c404     add esp,+04
    [000021a2] 5d         pop ebp
    [000021a3] c3         ret
    Size in bytes:(0018) [000021a3]

    When the above code is in the same memory space as HHH
    such that DDD calls HHH(DDD) and then HHH does emulate
    itself emulating DDD then this does specify recursive
    emulation.

    Anyone or anything that disagrees would be disagreeing
    with the definition of the x86 language.


    So, if HHH accesses that memory, it becomes part of the input.


    It becomes part of the input in the sense that the
    correct simulation of the input to HHH(DDD) is not
    the same as the correct simulation of the input to
    HHH1(DDD) because DDD only calls HHH(DDD) and does
    not call HHH1(DDD).

    DDD correctly simulated by HHH cannot possibly
    halt thus HHH(DDD)==0 is correct.

    DDD correctly simulated by HHH1 does halt thus
    HHH1(DDD)==1 is correct.



    It either *IS* or it *ISN'T*; there is no middle.

    The "correct simulation" of a piece of code depends only on that code,
    and its full input. IT doesn't matter whether HHH or HHH1 is simulating it.


    If you wish to disagree, what is the first x86 instruction, properly
    simulated per the x86 language specification, that differs in behavior
    between those two simulations?

    Your inability to answer that just proves you are lying.

    You are just showing that you don't understand the meaning of the words
    you use, which just establishes that you are a pathological liar.
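
    As a self-contained illustration of the point about correct simulation
    (added here, with a toy single-step rule standing in for the x86
    specification): because the step function is deterministic, any two
    simulators that follow it on the same code and input necessarily
    produce the same trace.

    #include <stdio.h>

    /* A toy "program" is a string of instructions: '+' increment,
       '-' decrement, anything else halts.  step() is its single,
       deterministic transition rule. */

    typedef struct { int pc; int acc; } state;

    static int step(const char *prog, state *s)      /* 1 = still running */
    {
        switch (prog[s->pc]) {
        case '+': s->acc++; s->pc++; return 1;
        case '-': s->acc--; s->pc++; return 1;
        default:  return 0;                          /* halt              */
        }
    }

    static void simulate(const char *name, const char *prog)
    {
        state s = {0, 0};
        printf("%s:", name);
        while (step(prog, &s))
            printf(" acc=%d", s.acc);
        printf(" (halted)\n");
    }

    int main(void)
    {
        const char *prog = "++-h";
        simulate("simulator A", prog);   /* same code, same input ...      */
        simulate("simulator B", prog);   /* ... therefore the same trace   */
        return 0;
    }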


    --- Synchronet 3.21a-Linux NewsLink 1.2