• Succinct rebuttal to the Linz halting problem proof.

    From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Mon Aug 4 13:29:04 2025
    From Newsgroup: comp.ai.philosophy

    Diagonalization only arises when one assumes that a
    Turing machine decider must report on its own behavior
    instead of the behavior specified by its machine description.
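The diagonalization referred to here is the standard construction from the Turing/Linz proofs. As a rough Python model (H stands in for a hypothetical halt decider; the names are illustrative, not from the post):

```python
def make_D(H):
    """Build the classic diagonal program D from a claimed halt
    decider H: D does the opposite of whatever H predicts about
    D applied to its own description."""
    def D(desc):
        if H(desc, desc):       # H predicts "halts" -> D loops forever
            while True:
                pass
        return True             # H predicts "doesn't halt" -> D halts
    return D

# A stand-in "decider" that always answers "doesn't halt":
H = lambda program, data: False
D = make_D(H)
print(D(D))   # D halts (prints True), so H's verdict about D was wrong
```

Whichever fixed verdict the stand-in H gives, D's behavior contradicts it; that is the diagonal step the proofs rely on.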

    Everyone assumes that these must always be the same.
    That assumption is proven to be incorrect.

    When one assumes a halt decider based on a UTM then
    the simulated input remains stuck in recursive simulation
    never reaching simulated states ⟨Ĥ.∞⟩ or ⟨Ĥ.qn⟩.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    on and on never reaching any simulated final state of ⟨Ĥ.qn⟩

    When embedded_H aborts its simulation and transitions to Ĥ.qn
    on the basis that its simulated input cannot possibly reach its own
    simulated final halt state of ⟨Ĥ.qn⟩ embedded_H is correct.
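The recursive-simulation pattern in steps (a)-(f), and the abort described above, can be sketched as a toy Python model (a nesting-depth counter stands in for embedded_H's actual abort criterion; all names are illustrative):

```python
def embedded_H(program, data, depth=0, max_depth=3):
    # Simulate program(data). If the simulation nests beyond max_depth,
    # abort and report non-halting -- this models "the simulated input
    # cannot possibly reach its simulated final halt state".
    if depth > max_depth:
        return False              # abort: maps to transitioning to qn
    return program(data, depth)

def H_hat(desc, depth=0):
    # (a) copy the input, (b) invoke embedded_H on the pair of copies;
    # (c)-(f) the simulated copy repeats the same cycle one level deeper.
    result = embedded_H(desc, desc, depth + 1)
    if result:                    # verdict "halts": loop forever (∞)
        while True:
            pass
    return result                 # verdict "doesn't halt": halt (qn)

print(embedded_H(H_hat, H_hat))   # the abort fires: prints False
```

Each simulated level re-enters the same cycle until the depth bound trips, which is the "stuck in recursive simulation" behavior the post describes; note that after the abort the outermost H_hat would itself halt, which is exactly the point under dispute in the replies below.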

    This causes embedded_H itself to halt, thus contradicting its result
    *only if a Turing machine decider can be applied to its actual self*
    and not merely its own machine description.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mr Flibble@flibble@red-dwarf.jmc.corp to comp.theory,sci.logic,comp.ai.philosophy on Mon Aug 4 22:34:14 2025
    From Newsgroup: comp.ai.philosophy

    On Mon, 04 Aug 2025 13:29:04 -0500, olcott wrote:

    Diagonalization only arises when one assumes that a Turing machine
    decider must report on its own behavior instead of the behavior
    specified by its machine description.

    Everyone assumes that these must always be the same.
    That assumption is proven to be incorrect.

    When one assumes a halt decider based on a UTM then the simulated input remains stuck in recursive simulation never reaching simulated states ⟨Ĥ.∞⟩ or ⟨Ĥ.qn⟩.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    on and on never reaching any simulated final state of ⟨Ĥ.qn⟩

    When embedded_H aborts its simulation and transitions to Ĥ.qn on the
    basis that its simulated input cannot possibly reach its own simulated
    final halt state of ⟨Ĥ.qn⟩ embedded_H is correct.

    This causes embedded_H itself to halt, thus contradicting its result
    *only if a Turing machine decider can be applied to its actual self* and
    not merely its own machine description.

    Your Ĥ is not a halt decider as defined by the Halting Problem so has
    nothing to do with the Halting Problem.

    /Flibble
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Mon Aug 4 17:42:24 2025
    From Newsgroup: comp.ai.philosophy

    On 8/4/2025 5:34 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 13:29:04 -0500, olcott wrote:

    Diagonalization only arises when one assumes that a Turing machine
    decider must report on its own behavior instead of the behavior
    specified by its machine description.

    Everyone assumes that these must always be the same.
    That assumption is proven to be incorrect.

    When one assumes a halt decider based on a UTM then the simulated input
    remains stuck in recursive simulation never reaching simulated states
    ⟨Ĥ.∞⟩ or ⟨Ĥ.qn⟩.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    on and on never reaching any simulated final state of ⟨Ĥ.qn⟩

    When embedded_H aborts its simulation and transitions to Ĥ.qn on the
    basis that its simulated input cannot possibly reach its own simulated
    final halt state of ⟨Ĥ.qn⟩ embedded_H is correct.

    This causes embedded_H itself to halt, thus contradicting its result
    *only if a Turing machine decider can be applied to its actual self* and
    not merely its own machine description.

    Your Ĥ is not a halt decider as defined by the Halting Problem so has nothing to do with the Halting Problem.

    /Flibble

    You have this part incorrectly. Ask Richard because
    of what he explained to you the other night he may
    correct you on this.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mr Flibble@flibble@red-dwarf.jmc.corp to comp.theory,sci.logic,comp.ai.philosophy on Mon Aug 4 22:44:35 2025
    From Newsgroup: comp.ai.philosophy

    On Mon, 04 Aug 2025 17:42:24 -0500, olcott wrote:

    On 8/4/2025 5:34 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 13:29:04 -0500, olcott wrote:

    Diagonalization only arises when one assumes that a Turing machine
    decider must report on its own behavior instead of the behavior
    specified by its machine description.

    Everyone assumes that these must always be the same.
    That assumption is proven to be incorrect.

    When one assumes a halt decider based on a UTM then the simulated
    input remains stuck in recursive simulation never reaching simulated
    states ⟨Ĥ.∞⟩ or ⟨Ĥ.qn⟩.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    on and on never reaching any simulated final state of ⟨Ĥ.qn⟩

    When embedded_H aborts its simulation and transitions to Ĥ.qn on the
    basis that its simulated input cannot possibly reach its own simulated
    final halt state of ⟨Ĥ.qn⟩ embedded_H is correct.

    This causes embedded_H itself to halt, thus contradicting its result
    *only if a Turing machine decider can be applied to its actual self*
    and not merely its own machine description.

    Your Ĥ is not a halt decider as defined by the Halting Problem so has
    nothing to do with the Halting Problem.

    /Flibble

    You have this part incorrectly. Ask Richard because of what he explained
    to you the other night he may correct you on this.

    No, your halt decider is a partial decider, Halting Problem deciders are
    total not partial.

    /Flibble
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Mon Aug 4 17:57:30 2025
    From Newsgroup: comp.ai.philosophy

    On 8/4/2025 5:44 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 17:42:24 -0500, olcott wrote:

    On 8/4/2025 5:34 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 13:29:04 -0500, olcott wrote:

    Diagonalization only arises when one assumes that a Turing machine
    decider must report on its own behavior instead of the behavior
    specified by its machine description.

    Everyone assumes that these must always be the same.
    That assumption is proven to be incorrect.

    When one assumes a halt decider based on a UTM then the simulated
    input remains stuck in recursive simulation never reaching simulated
    states ⟨Ĥ.∞⟩ or ⟨Ĥ.qn⟩.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    on and on never reaching any simulated final state of ⟨Ĥ.qn⟩

    When embedded_H aborts its simulation and transitions to Ĥ.qn on the
    basis that its simulated input cannot possibly reach its own simulated
    final halt state of ⟨Ĥ.qn⟩ embedded_H is correct.

    This causes embedded_H itself to halt, thus contradicting its result
    *only if a Turing machine decider can be applied to its actual self*
    and not merely its own machine description.

    Your Ĥ is not a halt decider as defined by the Halting Problem so has
    nothing to do with the Halting Problem.

    /Flibble

    You have this part incorrectly. Ask Richard because of what he explained
    to you the other night he may correct you on this.

    No, your halt decider is a partial decider, Halting Problem deciders are total not partial.

    /Flibble

    Not exactly. The HP proofs attempt to prove that
    no total halt decider exists on the basis that one
    self-referential input cannot be decided by any
    decider including partial deciders.

    The technical term "decider" does not mean its
    conventional meaning of one who decides. It means
    an infallible Turing machine that always decides
    correctly. Since this is too misleading for most
    people I used "termination analyzer".
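The partial/total distinction the two posters are arguing over can be modeled in a few lines of Python (a deliberately crude syntactic check stands in for real analysis; all names here are illustrative):

```python
def partial_decider(program_src: str) -> str:
    """A partial halt decider (termination analyzer) is allowed to
    answer 'unknown' on inputs it cannot analyze. A *total* halt
    decider would have to answer 'halts' or 'loops' on every input,
    with no third option."""
    if "while True" in program_src and "break" not in program_src:
        return "loops"      # trivially non-terminating
    if "while" not in program_src and "for" not in program_src:
        return "halts"      # straight-line code always terminates
    return "unknown"        # the escape hatch a total decider lacks

print(partial_decider("while True: pass"))     # loops
print(partial_decider("x = 1 + 2"))            # halts
print(partial_decider("while n > 1: n -= 1"))  # unknown
```

The "unknown" branch is what makes this a partial rather than a total decider, which is the crux of the exchange that follows.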
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mr Flibble@flibble@red-dwarf.jmc.corp to comp.theory,sci.logic,comp.ai.philosophy on Mon Aug 4 23:04:07 2025
    From Newsgroup: comp.ai.philosophy

    On Mon, 04 Aug 2025 17:57:30 -0500, olcott wrote:

    On 8/4/2025 5:44 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 17:42:24 -0500, olcott wrote:

    On 8/4/2025 5:34 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 13:29:04 -0500, olcott wrote:

    Diagonalization only arises when one assumes that a Turing machine
    decider must report on its own behavior instead of the behavior
    specified by its machine description.

    Everyone assumes that these must always be the same.
    That assumption is proven to be incorrect.

    When one assumes a halt decider based on a UTM then the simulated
    input remains stuck in recursive simulation never reaching simulated
    states ⟨Ĥ.∞⟩ or ⟨Ĥ.qn⟩.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    on and on never reaching any simulated final state of ⟨Ĥ.qn⟩

    When embedded_H aborts its simulation and transitions to Ĥ.qn on the
    basis that its simulated input cannot possibly reach its own
    simulated final halt state of ⟨Ĥ.qn⟩ embedded_H is correct.

    This causes embedded_H itself to halt, thus contradicting its result
    *only if a Turing machine decider can be applied to its actual self*
    and not merely its own machine description.

    Your Ĥ is not a halt decider as defined by the Halting Problem so has
    nothing to do with the Halting Problem.

    /Flibble

    You have this part incorrectly. Ask Richard because of what he
    explained to you the other night he may correct you on this.

    No, your halt decider is a partial decider, Halting Problem deciders
    are total not partial.

    /Flibble

    Not exactly. The HP proofs attempt to prove that no total halt decider
    exists on the basis that one self-referential input cannot be decided by
    any decider including partial deciders.

    Wrong. Partial deciders have nothing to do with the Halting Problem.


    The technical term "decider" does not mean its conventional meaning of
    one who decides. It means an infallible Turing machine that always
    decides correctly. Since this is too misleading for most people I used "termination analyzer".

    Halting deciders and termination analyzers are different things and you do
    not get to redefine terms to suit your bogus argument.

    /Flibble
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Mon Aug 4 18:21:13 2025
    From Newsgroup: comp.ai.philosophy

    On 8/4/2025 6:04 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 17:57:30 -0500, olcott wrote:

    On 8/4/2025 5:44 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 17:42:24 -0500, olcott wrote:

    On 8/4/2025 5:34 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 13:29:04 -0500, olcott wrote:

    Diagonalization only arises when one assumes that a Turing machine
    decider must report on its own behavior instead of the behavior
    specified by its machine description.

    Everyone assumes that these must always be the same.
    That assumption is proven to be incorrect.

    When one assumes a halt decider based on a UTM then the simulated
    input remains stuck in recursive simulation never reaching simulated
    states ⟨Ĥ.∞⟩ or ⟨Ĥ.qn⟩.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    on and on never reaching any simulated final state of ⟨Ĥ.qn⟩

    When embedded_H aborts its simulation and transitions to Ĥ.qn on the
    basis that its simulated input cannot possibly reach its own
    simulated final halt state of ⟨Ĥ.qn⟩ embedded_H is correct.

    This causes embedded_H itself to halt, thus contradicting its result
    *only if a Turing machine decider can be applied to its actual self*
    and not merely its own machine description.

    Your Ĥ is not a halt decider as defined by the Halting Problem so has
    nothing to do with the Halting Problem.

    /Flibble

    You have this part incorrectly. Ask Richard because of what he
    explained to you the other night he may correct you on this.

    No, your halt decider is a partial decider, Halting Problem deciders
    are total not partial.

    /Flibble

    Not exactly. The HP proofs attempt to prove that no total halt decider
    exists on the basis that one self-referential input cannot be decided by
    any decider including partial deciders.

    Wrong. Partial deciders have nothing to do with the Halting Problem.


    The technical term "decider" does not mean its conventional meaning of
    one who decides. It means an infallible Turing machine that always
    decides correctly. Since this is too misleading for most people I used
    "termination analyzer".

    Halting deciders and termination analyzers are different things and you do not get to redefine terms to suit your bogus argument.

    /Flibble

    I am using the term: "termination analyzer" that is not
    misleading at all in place of the clumsy and confusing
    term "partial halt decider".
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mr Flibble@flibble@red-dwarf.jmc.corp to comp.theory,sci.logic,comp.ai.philosophy on Mon Aug 4 23:29:17 2025
    From Newsgroup: comp.ai.philosophy

    On Mon, 04 Aug 2025 18:21:13 -0500, olcott wrote:

    On 8/4/2025 6:04 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 17:57:30 -0500, olcott wrote:

    On 8/4/2025 5:44 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 17:42:24 -0500, olcott wrote:

    On 8/4/2025 5:34 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 13:29:04 -0500, olcott wrote:

    Diagonalization only arises when one assumes that a Turing machine
    decider must report on its own behavior instead of the behavior
    specified by its machine description.

    Everyone assumes that these must always be the same.
    That assumption is proven to be incorrect.

    When one assumes a halt decider based on a UTM then the simulated
    input remains stuck in recursive simulation never reaching
    simulated states ⟨Ĥ.∞⟩ or ⟨Ĥ.qn⟩.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    on and on never reaching any simulated final state of ⟨Ĥ.qn⟩

    When embedded_H aborts its simulation and transitions to Ĥ.qn on
    the basis that its simulated input cannot possibly reach its own
    simulated final halt state of ⟨Ĥ.qn⟩ embedded_H is correct.

    This causes embedded_H itself to halt, thus contradicting its
    result *only if a Turing machine decider can be applied to its
    actual self*
    and not merely its own machine description.

    Your Ĥ is not a halt decider as defined by the Halting Problem so
    has nothing to do with the Halting Problem.

    /Flibble

    You have this part incorrectly. Ask Richard because of what he
    explained to you the other night he may correct you on this.

    No, your halt decider is a partial decider, Halting Problem deciders
    are total not partial.

    /Flibble

    Not exactly. The HP proofs attempt to prove that no total halt decider
    exists on the basis that one self-referential input cannot be decided by
    any decider including partial deciders.

    Wrong. Partial deciders have nothing to do with the Halting Problem.


    The technical term "decider" does not mean its conventional meaning of
    one who decides. It means an infallible Turing machine that always
    decides correctly. Since this is too misleading for most people I used
    "termination analyzer".

    Halting deciders and termination analyzers are different things and you
    do not get to redefine terms to suit your bogus argument.

    /Flibble

    I am using the term: "termination analyzer" that is not misleading at
    all in place of the clumsy and confusing term "partial halt decider".

    Whilst the two terms are interchangeable it doesn't alter the fact that neither term is related to the Halting Problem which is only concerned
    with *TOTAL* HALT DECIDERS.

    /Flibble
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Mon Aug 4 18:41:40 2025
    From Newsgroup: comp.ai.philosophy

    On 8/4/2025 6:29 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 18:21:13 -0500, olcott wrote:

    On 8/4/2025 6:04 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 17:57:30 -0500, olcott wrote:

    On 8/4/2025 5:44 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 17:42:24 -0500, olcott wrote:

    On 8/4/2025 5:34 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 13:29:04 -0500, olcott wrote:

    Diagonalization only arises when one assumes that a Turing machine
    decider must report on its own behavior instead of the behavior
    specified by its machine description.

    Everyone assumes that these must always be the same.
    That assumption is proven to be incorrect.

    When one assumes a halt decider based on a UTM then the simulated
    input remains stuck in recursive simulation never reaching
    simulated states ⟨Ĥ.∞⟩ or ⟨Ĥ.qn⟩.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    on and on never reaching any simulated final state of ⟨Ĥ.qn⟩

    When embedded_H aborts its simulation and transitions to Ĥ.qn on
    the basis that its simulated input cannot possibly reach its own
    simulated final halt state of ⟨Ĥ.qn⟩ embedded_H is correct.

    This causes embedded_H itself to halt, thus contradicting its
    result *only if a Turing machine decider can be applied to its
    actual self*
    and not merely its own machine description.

    Your Ĥ is not a halt decider as defined by the Halting Problem so
    has nothing to do with the Halting Problem.

    /Flibble

    You have this part incorrectly. Ask Richard because of what he
    explained to you the other night he may correct you on this.

    No, your halt decider is a partial decider, Halting Problem deciders
    are total not partial.

    /Flibble

    Not exactly. The HP proofs attempt to prove that no total halt decider
    exists on the basis that one self-referential input cannot be decided
    by any decider including partial deciders.

    Wrong. Partial deciders have nothing to do with the Halting Problem.


    The technical term "decider" does not mean its conventional meaning of
    one who decides. It means an infallible Turing machine that always
    decides correctly. Since this is too misleading for most people I used
    "termination analyzer".

    Halting deciders and termination analyzers are different things and you
    do not get to redefine terms to suit your bogus argument.

    /Flibble

    I am using the term: "termination analyzer" that is not misleading at
    all in place of the clumsy and confusing term "partial halt decider".

    Whilst the two terms are interchangeable it doesn't alter the fact that neither term is related to the Halting Problem which is only concerned
    with *TOTAL* HALT DECIDERS.

    /Flibble

    You keep missing a subtle nuance.
    The HP presumes that it proves that no total halt decider
    exists on the basis that it believes that it has found an
    input that no (total or partial) decider can possibly
    analyze correctly. Try asking any of the chatbots.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mr Flibble@flibble@red-dwarf.jmc.corp to comp.theory,sci.logic,comp.ai.philosophy on Mon Aug 4 23:54:26 2025
    From Newsgroup: comp.ai.philosophy

    On Mon, 04 Aug 2025 18:41:40 -0500, olcott wrote:

    On 8/4/2025 6:29 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 18:21:13 -0500, olcott wrote:

    On 8/4/2025 6:04 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 17:57:30 -0500, olcott wrote:

    On 8/4/2025 5:44 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 17:42:24 -0500, olcott wrote:

    On 8/4/2025 5:34 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 13:29:04 -0500, olcott wrote:

    Diagonalization only arises when one assumes that a Turing
    machine decider must report on its own behavior instead of the
    behavior specified by its machine description.

    Everyone assumes that these must always be the same.
    That assumption is proven to be incorrect.

    When one assumes a halt decider based on a UTM then the
    simulated input remains stuck in recursive simulation never
    reaching simulated states ⟨Ĥ.∞⟩ or ⟨Ĥ.qn⟩.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    on and on never reaching any simulated final state of ⟨Ĥ.qn⟩

    When embedded_H aborts its simulation and transitions to Ĥ.qn on
    the basis that its simulated input cannot possibly reach its own
    simulated final halt state of ⟨Ĥ.qn⟩ embedded_H is correct.

    This causes embedded_H itself to halt, thus contradicting its
    result *only if a Turing machine decider can be applied to its
    actual self*
    and not merely its own machine description.

    Your Ĥ is not a halt decider as defined by the Halting Problem so
    has nothing to do with the Halting Problem.

    /Flibble

    You have this part incorrectly. Ask Richard because of what he
    explained to you the other night he may correct you on this.

    No, your halt decider is a partial decider, Halting Problem
    deciders are total not partial.

    /Flibble

    Not exactly. The HP proofs attempt to prove that no total halt
    decider exists on the basis that one self-referential input cannot be
    decided by any decider including partial deciders.

    Wrong. Partial deciders have nothing to do with the Halting Problem.


    The technical term "decider" does not mean its conventional meaning
    of one who decides. It means an infallible Turing machine that
    always decides correctly. Since this is too misleading for most
    people I used "termination analyzer".

    Halting deciders and termination analyzers are different things and
    you do not get to redefine terms to suit your bogus argument.

    /Flibble

    I am using the term: "termination analyzer" that is not misleading at
    all in place of the clumsy and confusing term "partial halt decider".

    Whilst the two terms are interchangeable it doesn't alter the fact that
    neither term is related to the Halting Problem which is only concerned
    with *TOTAL* HALT DECIDERS.

    /Flibble

    You keep missing a subtle nuance.
    The HP presumes that it proves that no total halt decider exists on the
    basis that it believes that it has found an input that no (total or
    partial) decider can possibly analyze correctly. Try asking any of the chatbots.

    I am not missing any nuance, you are.

    Even though no total halt decider exists (as proven by the Halting Problem proofs) it is the case that the Halting Problem is only concerned with
    total halt deciders NOT partial halt deciders (aka termination analyzers).

    /Flibble
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Mon Aug 4 19:02:25 2025
    From Newsgroup: comp.ai.philosophy

    On 8/4/2025 6:54 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 18:41:40 -0500, olcott wrote:

    On 8/4/2025 6:29 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 18:21:13 -0500, olcott wrote:

    On 8/4/2025 6:04 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 17:57:30 -0500, olcott wrote:

    On 8/4/2025 5:44 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 17:42:24 -0500, olcott wrote:

    On 8/4/2025 5:34 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 13:29:04 -0500, olcott wrote:

    Diagonalization only arises when one assumes that a Turing
    machine decider must report on its own behavior instead of the
    behavior specified by its machine description.

    Everyone assumes that these must always be the same.
    That assumption is proven to be incorrect.

    When one assumes a halt decider based on a UTM then the
    simulated input remains stuck in recursive simulation never
    reaching simulated states ⟨Ĥ.∞⟩ or ⟨Ĥ.qn⟩.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    on and on never reaching any simulated final state of ⟨Ĥ.qn⟩

    When embedded_H aborts its simulation and transitions to Ĥ.qn on
    the basis that its simulated input cannot possibly reach its own
    simulated final halt state of ⟨Ĥ.qn⟩ embedded_H is correct.

    This causes embedded_H itself to halt, thus contradicting its
    result *only if a Turing machine decider can be applied to its
    actual self*
    and not merely its own machine description.

    Your Ĥ is not a halt decider as defined by the Halting Problem so
    has nothing to do with the Halting Problem.

    /Flibble

    You have this part incorrectly. Ask Richard because of what he
    explained to you the other night he may correct you on this.

    No, your halt decider is a partial decider, Halting Problem
    deciders are total not partial.

    /Flibble

    Not exactly. The HP proofs attempt to prove that no total halt
    decider exists on the basis that one self-referential input cannot be
    decided by any decider including partial deciders.

    Wrong. Partial deciders have nothing to do with the Halting Problem.

    The technical term "decider" does not mean its conventional meaning
    of one who decides. It means an infallible Turing machine that
    always decides correctly. Since this is too misleading for most
    people I used "termination analyzer".

    Halting deciders and termination analyzers are different things and
    you do not get to redefine terms to suit your bogus argument.

    /Flibble

    I am using the term: "termination analyzer" that is not misleading at
    all in place of the clumsy and confusing term "partial halt decider".

    Whilst the two terms are interchangeable it doesn't alter the fact that
    neither term is related to the Halting Problem which is only concerned
    with *TOTAL* HALT DECIDERS.

    /Flibble

    You keep missing a subtle nuance.
    The HP presumes that it proves that no total halt decider exists on the
    basis that it believes that it has found an input that no (total or
    partial) decider can possibly analyze correctly. Try asking any of the
    chatbots.

    I am not missing any nuance, you are.

    Even though no total halt decider exists (as proven by the Halting Problem proofs) it is the case that the Halting Problem is only concerned with
    total halt deciders NOT partial halt deciders (aka termination analyzers).

    /Flibble

    I may be incorrect yet I no longer believe
    that you are sincere about this.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mr Flibble@flibble@red-dwarf.jmc.corp to comp.theory,sci.logic,comp.ai.philosophy on Tue Aug 5 00:24:10 2025
    From Newsgroup: comp.ai.philosophy

    On Mon, 04 Aug 2025 19:02:25 -0500, olcott wrote:

    On 8/4/2025 6:54 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 18:41:40 -0500, olcott wrote:

    On 8/4/2025 6:29 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 18:21:13 -0500, olcott wrote:

    On 8/4/2025 6:04 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 17:57:30 -0500, olcott wrote:

    On 8/4/2025 5:44 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 17:42:24 -0500, olcott wrote:

    On 8/4/2025 5:34 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 13:29:04 -0500, olcott wrote:

    Diagonalization only arises when one assumes that a Turing
    machine decider must report on its own behavior instead of the
    behavior specified by its machine description.

    Everyone assumes that these must always be the same.
    That assumption is proven to be incorrect.

    When one assumes a halt decider based on a UTM then the
    simulated input remains stuck in recursive simulation never >>>>>>>>>>> reaching simulated states ⟨Ĥ.∞⟩ or ⟨Ĥ.qn⟩.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞ Ĥ.q0 ⟨Ĥ⟩
    ⊢*
    Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    ⊢* Ĥ.qn

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    on and on never reaching any simulated final state of
    ⟨Ĥ.qn⟩

    When embedded_H aborts its simulation and transitions to Ĥ.qn >>>>>>>>>>> on the basis that its simulated input cannot possibly reach >>>>>>>>>>> its own simulated final halt state of ⟨Ĥ.qn⟩ embedded_H is >>>>>>>>>>> correct.

    This causes embedded_H itself to halt, thus contradicting its >>>>>>>>>>> result *only if a Turing machine decider can be applied to its >>>>>>>>>>> actual self*
    and not merely its own machine description.
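    The regress in steps (a)-(f) can be sketched as mutually recursive
    Python functions; the nesting only bottoms out if the embedded decider
    enforces some budget, which is exactly the abort under dispute. The
    depth limit here is an illustrative stand-in, not part of the Linz
    construction.

```python
def embedded_H(desc, inp, depth, max_depth):
    """Simulate the described machine on inp, up to a nesting budget."""
    if depth >= max_depth:
        return "aborted"          # the disputed abort: give up on the regress
    # steps (c)/(f): "simulate" the described machine on its input
    return desc(inp, depth + 1, max_depth)

def H_hat(desc, depth=0, max_depth=5):
    # steps (a)-(b): copy the description and hand both copies to embedded_H
    return embedded_H(desc, desc, depth, max_depth)

# Ĥ applied to its own description: every level of simulation re-creates
# the same configuration one level deeper, so only the budget ends it.
print(H_hat(H_hat))   # aborted
```

    Without the max_depth check this recursion would never return, which
    is the point both sides agree on; the dispute is over what the abort
    licenses the decider to conclude.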

    Your Ĥ is not a halt decider as defined by the Halting Problem
    so has nothing to do with the Halting Problem.

    /Flibble

    You have this part incorrectly. Ask Richard because of what he
    explained to you the other night he may correct you on this.

    No, your halt decider is a partial decider, Halting Problem
    deciders are total not partial.

    /Flibble

    Not exactly. The HP proofs attempt to prove that no total halt
    decider exists on the basis that one self-referential input cannot
    be decided by any decider including partial deciders.

    Wrong. Partial deciders have nothing to do with the Halting
    Problem.


    The technical term "decider" does not mean its conventional
    meaning of one who decides. It means an infallible Turing machine
    that always decides correctly. Since this is too misleading for
    most people I used "termination analyzer".

    Halting deciders and termination analyzers are different things and
    you do not get to redefine terms to suit your bogus argument.

    /Flibble

    I am using the term: "termination analyzer" that is not misleading
    at all in place of the clumsy and confusing term "partial halt
    decider".

    Whilst the two terms are interchangeable it doesn't alter the fact
    that neither term is related to the Halting Problem which is only
    concerned with *TOTAL* HALT DECIDERS.

    /Flibble

    You keep missing a subtle nuance.
    The HP presumes that it proves that no total halt decider exists on
    the basis that it believes that it has found an input that no (total
    or partial) decider can possibly analyze correctly. Try asking any of
    the chatbots.

    I am not missing any nuance, you are.

    Even though no total halt decider exists (as proven by the Halting
    Problem proofs) it is the case that the Halting Problem is only
    concerned with total halt deciders NOT partial halt deciders (aka
    termination analyzers).

    /Flibble

    I may be incorrect yet I no longer believe that you are sincere about
    this.

    What you believe is mostly irrelevant to others including me.

    /Flibble
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Mon Aug 4 21:16:44 2025
    From Newsgroup: comp.ai.philosophy

    On 8/4/25 2:29 PM, olcott wrote:
    Diagonalization only arises when one assumes that a
    Turing machine decider must report on its own behavior
    instead of the behavior specified by its machine description.

    But, if it IS a halt decider, and has been given a description of
    itself, then by the DEFINITION of the problem, it needs to report
    on the behavior of the program the input describes, which is itself.


    Everyone assumes that these must always be the same.
    That assumption is proven to be incorrect.

    Because that is the DEFINITION.


    When one assumes a halt decider based on a UTM then
    the simulated input remains stuck in recursive simulation
    never reaching simulated states ⟨Ĥ.∞⟩ or ⟨Ĥ.qn⟩.

    It CAN'T BE, as they are contradictory.

    A UTM given the representation of a non-halting program must run forever.

    A Halt Decider isn't allowed to do that.

    The Halt Decider might start with the code of a UTM, but it has been
    modified to no longer BE a UTM, so its simulation isn't the
    simulation of a UTM.


    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    on and on never reaching any simulated final state of ⟨Ĥ.qn⟩

    And if that *IS* the behavior of H, it fails to be a halt decider,
    and neither H nor embedded_H is ALLOWED to abort, as you just said
    it doesn't abort.

    That, or you just lied about the actual simulation that is being done.

    If embedded_H will abort its simulation, then the chain doesn't go on
    forever, but the embedded_H started in (c) *WILL* abort its simulation
    one cycle after this simulation was aborted by H.

    Remember, the behavior of the input is the unmodified behavior (not
    aborted) of that input, and thus DOES continue past the abort; it is
    just that the simulator doesn't know what happens after it stopped.


    When embedded_H aborts its simulation and transitions to Ĥ.qn
    on the basis that its simulated input cannot possibly reach its own
    simulated final halt state of ⟨Ĥ.qn⟩ embedded_H is correct.

    But a correct simulation of that input *WILL* reach a final state,
    because it will simulate that additional loop.

    Your problem is you confuse the behavior of the simulation, with the
    behavior of the input being simulated. The first gets aborted and stops,
    while the second continues to the final state, as it is defined to be
    the behavior if it wasn't aborted, so you need to ignore the aborting
    that was done.
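    The distinction drawn here between the behavior of the simulation and
    the behavior of the simulated input can be demonstrated with a toy
    step-limited simulator (illustrative code, not any poster's actual
    implementation): aborting changes what the simulator observes, not
    what the program does.

```python
def simulate(program, budget):
    """Run program for at most budget steps.

    Returns True if a halt was actually observed, False if the
    simulation was aborted. False only means "gave up"; a decider
    that reports it as "non-halting" conflates the two.
    """
    steps = 0
    for _ in program():          # program modeled as a generator of steps
        steps += 1
        if steps >= budget:
            return False         # aborted: the *simulation* stops here
    return True

def halts_after_100():
    for _ in range(100):
        yield

# The aborting simulator gives up and would report non-halting...
print(simulate(halts_after_100, budget=10))     # False
# ...but the input's actual (unaborted) behavior is to halt.
print(simulate(halts_after_100, budget=10**6))  # True
```

    The same program, unchanged, is classified both ways depending only
    on the simulator's budget, which is why "my simulation never saw a
    halt" does not establish "the input never halts".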


    This causes embedded_H itself to halt, thus contradicting its result
    *only if a Turing machine decider can be applied to its actual self*
    and not merely its own machine description.


    Nope, it just shows you are using a wrong definition of "machine
    description". It seems you think that aborting your simulation of it
    changes its behavior, which is just the opposite of what you just
    claimed as your reason for being able to abort it.

    All you are doing is showing you are just a stupid liar.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Mon Aug 4 21:18:14 2025
    From Newsgroup: comp.ai.philosophy

    On 8/4/25 6:42 PM, olcott wrote:
    On 8/4/2025 5:34 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 13:29:04 -0500, olcott wrote:

    Diagonalization only arises when one assumes that a Turing machine
    decider must report on its own behavior instead of the behavior
    specified by its machine description.

    Everyone assumes that these must always be the same.
    That assumption is proven to be incorrect.

    When one assumes a halt decider based on a UTM then the simulated input
    remains stuck in recursive simulation never reaching simulated states
    ⟨Ĥ.∞⟩ or ⟨Ĥ.qn⟩.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    on and on never reaching any simulated final state of ⟨Ĥ.qn⟩

    When embedded_H aborts its simulation and transitions to Ĥ.qn on the
    basis that its simulated input cannot possibly reach its own simulated
    final halt state of ⟨Ĥ.qn⟩ embedded_H is correct.

    This causes embedded_H itself to halt, thus contradicting its result
    *only if a Turing machine decider can be applied to its actual self*
    and not merely its own machine description.

    Your Ĥ is not a halt decider as defined by the Halting Problem so has
    nothing to do with the Halting Problem.

    /Flibble

    You have this part incorrectly. Ask Richard because
    of what he explained to you the other night he may
    correct you on this.


    Ĥ isn't a halt decider, H is. Ĥ is just the pathological machine given
    as its input.

    You are just confusing yourself with your lies.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Mon Aug 4 20:25:47 2025
    From Newsgroup: comp.ai.philosophy

    On 8/4/2025 8:18 PM, Richard Damon wrote:
    On 8/4/25 6:42 PM, olcott wrote:
    On 8/4/2025 5:34 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 13:29:04 -0500, olcott wrote:

    Diagonalization only arises when one assumes that a Turing machine
    decider must report on its own behavior instead of the behavior
    specified by its machine description.

    Everyone assumes that these must always be the same.
    That assumption is proven to be incorrect.

    When one assumes a halt decider based on a UTM then the simulated input
    remains stuck in recursive simulation never reaching simulated states
    ⟨Ĥ.∞⟩ or ⟨Ĥ.qn⟩.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    on and on never reaching any simulated final state of ⟨Ĥ.qn⟩

    When embedded_H aborts its simulation and transitions to Ĥ.qn on the
    basis that its simulated input cannot possibly reach its own simulated
    final halt state of ⟨Ĥ.qn⟩ embedded_H is correct.

    This causes embedded_H itself to halt, thus contradicting its result
    *only if a Turing machine decider can be applied to its actual self*
    and not merely its own machine description.

    Your Ĥ is not a halt decider as defined by the Halting Problem so has
    nothing to do with the Halting Problem.

    /Flibble

    You have this part incorrectly. Ask Richard because
    of what he explained to you the other night he may
    correct you on this.


    Ĥ isn't a halt decider, H is.

    That is quite a bit less than perfectly
    accurate; see if you can do better.
    H is a what?
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Mon Aug 4 21:36:19 2025
    From Newsgroup: comp.ai.philosophy

    On 8/4/25 9:25 PM, olcott wrote:
    On 8/4/2025 8:18 PM, Richard Damon wrote:
    On 8/4/25 6:42 PM, olcott wrote:
    On 8/4/2025 5:34 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 13:29:04 -0500, olcott wrote:

    Diagonalization only arises when one assumes that a Turing machine
    decider must report on its own behavior instead of the behavior
    specified by its machine description.

    Everyone assumes that these must always be the same.
    That assumption is proven to be incorrect.

    When one assumes a halt decider based on a UTM then the simulated
    input remains stuck in recursive simulation never reaching simulated
    states ⟨Ĥ.∞⟩ or ⟨Ĥ.qn⟩.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    on and on never reaching any simulated final state of ⟨Ĥ.qn⟩

    When embedded_H aborts its simulation and transitions to Ĥ.qn on the
    basis that its simulated input cannot possibly reach its own simulated
    final halt state of ⟨Ĥ.qn⟩ embedded_H is correct.

    This causes embedded_H itself to halt, thus contradicting its result
    *only if a Turing machine decider can be applied to its actual self*
    and not merely its own machine description.

    Your Ĥ is not a halt decider as defined by the Halting Problem so has
    nothing to do with the Halting Problem.

    /Flibble

    You have this part incorrectly. Ask Richard because
    of what he explained to you the other night he may
    correct you on this.


    Ĥ isn't a halt decider, H is.

    That is quite a bit less than perfectly
    accurate see if you can do better.
    H is a what?



    H is supposed to be a Halt Decider.

    Which means its input is a representation/description of a program.

    And the "behavior" of that input, is the behavior of the program its
    input describes.

    What is so hard about that?

    Just like Peter Olcott isn't smart, Richard is.

    The English grammar connects both the isn't and is to the same attribute.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Mon Aug 4 20:41:43 2025
    From Newsgroup: comp.ai.philosophy

    On 8/4/2025 8:36 PM, Richard Damon wrote:
    On 8/4/25 9:25 PM, olcott wrote:
    On 8/4/2025 8:18 PM, Richard Damon wrote:
    On 8/4/25 6:42 PM, olcott wrote:
    On 8/4/2025 5:34 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 13:29:04 -0500, olcott wrote:

    Diagonalization only arises when one assumes that a Turing machine
    decider must report on its own behavior instead of the behavior
    specified by its machine description.

    Everyone assumes that these must always be the same.
    That assumption is proven to be incorrect.

    When one assumes a halt decider based on a UTM then the simulated
    input remains stuck in recursive simulation never reaching simulated
    states ⟨Ĥ.∞⟩ or ⟨Ĥ.qn⟩.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    on and on never reaching any simulated final state of ⟨Ĥ.qn⟩

    When embedded_H aborts its simulation and transitions to Ĥ.qn on the
    basis that its simulated input cannot possibly reach its own simulated
    final halt state of ⟨Ĥ.qn⟩ embedded_H is correct.

    This causes embedded_H itself to halt, thus contradicting its result
    *only if a Turing machine decider can be applied to its actual self*
    and not merely its own machine description.

    Your Ĥ is not a halt decider as defined by the Halting Problem so has
    nothing to do with the Halting Problem.

    /Flibble

    You have this part incorrectly. Ask Richard because
    of what he explained to you the other night he may
    correct you on this.


    Ĥ isn't a halt decider, H is.

    That is quite a bit less than perfectly
    accurate see if you can do better.
    H is a what?



    H is supposed to be a Halt Decider.


    More precisely, H is hypothesized to be a halt decider.
    If H actually were a halt decider
    (as you initially stated) then the Halting Problem
    proof would be over before it began.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Mon Aug 4 21:55:50 2025
    From Newsgroup: comp.ai.philosophy

    On 8/4/25 9:41 PM, olcott wrote:
    On 8/4/2025 8:36 PM, Richard Damon wrote:
    On 8/4/25 9:25 PM, olcott wrote:
    On 8/4/2025 8:18 PM, Richard Damon wrote:
    On 8/4/25 6:42 PM, olcott wrote:
    On 8/4/2025 5:34 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 13:29:04 -0500, olcott wrote:

    Diagonalization only arises when one assumes that a Turing machine
    decider must report on its own behavior instead of the behavior
    specified by its machine description.

    Everyone assumes that these must always be the same.
    That assumption is proven to be incorrect.

    When one assumes a halt decider based on a UTM then the simulated
    input remains stuck in recursive simulation never reaching simulated
    states ⟨Ĥ.∞⟩ or ⟨Ĥ.qn⟩.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    on and on never reaching any simulated final state of ⟨Ĥ.qn⟩

    When embedded_H aborts its simulation and transitions to Ĥ.qn on the
    basis that its simulated input cannot possibly reach its own simulated
    final halt state of ⟨Ĥ.qn⟩ embedded_H is correct.

    This causes embedded_H itself to halt, thus contradicting its result
    *only if a Turing machine decider can be applied to its actual self*
    and not merely its own machine description.

    Your Ĥ is not a halt decider as defined by the Halting Problem so has
    nothing to do with the Halting Problem.

    /Flibble

    You have this part incorrectly. Ask Richard because
    of what he explained to you the other night he may
    correct you on this.


    Ĥ isn't a halt decider, H is.

    That is quite a bit less than perfectly
    accurate see if you can do better.
    H is a what?



    H is supposed to be a Halt Decider.


    More precisely H is hypothesized to be a halt decider.
    If H actually was an actual halt decider
    (as you initially stated) then the Halting Problem
    proof would be over before it began.


    No, H is a hypothetical Halt Decider, or a claimed halt decider,
    depending on which method of proof you are using.

    Defining H to be a Halt Decider is what Linz uses, and then shows that
    this leads to a contradiction, and thus something newly defined must
    have been in error; the only thing that could have been in error
    was defining H to be a Halt Decider, and thus it couldn't be one.

    Your problem is you just don't understand the concept of Proof by
    Contradiction, which isn't that bad, as it is a somewhat confusing
    method.

    The method I prefer, but which isn't the Linz proof, is to talk about
    it as a claimed halt decider, and show that it is necessarily wrong,
    as that is less logically jarring than finding out something taken as
    a fact is just non-existent.

    What you can't do is claim H is correct, and not conform to the
    requirements of a Halt Decider.

    With the "claimed" method, the conditions become not "what it does" but
    what it needs to do for H to be correct.

    Thus for H ⟨Ĥ⟩ ⟨Ĥ⟩ to go to qn and be correct, Ĥ ⟨Ĥ⟩ must not
    halt; but if H goes to qn, so does Ĥ, which then halts, so H is just
    wrong going to qn, just as it is wrong going to qy, as that requires
    Ĥ ⟨Ĥ⟩ to halt, but it runs into an infinite loop.

    Note, the conditions are on *H* to be correct, not Ĥ.

    Ĥ is "correct" if it makes H wrong, so Ĥ turns out to always be correct,
    or it did its job.
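    The proof-by-contradiction structure described above can be sketched
    for any claimed total halt decider H: the diagonal machine D does the
    opposite of whatever H predicts about D applied to itself. The H
    passed in below is a placeholder; the contradiction goes through no
    matter what verdict it returns.

```python
def make_D(H):
    """Build the diagonal program from a claimed halt decider H."""
    def D(p):
        if H(p, p):          # H claims p(p) halts...
            while True:      # ...so D loops forever instead
                pass
        return               # H claims p(p) loops, so D halts at once
    return D

def H_is_wrong_about(H):
    """Show H's verdict on (D, D) contradicts D's actual behavior."""
    D = make_D(H)
    if H(D, D):
        # H claims D(D) halts, but by construction D(D) would loop
        # forever (we do not run it here).
        return True
    # H claims D(D) loops; running it shows it halts immediately.
    D(D)
    return True

# Whatever fixed verdict a claimed H gives, it is wrong on the diagonal:
assert H_is_wrong_about(lambda p, i: True)
assert H_is_wrong_about(lambda p, i: False)
```

    This mirrors the point about requirements: the conditions for
    correctness fall on H, while D exists only to falsify H's answer.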
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Mon Aug 4 21:02:29 2025
    From Newsgroup: comp.ai.philosophy

    On 8/4/2025 8:55 PM, Richard Damon wrote:
    On 8/4/25 9:41 PM, olcott wrote:
    On 8/4/2025 8:36 PM, Richard Damon wrote:
    On 8/4/25 9:25 PM, olcott wrote:
    On 8/4/2025 8:18 PM, Richard Damon wrote:
    On 8/4/25 6:42 PM, olcott wrote:
    On 8/4/2025 5:34 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 13:29:04 -0500, olcott wrote:

    Diagonalization only arises when one assumes that a Turing machine
    decider must report on its own behavior instead of the behavior
    specified by its machine description.

    Everyone assumes that these must always be the same.
    That assumption is proven to be incorrect.

    When one assumes a halt decider based on a UTM then the simulated
    input remains stuck in recursive simulation never reaching simulated
    states ⟨Ĥ.∞⟩ or ⟨Ĥ.qn⟩.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    on and on never reaching any simulated final state of ⟨Ĥ.qn⟩

    When embedded_H aborts its simulation and transitions to Ĥ.qn on the
    basis that its simulated input cannot possibly reach its own simulated
    final halt state of ⟨Ĥ.qn⟩ embedded_H is correct.

    This causes embedded_H itself to halt, thus contradicting its result
    *only if a Turing machine decider can be applied to its actual self*
    and not merely its own machine description.

    Your Ĥ is not a halt decider as defined by the Halting Problem so has
    nothing to do with the Halting Problem.

    /Flibble

    You have this part incorrectly. Ask Richard because
    of what he explained to you the other night he may
    correct you on this.


    Ĥ isn't a halt decider, H is.

    That is quite a bit less than perfectly
    accurate see if you can do better.
    H is a what?



    H is supposed to be a Halt Decider.


    More precisely H is hypothesized to be a halt decider.
    If H actually was an actual halt decider
    (as you initially stated) then the Halting Problem
    proof would be over before it began.


    No, H is a hypothetical Halt Decider, or a claimed halt decider,
    depending on which method of proof you are using.


    If you pay close attention you will notice
    that is what I said in my first line above:
    "More precisely H is hypothesized to be a halt decider"

    It was you that initially said:
    "Ĥ isn't a halt decider, H is."
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Tue Aug 5 06:57:09 2025
    From Newsgroup: comp.ai.philosophy

    On 8/4/25 10:02 PM, olcott wrote:
    On 8/4/2025 8:55 PM, Richard Damon wrote:
    On 8/4/25 9:41 PM, olcott wrote:
    On 8/4/2025 8:36 PM, Richard Damon wrote:
    On 8/4/25 9:25 PM, olcott wrote:
    On 8/4/2025 8:18 PM, Richard Damon wrote:
    On 8/4/25 6:42 PM, olcott wrote:
    On 8/4/2025 5:34 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 13:29:04 -0500, olcott wrote:

    Diagonalization only arises when one assumes that a Turing machine
    decider must report on its own behavior instead of the behavior
    specified by its machine description.

    Everyone assumes that these must always be the same.
    That assumption is proven to be incorrect.

    When one assumes a halt decider based on a UTM then the simulated
    input remains stuck in recursive simulation never reaching simulated
    states ⟨Ĥ.∞⟩ or ⟨Ĥ.qn⟩.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    on and on never reaching any simulated final state of ⟨Ĥ.qn⟩

    When embedded_H aborts its simulation and transitions to Ĥ.qn on the
    basis that its simulated input cannot possibly reach its own simulated
    final halt state of ⟨Ĥ.qn⟩ embedded_H is correct.

    This causes embedded_H itself to halt, thus contradicting its result
    *only if a Turing machine decider can be applied to its actual self*
    and not merely its own machine description.

    Your Ĥ is not a halt decider as defined by the Halting Problem so has
    nothing to do with the Halting Problem.

    /Flibble

    You have this part incorrectly. Ask Richard because
    of what he explained to you the other night he may
    correct you on this.


    Ĥ isn't a halt decider, H is.

    That is quite a bit less than perfectly
    accurate see if you can do better.
    H is a what?



    H is supposed to be a Halt Decider.


    More precisely H is hypothesized to be a halt decider.
    If H actually was an actual halt decider
    (as you initially stated) then the Halting Problem
    proof would be over before it began.


    No, H is a hypothetical Halt Decider, or a claimed halt decider,
    depending on which method of proof you are using.


    If you pay close attention you will notice
    that is what I said in my first line above:
    "More precisely H is hypothesized to be a halt decider"

    Which means, such an H must behave exactly like a Halt Decider for
    our hypothesis to be possible.


    It was you that initially said:
    "Ĥ isn't a halt decider, H is."

    And H^ isn't a Halt Decider, PERIOD.

    It never was given the requirements to act like one.

    It has a halt decider inside it that it used, but that doesn't make
    it one itself.

    Unless you consider yourself just a piece of shit, because you have one
    inside you.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Tue Aug 5 11:10:57 2025
    From Newsgroup: comp.ai.philosophy

    On 8/5/2025 5:57 AM, Richard Damon wrote:
    On 8/4/25 10:02 PM, olcott wrote:
    On 8/4/2025 8:55 PM, Richard Damon wrote:
    On 8/4/25 9:41 PM, olcott wrote:

    More precisely H is hypothesized to be a halt decider.
    If H actually was an actual halt decider
    (as you initially stated) then the Halting Problem
    proof would be over before it began.


    No, H is a hypothetical Halt Decider, or a claimed halt decider,
    depending on which method of proof you are using.


    If you pay close attention you will notice
    that is what I said in my first line above:
    "More precisely H is hypothesized to be a halt decider"

    Which means, such an H must behave exactly like a Halt Decider for our hypothesis to be possible.


    Yes


    It was you that initially said:
    "Ĥ isn't a halt decider, H is."

    And H^ isn't a Halt Decider, PERIOD.

    It never was given the requirements to act like one.

    It has a halt decider inside it,

    Not exactly. It has the hypothetical halt decider H
    embedded within it.

    that it used, but that doesn't make it
    one itself.

    Unless you consider yourself just a piece of shit, because you have one inside you.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Tue Aug 5 18:35:00 2025
    From Newsgroup: comp.ai.philosophy

    On 8/5/25 12:10 PM, olcott wrote:
    On 8/5/2025 5:57 AM, Richard Damon wrote:
    On 8/4/25 10:02 PM, olcott wrote:
    On 8/4/2025 8:55 PM, Richard Damon wrote:
    On 8/4/25 9:41 PM, olcott wrote:

    More precisely H is hypothesized to be a halt decider.
    If H actually was an actual halt decider
    (as you initially stated) then the Halting Problem
    proof would be over before it began.


    No, H is a hypothetical Halt Decider, or a claimed halt decider,
    depending on which method of proof you are using.


    If you pay close attention you will notice
    that is what I said in my first line above:
    "More precisely H is hypothesized to be a halt decider"

    Which means, such an H must behave exactly like a Halt Decider for our
    hypothesis to be possible.


    Yes


    It was you that initially said:
    "Ĥ isn't a halt decider, H is."

    And H^ isn't a Halt Decider, PERIOD.

    It never was given the requirements to act like one.

    It has a halt decider inside it,

    Not exactly. It has the hypothetical halt decider H
    embedded within it.

    No, it has *H* embedded in it, not something else.

    That H is only a hypothetical Halt Decider, but it is EXACTLY that H
    that is embedded in it.

    And thus since that H returns 0, it halts.


    that it used, but that doesn't make it one itself.

    Unless you consider yourself just a piece of shit, because you have
    one inside you.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Tue Aug 5 19:40:21 2025
    From Newsgroup: comp.ai.philosophy

    On 8/5/2025 5:35 PM, Richard Damon wrote:
    On 8/5/25 12:10 PM, olcott wrote:
    On 8/5/2025 5:57 AM, Richard Damon wrote:
    On 8/4/25 10:02 PM, olcott wrote:
    On 8/4/2025 8:55 PM, Richard Damon wrote:
    On 8/4/25 9:41 PM, olcott wrote:

    More precisely H is hypothesized to be a halt decider.
    If H actually was an actual halt decider
    (as you initially stated) then the Halting Problem
    proof would be over before it began.


    No, H is a hypothetical Halt Decider, or a claimed halt decider,
    depending on which method of proof you are using.


    If you pay close attention you will notice
    that is what I said in my first line above:
    "More precisely H is hypothesized to be a halt decider"

    Which means, such an H must behave exactly like a Halt Decider for
    our hypothesis to be possible.


    Yes


    It was you that initially said:
    "Ĥ isn't a halt decider, H is."

    And H^ isn't a Halt Decider, PERIOD.

    It never was given the requirements to act like one.

    It has a halt decider inside it,

    Not exactly. It has the hypothetical halt decider H
    embedded within it.

    No, it has *H* embedded in it, not something else.


    Yes that is what I just said.

    That H is only a hypothetical Halt Decider, but it is EXACTLY that H
    that is embedded in it.


    Yes it is. You initially called it a halt decider.

    And thus since that H returns 0, it halts.


    No, it transitions to its own internal state of Ĥ.qn.

    When we construe that embedded_H as a halt decider with
    only ⟨Ĥ⟩ ⟨Ĥ⟩ as its entire domain, and that embedded_H
    simulates its input a finite number of steps before
    transitioning to Ĥ.qn, then that embedded_H did not
    correctly predict its own behavior.


    that it used, but that doesn't make it one itself.

    Unless you consider yourself just a piece of shit, because you have
    one inside you.



    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Tue Aug 5 20:03:50 2025
    From Newsgroup: comp.ai.philosophy

    On 8/5/2025 5:57 AM, Richard Damon wrote:
    On 8/4/25 10:02 PM, olcott wrote:
    On 8/4/2025 8:55 PM, Richard Damon wrote:
    On 8/4/25 9:41 PM, olcott wrote:
    On 8/4/2025 8:36 PM, Richard Damon wrote:
    On 8/4/25 9:25 PM, olcott wrote:
    On 8/4/2025 8:18 PM, Richard Damon wrote:
    On 8/4/25 6:42 PM, olcott wrote:
    On 8/4/2025 5:34 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 13:29:04 -0500, olcott wrote:

    Diagonalization only arises when one assumes that a Turing machine
    decider must report on its own behavior instead of the behavior
    specified by its machine description.

    Everyone assumes that these must always be the same.
    That assumption is proven to be incorrect.

    When one assumes a halt decider based on a UTM then the simulated input
    remains stuck in recursive simulation never reaching simulated states
    ⟨Ĥ.∞⟩ or ⟨Ĥ.qn⟩.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    on and on never reaching any simulated final state of ⟨Ĥ.qn⟩
    When embedded_H aborts its simulation and transitions to Ĥ.qn
    on the basis that its simulated input cannot possibly reach its own
    simulated final halt state of ⟨Ĥ.qn⟩, embedded_H is correct.

    This causes embedded_H itself to halt, thus contradicting its result
    *only if a Turing machine decider can be applied to its actual self*
    and not merely its own machine description.
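    The steps (a)-(f) above can be modeled as a short sketch. This is only a hypothetical Python model, not anything from Linz: the names H_hat and embedded_H and the choice of STEP_LIMIT are my own illustrative assumptions for a step-limited simulator that gives up after finitely many nested levels.

    ```python
    # Hypothetical sketch of the recursive-simulation pattern in steps
    # (a)-(f). H_hat, embedded_H, and STEP_LIMIT are modeling assumptions,
    # not definitions from Linz.

    STEP_LIMIT = 5  # embedded_H only ever simulates finitely many steps

    def embedded_H(desc, inp, depth=0):
        """Step-limited simulator: report 'non-halting' when the simulated
        input has not reached a final state within STEP_LIMIT levels."""
        if depth >= STEP_LIMIT:
            return "non-halting"           # abort and transition to Ĥ.qn
        return desc(desc, inp, depth + 1)  # simulate one more nested level

    def H_hat(desc, inp, depth=0):
        """Ĥ: copy the input and invoke embedded_H on the pair."""
        return embedded_H(desc, inp, depth)

    verdict = H_hat(H_hat, H_hat)
    print(verdict)  # "non-halting" -- yet H_hat itself just halted
    ```

    Note that this sketch exhibits exactly the tension the thread argues over: the nested simulation never reaches a final state within the limit, yet the outermost call returns, so the machine containing the decider halts.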

    Your Ĥ is not a halt decider as defined by the Halting Problem
    so has nothing to do with the Halting Problem.

    /Flibble

    You have this part incorrect. Ask Richard; because
    of what he explained to you the other night, he may
    correct you on this.


    Ĥ isn't a halt decider, H is.

    That is quite a bit less than perfectly
    accurate; see if you can do better.
    H is a what?



    H is supposed to be a Halt Decider.


    More precisely H is hypothesized to be a halt decider.
    If H actually was an actual halt decider
    (as you initially stated) then the Halting Problem
    proof would be over before it began.


    No, H is a hypothetical Halt Decider, or a claimed halt decider,
    depending on which method of proof you are using.


    If you pay close attention you will notice
    that is what I said in my first line above:
    "More precisely H is hypothesized to be a halt decider"

    Which means, such an H must behave exactly like a Halt Decider for our hypothesis to be possible.


    It was you that initially said:
    "Ĥ isn't a halt decider, H is."

    And H^ isn't a Halt Decider, PERIOD.

    It never was given the requirements to act like one.

    It has a halt decider inside it, that it used, but that doesn't make it
    one itself.

    Unless you consider yourself just a piece of shit, because you have one inside you.

    I am merely correcting your lack of precision.
    I have made these same lack-of-precision mistakes myself.

    You said that H is a halt decider and this is not
    precisely correct. H is a hypothetical halt decider
    that Linz believes he proved does not exist.

    Are we on the same page on this point now?
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Tue Aug 5 22:30:02 2025
    From Newsgroup: comp.ai.philosophy

    On 8/5/25 8:40 PM, olcott wrote:
    On 8/5/2025 5:35 PM, Richard Damon wrote:
    On 8/5/25 12:10 PM, olcott wrote:
    On 8/5/2025 5:57 AM, Richard Damon wrote:
    On 8/4/25 10:02 PM, olcott wrote:
    On 8/4/2025 8:55 PM, Richard Damon wrote:
    On 8/4/25 9:41 PM, olcott wrote:

    More precisely H is hypothesized to be a halt decider.
    If H actually was an actual halt decider
    (as you initially stated) then the Halting Problem
    proof would be over before it began.


    No, H is a hypothetical Halt Decider, or a claimed halt decider,
    depending on which method of proof you are using.


    If you pay close attention you will notice
    that is what I said in my first line above:
    "More precisely H is hypothesized to be a halt decider"

    Which means, such an H must behave exactly like a Halt Decider for
    our hypothesis to be possible.


    Yes


    It was you that initially said:
    "Ĥ isn't a halt decider, H is."

    And H^ isn't a Halt Decider, PERIOD.

    It never was given the requirements to act like one.

    It has a halt decider inside it,

    Not exactly. It has the hypothetical halt decider H
    embedded within it.

    No, it has *H* embedded in it, not something else.


    Yes that is what I just said.

    Ok, so then why do you think the H that H^ calls will act differently,
    for the same input, from the H that is trying to make the decision on the outside?


    That H is only a hypothetical Halt Decider, but it is EXACTLY that H
    that is embedded in it.


    Yes it is. You initially called it a halt decider.

    Because in Linz, it is. H is the name of the presumed halt decider,
    which is proven in the end not to exist.

    It seems you don't understand that logic.


    And thus since that H returns 0, it halts.


    No, it transitions to its own internal state of Ĥ.qn

    Which in your model is returning 0


    When we construe that embedded_H as a halt decider with
    only ⟨Ĥ⟩ ⟨Ĥ⟩ as its entire domain, and that embedded_H
    simulates its input a finite number of steps before
    transitioning to Ĥ.qn, then that embedded_H did not
    correctly predict its own behavior.

    But that is a contradiction in terms.

    embedded_H needs to be EXACTLY the same thing as H, except for the
    declaration that Qy is a final state.

    If you agree that embedded_H failed to be even that limited halt
    decider, then so does H, as they are the same thing.

    It seems you don't understand that universal fact of programs in this
    field, that all copies of a given algorithm, when given the same input,
    will give the same answer, and if they are supposed to be computing a
    specific mapping, that answer must match that mapping.

    To claim that H gave the right answer for that input is also to claim
    that embedded_H, in doing the exact same thing, gave the right answer for
    that input; but since that doesn't match the criteria it needed to
    follow, it must not be right, or just doesn't exist.




    that it used, but that doesn't make it one itself.

    Unless you consider yourself just a piece of shit, because you have
    one inside you.






    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,comp.ai.philosophy on Tue Aug 5 22:40:31 2025
    From Newsgroup: comp.ai.philosophy

    On 8/5/25 9:03 PM, olcott wrote:
    On 8/5/2025 5:57 AM, Richard Damon wrote:
    On 8/4/25 10:02 PM, olcott wrote:
    On 8/4/2025 8:55 PM, Richard Damon wrote:
    On 8/4/25 9:41 PM, olcott wrote:
    On 8/4/2025 8:36 PM, Richard Damon wrote:
    On 8/4/25 9:25 PM, olcott wrote:
    On 8/4/2025 8:18 PM, Richard Damon wrote:
    On 8/4/25 6:42 PM, olcott wrote:
    On 8/4/2025 5:34 PM, Mr Flibble wrote:
    On Mon, 04 Aug 2025 13:29:04 -0500, olcott wrote:

    Diagonalization only arises when one assumes that a Turing machine
    decider must report on its own behavior instead of the behavior
    specified by its machine description.

    Everyone assumes that these must always be the same.
    That assumption is proven to be incorrect.

    When one assumes a halt decider based on a UTM then the simulated input
    remains stuck in recursive simulation never reaching simulated states
    ⟨Ĥ.∞⟩ or ⟨Ĥ.qn⟩.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    (d) simulated ⟨Ĥ⟩ copies its input ⟨Ĥ⟩
    (e) simulated ⟨Ĥ⟩ invokes simulated embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (f) simulated embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    on and on never reaching any simulated final state of ⟨Ĥ.qn⟩
    When embedded_H aborts its simulation and transitions to Ĥ.qn
    on the basis that its simulated input cannot possibly reach its own
    simulated final halt state of ⟨Ĥ.qn⟩, embedded_H is correct.

    This causes embedded_H itself to halt, thus contradicting its result
    *only if a Turing machine decider can be applied to its actual self*
    and not merely its own machine description.

    Your Ĥ is not a halt decider as defined by the Halting Problem
    so has nothing to do with the Halting Problem.

    /Flibble

    You have this part incorrect. Ask Richard; because
    of what he explained to you the other night, he may
    correct you on this.


    Ĥ isn't a halt decider, H is.

    That is quite a bit less than perfectly
    accurate; see if you can do better.
    H is a what?



    H is supposed to be a Halt Decider.


    More precisely H is hypothesized to be a halt decider.
    If H actually was an actual halt decider
    (as you initially stated) then the Halting Problem
    proof would be over before it began.


    No, H is a hypothetical Halt Decider, or a claimed halt decider,
    depending on which method of proof you are using.


    If you pay close attention you will notice
    that is what I said in my first line above:
    "More precisely H is hypothesized to be a halt decider"

    Which means, such an H must behave exactly like a Halt Decider for our
    hypothesis to be possible.


    It was you that initially said:
    "Ĥ isn't a halt decider, H is."

    And H^ isn't a Halt Decider, PERIOD.

    It never was given the requirements to act like one.

    It has a halt decider inside it, that it used, but that doesn't make
    it one itself.

    Unless you consider yourself just a piece of shit, because you have
    one inside you.

    I am merely correcting your lack of precision.
    I have made these same lack-of-precision mistakes myself.

    So you consider yourself a piece of shit?


    You said that H is a halt decider and this is not
    precisely correct. H is a hypothetical halt decider
    that Linz believes he proved does not exist.

    No, in Linz, which you quote, it *IS* a Halt Decider. Its existence has
    not been proven, just presumed, and it is shown in the end not to exist.

    But, during the proof, it isn't a machine that might be a halt decider,
    it IS a Halt Decider that we presumed to exist.

    This is a classical proof by contradiction.
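    The shape of that proof by contradiction can be sketched in a few lines. This is a hypothetical Python model; make_H_hat and the Boolean verdict encoding are my own illustrative choices, not Linz's construction itself:

    ```python
    # Sketch of the diagonal construction: given any candidate decider H,
    # build H_hat so that H's verdict about H_hat is wrong either way.

    def make_H_hat(H):
        def H_hat():
            if H(H_hat):            # H claims H_hat halts...
                while True:         # ...so H_hat loops forever (Ĥ.∞)
                    pass
            # else H claims H_hat loops, so H_hat halts (Ĥ.qn)
        return H_hat

    # If some H answers False ("loops") for H_hat, then H_hat halts:
    says_loops = lambda prog: False
    assert make_H_hat(says_loops)() is None   # H_hat returned, i.e. halted

    # Symmetrically, an H answering True would make H_hat loop forever,
    # so no single H can be right about its own diagonal machine.
    ```

    The hypothesis "H is a Halt Decider" is what the contradiction discharges: since either verdict is wrong on the diagonal input, the presumed set of halt deciders is empty.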


    Are we on the same page on this point now?


    I don't think so, because it seems this sort of logic is beyond you.

    A hypothetical Halt Decider *IS* a Halt Decider; it is just that its
    actual existence hasn't been established yet.

    It is a different thing from the other sort of proof, where you have a decider that is claimed to be a halt decider, and we need to see whether it actually is one.

    In Linz, we have a set of machines that H can be chosen from: the set of
    Halt Deciders. That set is just empty, as we find in the end.

    You don't seem to understand that you can talk about the properties of
    ALL things that are a member of a set, even if that set is empty.

    The universal quantifier does not assume existence.

    All of the moons of Earth made of Green Cheese are delicious, even if
    you don't like the taste of Green Cheese.
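    The green-cheese example is the standard vacuous-truth point: a universally quantified statement over an empty set is true. A one-line Python illustration (the list and predicate are of course made up):

    ```python
    # Vacuous truth: `all` over an empty collection is True.
    green_cheese_moons = []   # Earth has no moons made of green cheese
    assert all(moon == "delicious" for moon in green_cheese_moons)
    print(all([]))  # True
    ```

    This is exactly why one can reason about the properties of every member of the set of Halt Deciders even though that set turns out to be empty.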
    --- Synchronet 3.21a-Linux NewsLink 1.2