• Re: The halting problem as defined is a category error --- Flibble is correct

    From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Mon Jul 21 09:19:23 2025
    From Newsgroup: comp.ai.philosophy

    On 7/21/2025 3:31 AM, Fred. Zwarts wrote:
    Op 20.jul.2025 om 17:13 schreef olcott:
    On 7/20/2025 2:47 AM, Fred. Zwarts wrote:
    Op 19.jul.2025 om 17:50 schreef olcott:
    On 7/19/2025 2:50 AM, Fred. Zwarts wrote:

    No, the error in your definition has been pointed out to you many
    times.
    When the aborting HHH is simulated correctly, without disturbance,
    it reaches the final halt state.

    I could equally "point out" that all cats are dogs.
    Counter-factual statements carry no weight.

    Irrelevant.
    You cannot prove that cats are dogs, but the simulation by world-class
    simulators proves that exactly the same input specifies a halting program.



    This trivial C function is the essence of my proof
    (Entire input to the four chat bots)

    <input>
    typedef void (*ptr)();
    int HHH(ptr P);

    void DDD()
    {
       HHH(DDD);
       return;
    }

    int main()
    {
       HHH(DDD);
    }

    Termination Analyzer HHH simulates its input until
    it detects a non-terminating behavior pattern. When
    HHH detects such a pattern it aborts its simulation
    and returns 0.
    </input>
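
    A minimal sketch of the control flow that description implies. This is
    not olcott's actual HHH: SimState, sim_init, sim_step and
    looks_nonhalting are hypothetical names, stubbed here only so the
    sketch compiles, and the "pattern" test is a placeholder.

    #include <stddef.h>

    typedef void (*ptr)();

    typedef struct { ptr target; size_t steps; } SimState;

    /* placeholder stubs standing in for a real step-wise simulator */
    static void sim_init(SimState *s, ptr P) { s->target = P; s->steps = 0; }
    static int  sim_step(SimState *s)        { s->steps++; return 1; }
    static int  looks_nonhalting(const SimState *s) { return s->steps > 1000; }

    int HHH(ptr P)
    {
       SimState s;
       sim_init(&s, P);
       while (sim_step(&s))           /* would stop if the simulated P returned */
       {
          if (looks_nonhalting(&s))   /* "non-terminating behavior pattern" */
             return 0;                /* abort the simulation, report non-halting */
       }
       return 1;                      /* the simulated P reached its final state */
    }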

    No rebuttal, but repeated counter-factual claims.


    All of the chat bots figure out on their own that the input
    to HHH(DDD) is correctly rejected as non-halting.

    No, we see that the detection of non-termination is the input for the
    chat bot, not its conclusion.


    https://chatgpt.com/c/687aa48e-6144-8011-a2be-c2840f15f285
    *Below is quoted from the above link*

    This creates a recursive simulation chain:
    HHH(DDD)
       -> simulates DDD()
            -> calls HHH(DDD)
                 -> simulates DDD()
                      -> calls HHH(DDD)
                           -> ...

    Which is counter-factual, because we know that HHH aborts before this
    happens.
    *Best selling author of theory of computation textbooks*
    <MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
         If simulating halt decider H correctly simulates its
         input D until H correctly determines that its simulated D
         would never stop running unless aborted then

         H can abort its simulation of D and correctly report that D
         specifies a non-halting sequence of configurations.
    </MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>



    Irrelevant empty claim. No H can correctly simulate itself up to the
    end. Since D calls H and we know that H halts, we know that a correct
    simulation would show that H returns to D, after which D halts.
    So, D halts.
    The prerequisites 'correctly simulates' and 'correctly determines'
    cannot be true, therefore the conclusion is irrelevant. This means that
    Sipser agreed to a vacuous statement.
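
    A small self-contained illustration of the point argued here, with HHH
    replaced by a stub that only models the described outcome "HHH aborts
    its simulation and returns 0" (it is not the real HHH): if HHH(DDD)
    returns, then a directly executed DDD() also returns.

    #include <stdio.h>

    typedef void (*ptr)();

    /* stand-in only: models "HHH aborts and returns 0", nothing more */
    static int HHH(ptr P) { (void)P; return 0; }

    void DDD()
    {
       HHH(DDD);
       return;
    }

    int main()
    {
       DDD();                      /* returns, because HHH(DDD) returned */
       printf("DDD() halted\n");
       return 0;
    }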


    The correct measure of the behavior of the input to HHH(DDD)
    is DDD simulated by HHH according to the semantics of the C
    programming language.

    The behavior of the directly executed DDD() is not a correct
    measure of the behavior of the input to HHH(DDD) because the
    directly executed DDD() is not in the domain of HHH.

    Both ChatGPT and Claude.ai demonstrate the equivalent of
    complete understanding of this on the basis of their correct
    paraphrase of my reasoning.

    Although LLM systems are famous for hallucinations, we
    can see that this is not the case with their evaluation
    of my work because their reasoning is sound.

    It is a fact that Turing machine deciders cannot take
    directly executed Turing machines as inputs.

    It is a fact that the Halting Problem proofs require
    a Turing machine decider to report on the behavior
    of the direct execution of another Turing machine.

    *That right there proves an error in the proof*
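
    A sketch of the uncontested part of this: a decider's input is a finite
    description, while the question it is asked concerns the machine that
    the description denotes. The names tm_description and decides_halting
    are hypothetical, introduced only for illustration; no claim is made
    that such a total decider exists.

    typedef const char *tm_description;   /* a finite encoding <M,w>, i.e. data */

    /* Hypothetical signature only: the parameter is a description, yet
       the question asked is about the behavior of the machine that the
       description denotes. */
    int decides_halting(tm_description encoded_machine_and_input);
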
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Fred. Zwarts@F.Zwarts@HetNet.nl to comp.theory,sci.logic,comp.ai.philosophy on Tue Jul 22 11:08:11 2025
    From Newsgroup: comp.ai.philosophy

    Op 21.jul.2025 om 16:19 schreef olcott:
    On 7/21/2025 3:31 AM, Fred. Zwarts wrote:
    Op 20.jul.2025 om 17:13 schreef olcott:
    On 7/20/2025 2:47 AM, Fred. Zwarts wrote:
    Op 19.jul.2025 om 17:50 schreef olcott:
    On 7/19/2025 2:50 AM, Fred. Zwarts wrote:

    No, the error in your definition has been pointed out to you many
    times.
    When the aborting HHH is simulated correctly, without disturbance,
    it reaches the final halt state.

    I could equally "point out" that all cats are dogs.
    Counter-factual statements carry no weight.

    Irrelevant.
    You cannot prove that cats are dogs, but the simulation by world-class
    simulators proves that exactly the same input specifies a halting program.



    This trivial C function is the essence of my proof
    (Entire input to the four chat bots)

    <input>
    typedef void (*ptr)();
    int HHH(ptr P);

    void DDD()
    {
       HHH(DDD);
       return;
    }

    int main()
    {
       HHH(DDD);
    }

    Termination Analyzer HHH simulates its input until
    it detects a non-terminating behavior pattern. When
    HHH detects such a pattern it aborts its simulation
    and returns 0.
    </input>

    No rebuttal, but repeated counter-factual claims.


    All of the chat bots figure out on their own that the input
    to HHH(DDD) is correctly rejected as non-halting.

    No, we see that the detection of non-termination is the input for
    the chat bot, not its conclusion.


    https://chatgpt.com/c/687aa48e-6144-8011-a2be-c2840f15f285
    *Below is quoted from the above link*

    This creates a recursive simulation chain:
    HHH(DDD)
       -> simulates DDD()
            -> calls HHH(DDD)
                 -> simulates DDD()
                      -> calls HHH(DDD)
                           -> ...

    Which is counter-factual, because we know that HHH aborts before this
    happens.
    *Best selling author of theory of computation textbooks*
    <MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
         If simulating halt decider H correctly simulates its
         input D until H correctly determines that its simulated D
         would never stop running unless aborted then

         H can abort its simulation of D and correctly report that D
         specifies a non-halting sequence of configurations.
    </MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>



    Irrelevant empty claim. No H can correctly simulate itself up to the
    end. Since D calls H and we know that H halts, we know that a correct
    simulation would show that H returns to D, after which D halts.
    So, D halts.
    The prerequisites 'correctly simulates' and 'correctly determines'
    cannot be true, therefore the conclusion is irrelevant. This means that
    Sipser agreed to a vacuous statement.

    As usual repeated claims without any new evidence, even though many
    errors in them have been pointed out earlier.



    The correct measure of the behavior of the input to HHH(DDD)
    is DDD simulated by HHH according to the semantics of the C
    programming language.

    The behavior of the directly executed DDD() is not a correct
    measure of the behavior of the input to HHH(DDD) because the
    directly executed DDD() is not in the domain of HHH.

    The buggy HHH is not a correct measure of the behaviour specified
    in its input.

    HHH needs to report on the behaviour specified in its input. In this
    case the input specifies a DDD that calls an HHH, which aborts and
    returns, so the input specifies a halting program.
    The semantics of the C programming language allows only one behaviour,
    which is indeed seen in direct execution.
    If HHH cannot reproduce the behaviour specified in the input, it just fails.


    Both ChatGPT and Claude.ai demonstrate the equivalent of
    complete understanding of this on the basis of their correct
    paraphrase of my reasoning.

    Although LLM systems are famous for hallucinations, we
    can see that this is not the case with their evaluation
    of my work because their reasoning is sound.

    It is a fact that Turing machine deciders cannot take
    directly executed Turing machines as inputs.

    It is a fact that the Halting Problem proofs require
    a Turing machine decider to report on the behavior
    of the direct execution of another Turing machine.

    *That right there proves an error in the proof*


    It only proves that chat bots generate nonsense when fed nonsense.
    --- Synchronet 3.21a-Linux NewsLink 1.2