• Re: Four Chatbots figure out on their own without prompting that HHH(DDD)==0

    From Richard Damon@richard@damon-family.org to comp.theory,sci.logic,comp.ai.philosophy on Sun Jul 20 18:11:13 2025
    From Newsgroup: comp.ai.philosophy

    On 7/20/25 10:33 AM, olcott wrote:
    On 7/20/2025 6:11 AM, joes wrote:
    Am Sat, 19 Jul 2025 16:36:42 -0500 schrieb olcott:
    On 7/19/2025 4:26 PM, wij wrote:
    On Sat, 2025-07-19 at 16:05 -0500, olcott wrote:

    DD correctly simulated by HHH cannot reach past the "if" statement
    thus cannot reach the "return" statement.

    That is roughly what HP proof says.

    Not at all. The HP proof claims that DD correctly simulated by HHH
    reaches the self-contradictory part of DD and thus forms a
    contradiction.

    No. It proves that the direct execution reaches the part that contra-
    dicts HHH's return value.


    <ChatGPT>
    Misrepresentation of Input:
    The standard proof assumes a decider
    H(M,x) that determines whether machine
    M halts on input x.

    But this formulation is flawed, because:
    Turing machines can only process finite
    encodings (e.g. ⟨M⟩), not executable entities
    like M.

    So the valid formulation must be
    H(⟨M⟩,x), where ⟨M⟩ is a string.
    </ChatGPT>



    In other words, your explanation to ChatGPT was just in error, as the
    decider *IS* given the representation of the program M.

    Your problem is that because you don't understand how Turing Machines
    work, you "reformulated" it into a case where the program and its
    representation are blurred (as is the actual definition of "input").

    The error pointed out is actually in your setup, not in the problem, as
    the decider *IS* given

    H <M> w, where <M> is the description of the machine, which is the
    contrary Turing Machine, and w is a copy of that description.

    Thus, meeting the requirements that the AI said were needed.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@NoOne@NoWhere.com to comp.theory,sci.logic,comp.ai.philosophy on Sun Jul 20 17:57:23 2025
    From Newsgroup: comp.ai.philosophy

    On 7/20/2025 5:11 PM, Richard Damon wrote:
    On 7/20/25 10:33 AM, olcott wrote:
    On 7/20/2025 6:11 AM, joes wrote:
    Am Sat, 19 Jul 2025 16:36:42 -0500 schrieb olcott:
    On 7/19/2025 4:26 PM, wij wrote:
    On Sat, 2025-07-19 at 16:05 -0500, olcott wrote:

    DD correctly simulated by HHH cannot reach past the "if" statement
    thus cannot reach the "return" statement.

    That is roughly what HP proof says.

    Not at all. The HP proof claims that DD correctly simulated by HHH
    reaches the self-contradictory part of DD and thus forms a
    contradiction.

    No. It proves that the direct execution reaches the part that contra-
    dicts HHH's return value.


    <ChatGPT>
    Misrepresentation of Input:
    The standard proof assumes a decider
    H(M,x) that determines whether machine
    M halts on input x.

    But this formulation is flawed, because:
    Turing machines can only process finite
    encodings (e.g. ⟨M⟩), not executable entities
    like M.

    So the valid formulation must be
    H(⟨M⟩,x), where ⟨M⟩ is a string.
    </ChatGPT>



    In other words, your explanation to ChatGPT was just in error, as the
    decider *IS* given the representation of the program M.


    H(M,x) is wrong and H(⟨M⟩,x) is correct.
    You must actually pay attention or you miss
    important details.
    --
    Copyright 2024 Olcott

    "Talent hits a target no one else can hit;
    Genius hits a target no one else can see."
    Arthur Schopenhauer
  • From Richard Damon@richard@damon-family.org to comp.theory,sci.logic,comp.ai.philosophy on Sun Jul 20 18:50:37 2025
    From Newsgroup: comp.ai.philosophy

    On 7/20/25 11:18 AM, olcott wrote:
    On 7/20/2025 2:57 AM, Fred. Zwarts wrote:
    Op 19.jul.2025 om 21:19 schreef olcott:
    On 7/19/2025 12:02 PM, Richard Damon wrote:
    On 7/19/25 10:42 AM, olcott wrote:
    On 7/18/2025 3:49 AM, joes wrote:

    That is wrong. It is, as you say, very obvious that HHH cannot
    simulate DDD past the call to HHH. You just draw the wrong conclusion
    from it.
    (Aside: what "seems" to you will convince no one. You can just call
    everybody dishonest. Also, they are not "your reviewers".)


    For the purposes of this discussion this is the
    100% complete definition of HHH. It is the exact
    same one that I give to all the chat bots.

    Termination Analyzer HHH simulates its input until
    it detects a non-terminating behavior pattern. When
    HHH detects such a pattern it aborts its simulation
    and returns 0.

    So, the only HHH that meets your definition is the HHH that never
    detects the pattern and aborts, and thus never returns.


    All of the Chat bots conclude that HHH(DDD) is correct
    to reject its input as non-halting because this input
    specified recursive simulation. They figure this out
    on their own without any prompting.

    https://chatgpt.com/share/687aa4c2-b814-8011-9e7d-b85c03b291eb


    I just read a news item where an AI said that bread with shit is a
    nice dessert. So, we know what a proof by AI means.

    That would be a detectable error.

    There is no detectable error in the above link
    pertaining to the correct return value of HHH(DDD).


    Sure there is, you just don't accept it because you are just a
    pathological liar.

    The problem is that NOTHING an AI says can be trusted to be "true"
    just because it says so, because the LLM algorithm doesn't even claim
    to be truth-preserving.

    That is why they have a disclaimer at the end of the output.

    But, that doesn't matter to pathological liars like you.
  • From Richard Damon@richard@damon-family.org to comp.theory,sci.logic,comp.ai.philosophy on Sun Jul 20 20:24:44 2025
    From Newsgroup: comp.ai.philosophy

    On 7/20/25 6:57 PM, olcott wrote:
    On 7/20/2025 5:11 PM, Richard Damon wrote:
    On 7/20/25 10:33 AM, olcott wrote:
    On 7/20/2025 6:11 AM, joes wrote:
    Am Sat, 19 Jul 2025 16:36:42 -0500 schrieb olcott:
    On 7/19/2025 4:26 PM, wij wrote:
    On Sat, 2025-07-19 at 16:05 -0500, olcott wrote:

    DD correctly simulated by HHH cannot reach past the "if" statement
    thus cannot reach the "return" statement.

    That is roughly what HP proof says.

    Not at all. The HP proof claims that DD correctly simulated by HHH
    reaches the self-contradictory part of DD and thus forms a
    contradiction.

    No. It proves that the direct execution reaches the part that contra-
    dicts HHH's return value.


    <ChatGPT>
    Misrepresentation of Input:
    The standard proof assumes a decider
    H(M,x) that determines whether machine
    M halts on input x.

    But this formulation is flawed, because:
    Turing machines can only process finite
    encodings (e.g. ⟨M⟩), not executable entities
    like M.

    So the valid formulation must be
    H(⟨M⟩,x), where ⟨M⟩ is a string.
    </ChatGPT>



    In other words, your explanation to ChatGPT was just in error, as
    the decider *IS* given the representation of the program M.


    H(M,x) is wrong and H(⟨M⟩,x) is correct.
    You must actually pay attention or you miss
    important details.


    But YOU are the one that said that DDD needs to call HHH(DDD) to meet
    the specifications.

    Thus it is YOUR error,

    Of course, the problem is that this is supposed to be a discussion of
    Turing Machines, and the call you wrote wasn't phrased in terms of
    Turing Machines, so it is YOU that made the category error.

    Sorry, you are just AGAIN showing that you don't understand what you
    are talking about, but just quote by rote stuff that you never
    learned, causing you to just lie.
  • From Fred. Zwarts@F.Zwarts@HetNet.nl to comp.theory,sci.logic,comp.ai.philosophy on Mon Jul 21 10:38:41 2025
    From Newsgroup: comp.ai.philosophy

    Op 20.jul.2025 om 17:18 schreef olcott:
    On 7/20/2025 2:57 AM, Fred. Zwarts wrote:
    Op 19.jul.2025 om 21:19 schreef olcott:
    On 7/19/2025 12:02 PM, Richard Damon wrote:
    On 7/19/25 10:42 AM, olcott wrote:
    On 7/18/2025 3:49 AM, joes wrote:

    That is wrong. It is, as you say, very obvious that HHH cannot
    simulate DDD past the call to HHH. You just draw the wrong conclusion
    from it.
    (Aside: what "seems" to you will convince no one. You can just call
    everybody dishonest. Also, they are not "your reviewers".)


    For the purposes of this discussion this is the
    100% complete definition of HHH. It is the exact
    same one that I give to all the chat bots.

    Termination Analyzer HHH simulates its input until
    it detects a non-terminating behavior pattern. When
    HHH detects such a pattern it aborts its simulation
    and returns 0.

    So, the only HHH that meets your definition is the HHH that never
    detects the pattern and aborts, and thus never returns.


    All of the Chat bots conclude that HHH(DDD) is correct
    to reject its input as non-halting because this input
    specified recursive simulation. They figure this out
    on their own without any prompting.

    https://chatgpt.com/share/687aa4c2-b814-8011-9e7d-b85c03b291eb


    I just read a news item where an AI said that bread with shit is a
    nice dessert. So, we know what a proof by AI means.

    That would be a detectable error.

    There is no detectable error in the above link
    pertaining to the correct return value of HHH(DDD).


    Errors have been detected in the input for the chatbot and pointed out
    to you.
    E.g., that 'HHH simulates its input until it detects a non-terminating
    behavior pattern' contradicts 'When HHH detects such a pattern it
    aborts its simulation and returns 0'.
    When HHH aborts, the simulated HHH does as well, so the case where
    such an HHH would correctly detect non-termination does not exist.

    When a chatbot is fed contradictory input, it is no surprise to see
    invalid conclusions.
  • From Fred. Zwarts@F.Zwarts@HetNet.nl to comp.theory,sci.logic,comp.ai.philosophy on Tue Jul 22 11:12:35 2025
    From Newsgroup: comp.ai.philosophy

    Op 21.jul.2025 om 16:25 schreef olcott:
    On 7/21/2025 3:38 AM, Fred. Zwarts wrote:
    Op 20.jul.2025 om 17:18 schreef olcott:
    On 7/20/2025 2:57 AM, Fred. Zwarts wrote:
    Op 19.jul.2025 om 21:19 schreef olcott:
    On 7/19/2025 12:02 PM, Richard Damon wrote:
    On 7/19/25 10:42 AM, olcott wrote:
    On 7/18/2025 3:49 AM, joes wrote:

    That is wrong. It is, as you say, very obvious that HHH cannot
    simulate DDD past the call to HHH. You just draw the wrong conclusion
    from it.
    (Aside: what "seems" to you will convince no one. You can just call
    everybody dishonest. Also, they are not "your reviewers".)


    For the purposes of this discussion this is the
    100% complete definition of HHH. It is the exact
    same one that I give to all the chat bots.

    Termination Analyzer HHH simulates its input until
    it detects a non-terminating behavior pattern. When
    HHH detects such a pattern it aborts its simulation
    and returns 0.

    So, the only HHH that meets your definition is the HHH that never
    detects the pattern and aborts, and thus never returns.


    All of the Chat bots conclude that HHH(DDD) is correct
    to reject its input as non-halting because this input
    specified recursive simulation. They figure this out
    on their own without any prompting.

    https://chatgpt.com/share/687aa4c2-b814-8011-9e7d-b85c03b291eb


    I just read a news item where an AI told that bread with shit is a
    nice desert. So, we know what a proof by AI means.

    That would be a detectable error.

    There is no detectable error in the above link
    pertaining to the correct return value of HHH(DDD).


    Errors have been detected in the input for the chatbot and pointed
    out to you.
    E.g., that 'HHH simulates its input until it detects a non-terminating
    behavior pattern' contradicts 'When HHH detects such a pattern it
    aborts its simulation and returns 0'.

    As usual, irrelevant claims.
    void Infinite_Recursion()
    {
      Infinite_Recursion();
    }

    void Infinite_Loop()
    {
      HERE: goto HERE;
      return;
    }

    Since neither an infinite loop nor an infinite recursion is specified
    in DDD or in any function it calls directly or indirectly, but only a
    finite recursion performed by HHH until it aborts, this is completely
    irrelevant.


    <sarcasm>
    Sure and we know that you are correct because the
    correct simulation of Infinite_Recursion() and
    Infinite_Loop() would eventually reach their "return"
    statement and terminate normally if we just wait
    long enough.

    void Finite_Recursion (int N) {
      if (N > 0) Finite_Recursion (N - 1);
      printf ("Olcott thinks this is never printed.\n");
    }
    </sarcasm>
