• Re: D simulated by H cannot possibly reach its own simulated final halt state

    From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Sun Nov 16 11:40:34 2025
    From Newsgroup: comp.ai.philosophy

    On 11/16/2025 11:13 AM, joes wrote:
    On Sun, 16 Nov 2025 10:45:04 -0600, olcott wrote:
    On 11/16/2025 10:24 AM, joes wrote:
    On Sun, 16 Nov 2025 10:15:43 -0600, olcott wrote:

    The question is not:
    Can H reach its own final halt state?
    The question is:
    Can D simulated by H reach its simulated final halt state?

    The second includes the first.

    It is not the job of H to report on its own behavior. H is the test
    program, which only reports on the program under test.

    Yes it is. H, as a part of D, is also under test. That’s why you’re simulating it.


    When an input cheats and calls its own decider,
    the decider cannot allow itself to be conned;
    thus it must still report that D simulated by H
    cannot possibly terminate normally.
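The H/D construction being argued over can be sketched in C. This is a hypothetical illustration, not anyone's actual code from the thread: a real simulating H would step through its input, but here H is stubbed to report 0 ("does not halt") purely so the sketch is runnable.

```c
/* Hypothetical sketch of the H/D construction under discussion.
 * H is assumed to be a simulating termination analyzer; this stub
 * always reports 0 ("does not halt") for illustration only. */
int H(int (*p)(void));

int D(void) {
    if (H(D)) {    /* D calls its own decider ("cheats")    */
        for (;;)   /* if H reports halting, D loops forever */
            ;
    }
    return 0;      /* if H reports non-halting, D halts     */
}

int H(int (*p)(void)) {
    /* A real simulating H would simulate p step by step; this
     * stub simply returns the non-halting verdict. */
    (void)p;
    return 0;
}
```

With this stub, directly executing D() halts immediately, which is exactly the tension the thread is arguing about: the verdict H reports and the behavior of the directly executed D disagree.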

    My 28-year-long goal has been to make
    "true on the basis of meaning" computable.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Mon Nov 17 07:34:57 2025
    From Newsgroup: comp.ai.philosophy

    On 11/17/2025 2:46 AM, Mikko wrote:
    On 2025-11-16 16:15:43 +0000, olcott said:

    On 11/16/2025 9:39 AM, joes wrote:
    On Fri, 14 Nov 2025 09:12:55 -0600, olcott wrote:

    The program under test and the test program are separate.

    D includes H.


    The question is not:
    Can H reach its own final halt state?
    The question is:
    Can D simulated by H reach its simulated final halt state?

    If the question H is designed to answer is either
    of those, then H is not a halt decider. The question
    a halt decider would answer is:
    Does D halt if fully executed?


    Turing machine deciders only compute a mapping from
    their [finite string] inputs to an accept or reject
    state on the basis that this [finite string] input
    specifies or fails to specify a semantic or syntactic
    property.
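The notion of a decider computing a mapping from finite-string inputs to an accept or reject state on the basis of a syntactic property can be shown with a trivial, purely illustrative example (the decider and its property are hypothetical, not from the thread):

```c
#include <string.h>

/* A trivial decider in the sense described above: it maps each
 * finite-string input to accept (1) or reject (0) based on a
 * purely syntactic property -- whether the string's length is even. */
int even_length_decider(const char *input) {
    return strlen(input) % 2 == 0;
}
```

Every input gets a definite verdict from the string alone; nothing outside the input is consulted.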

    That the information that HHH is required to report
    on simply is not contained in its input is what makes
    the requirements wrong.
    --
    Copyright 2025 Olcott

    My 28-year goal has been to make
    "true on the basis of meaning" computable.
  • From Bonita Montero@Bonita.Montero@gmail.com to comp.theory,comp.lang.c++,comp.lang.c,comp.ai.philosophy on Tue Nov 25 16:20:32 2025
    From Newsgroup: comp.ai.philosophy

    On 06.11.2025 at 21:48, olcott wrote:
    D simulated by H cannot possibly reach its own
    simulated final halt state.

    I am not going to talk about any nonsense of
    resuming a simulation after we already have this
    final answer.

    We just proved that the input to H(D) specifies
    non-halting. Anything beyond this is flogging a
    dead horse.


    news://news.eternal-september.org/20251104183329.967@kylheku.com

    On 11/4/2025 8:43 PM, Kaz Kylheku wrote:
    On 2025-11-05, olcott <polcott333@gmail.com> wrote:

    The whole point is that D simulated by H
    cannot possibly reach its own simulated
    "return" statement no matter what H does.

    Yes; this doesn't happen while H is running.

    So while H does /something/, no matter what H does,
    that D simulation won't reach the return statement.


    What you do is like thinking in circles before falling asleep.
    It never ends. You're gonna die with that for sure sooner or later.

  • From olcott@polcott333@gmail.com to comp.theory,comp.lang.c++,comp.lang.c,comp.ai.philosophy on Tue Nov 25 09:47:35 2025
    From Newsgroup: comp.ai.philosophy

    On 11/25/2025 9:20 AM, Bonita Montero wrote:
    On 06.11.2025 at 21:48, olcott wrote:
    D simulated by H cannot possibly reach its own
    simulated final halt state.

    I am not going to talk about any nonsense of
    resuming a simulation after we already have this
    final answer.

    We just proved that the input to H(D) specifies
    non-halting. Anything beyond this is flogging a
    dead horse.


    news://news.eternal-september.org/20251104183329.967@kylheku.com

    On 11/4/2025 8:43 PM, Kaz Kylheku wrote:
    On 2025-11-05, olcott <polcott333@gmail.com> wrote:

    The whole point is that D simulated by H
    cannot possibly reach its own simulated
    "return" statement no matter what H does.

    Yes; this doesn't happen while H is running.

    So while H does /something/, no matter what H does,
    that D simulation won't reach the return statement.


    What you do is like thinking in circles before falling asleep.
    It never ends. You're gonna die with that for sure sooner or later.


    I now have four different LLM AI models that prove
    that I am correct, on the basis that they themselves
    derive the proof steps that establish my claim.

    Even Kimi, which was dead set against me, now fully
    understands my new formal foundation for correct
    reasoning.
    --
    Copyright 2025 Olcott

    My 28-year goal has been to make
    "true on the basis of meaning" computable.
  • From Bonita Montero@Bonita.Montero@gmail.com to comp.theory,comp.lang.c++,comp.lang.c,comp.ai.philosophy on Tue Nov 25 16:50:45 2025
    From Newsgroup: comp.ai.philosophy

    On 25.11.2025 at 16:47, olcott wrote:
    On 11/25/2025 9:20 AM, Bonita Montero wrote:
    What you do is like thinking in circles before falling asleep.
    It never ends. You're gonna die with that for sure sooner or later.


    I now have four different LLM AI models that prove
    that I am correct, on the basis that they themselves
    derive the proof steps that establish my claim.
    It doesn't matter whether you're correct. There's no benefit in discussing
    such a theoretical topic for years. You wouldn't even stop if everyone
    told you you're right.

    Even Kimi, which was dead set against me, now fully
    understands my new formal foundation for correct
    reasoning.


  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.lang.c++,comp.lang.c,comp.ai.philosophy on Tue Nov 25 11:37:27 2025
    From Newsgroup: comp.ai.philosophy

    On 11/25/25 10:47 AM, olcott wrote:
    On 11/25/2025 9:20 AM, Bonita Montero wrote:
    On 06.11.2025 at 21:48, olcott wrote:
    D simulated by H cannot possibly reach its own
    simulated final halt state.

    I am not going to talk about any nonsense of
    resuming a simulation after we already have this
    final answer.

    We just proved that the input to H(D) specifies
    non-halting. Anything beyond this is flogging a
    dead horse.


    news://news.eternal-september.org/20251104183329.967@kylheku.com

    On 11/4/2025 8:43 PM, Kaz Kylheku wrote:
    On 2025-11-05, olcott <polcott333@gmail.com> wrote:

    The whole point is that D simulated by H
    cannot possibly reach its own simulated
    "return" statement no matter what H does.

    Yes; this doesn't happen while H is running.

    So while H does /something/, no matter what H does,
    that D simulation won't reach the return statement.


    What you do is like thinking in circles before falling asleep.
    It never ends. You're gonna die with that for sure sooner or later.


    I now have four different LLM AI models that prove
    that I am correct, on the basis that they themselves
    derive the proof steps that establish my claim.

    Even Kimi, which was dead set against me, now fully
    understands my new formal foundation for correct
    reasoning.


    But they only "agree" with your argument, because you LIE in that
    argument that H CAN correctly determine the answer.

    Sorry, arguments based on LIES are just unsound, as you are so
    fundamentally proving.

    All you are doing is proving that you are just an incredibly stupid
    pathological liar who has no concept of what truth or logic actually is.

    That is why you believe your own lies, and reject the facts that people
    point out to you, as they don't match the lie that is your definition
    of "truth".

