• Re: D simulated by H cannot possibly reach its own simulated final halt state

    From olcott@polcott333@gmail.com to comp.theory,comp.lang.c++,comp.lang.c,comp.ai.philosophy on Tue Nov 25 10:09:50 2025
    From Newsgroup: comp.ai.philosophy

    On 11/25/2025 9:50 AM, Bonita Montero wrote:
    On 25.11.2025 at 16:47, olcott wrote:
    On 11/25/2025 9:20 AM, Bonita Montero wrote:
    What you are doing is like thinking in circles before falling asleep.
    It never ends. You are going to die with it, for sure, sooner or later.


    I now have four different LLM AI models that prove
    I am correct, on the basis that they derive the
    proof steps themselves.

    It doesn't matter if you're correct. There's no benefit in discussing
    such a theoretical topic for years. You won't even stop if everyone
    tells you you're right.

    My whole purpose in this has been to establish a
    new foundation for correct reasoning that gets rid
    of Gödel Incompleteness and Tarski Undefinability
    such that Boolean True(Language L Expression E) is
    consistent and correct for the whole body of
    knowledge that can be expressed in language.
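
    For readers unfamiliar with the term, a sketch of the textbook
    statement of Tarski Undefinability (not the exact formulation used
    here) is the result that such a True predicate has to get around:
    for any consistent formal language L rich enough to represent its
    own syntax, there is no formula True(x) in L such that for every
    sentence of L

        \mathrm{True}(\ulcorner \varphi \urcorner) \leftrightarrow \varphi

    because the diagonal lemma then yields a liar sentence with

        \lambda \leftrightarrow \lnot\,\mathrm{True}(\ulcorner \lambda \urcorner)

    which is contradictory.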

    The timing for such a system is perfect because it
    could solve the LLM AI reliability issues. Once
    it does that, I will no longer need to talk about
    it on conventional forums. At that point all of
    my talks will be formal presentations at symposiums.


    Even Kimi, which was dead set against me, now
    fully understands my new formal foundation for
    correct reasoning.


    --
    Copyright 2025 Olcott

    My 28-year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,comp.lang.c++,comp.lang.c,comp.ai.philosophy on Tue Nov 25 17:33:58 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-25, olcott <polcott333@gmail.com> wrote:
    On 11/25/2025 9:50 AM, Bonita Montero wrote:
    On 25.11.2025 at 16:47, olcott wrote:
    On 11/25/2025 9:20 AM, Bonita Montero wrote:
    What you are doing is like thinking in circles before falling asleep.
    It never ends. You are going to die with it, for sure, sooner or later.


    I now have four different LLM AI models that prove
    I am correct, on the basis that they derive the
    proof steps themselves.

    It doesn't matter if you're correct. There's no benefit in discussing
    such a theoretical topic for years. You won't even stop if everyone
    tells you you're right.

    My whole purpose in this has been to establish a
    new foundation for correct reasoning that gets rid

    Unfortunately, your reasoning was proven wrong before
    you were born, and your computer program does
    not show what you say it does.
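
    For readers new to the thread, the textbook shape of the 1936 proof
    referred to above is the diagonal construction sketched below. The
    names D and H are taken from the thread subject; H here is only a
    hypothetical stub, a sketch of the idea rather than anyone's actual
    program.

        #include <stdio.h>

        typedef void (*func)(void);

        /* HYPOTHETICAL halt decider: supposed to return 1 if p() halts,
           0 if it does not.  The constant below is just a placeholder;
           the point of the construction is that no implementation can
           answer correctly for every p. */
        int H(func p)
        {
            return 1;
        }

        /* D is built from H to do the opposite of whatever H predicts. */
        void D(void)
        {
            if (H(D))       /* if H claims "D halts" ...        */
                for (;;) ;  /* ... then D loops forever,        */
                            /* otherwise D halts immediately.   */
        }

        int main(void)
        {
            /* Whichever value H(D) returns, D's actual behavior
               contradicts it, so no total, always-correct H exists. */
            printf("H(D) = %d\n", H(D));
            return 0;
        }

    Whatever answer any concrete H gives for D, D does the opposite, which
    is why no such decider can exist, no matter how H works internally.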

    The timing for such a system is perfect because it
    could solve the LLM AI reliability issues.

    You have no idea how LLMs work, what is at the root of the LLM
    reliability issues, or how to even take the first step in fixing them.

    You have zero qualifications for doing anything like that, and no chance
    of developing the qualifications; that window is long gone.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2