• Ben's objection finally fully addressed --- much more clearly

    From olcott@NoOne@NoWhere.com to comp.theory,comp.lang.c,comp.lang.c++ on Tue Dec 9 18:09:08 2025
    From Newsgroup: comp.theory

    On 10/14/2022 7:44 PM, Ben Bacarisse wrote:

    *Original context*
    On 10/14/2022 12:06 PM, olcott wrote:
    Professor Sipser has agreed that this is the correct criteria:

    If simulating halt decider H correctly simulates its input
    D until H correctly determines that its simulated D would
    never stop running unless aborted then H can abort its
    simulation of D and correctly report that D specifies a
    non-halting sequence of configurations.


    I don't think that is the shell game. PO really /has/ an H (it's
    trivial to do for this one case) that correctly determines that P(P)
    *would* never stop running *unless* aborted. He knows and accepts that
    P(P) actually does stop. The wrong answer is justified by what would
    happen if H (and hence a different P) were not what they actually are.


    Turing machine deciders only compute the mapping from
    their [finite string] inputs to an accept or reject
    state on the basis of whether this [finite string]
    input specifies a particular semantic or syntactic
    property.
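    To make the mapping idea concrete, here is a minimal sketch (added
    for illustration; the property chosen, balanced parentheses, is an
    arbitrary assumption) of a decider as a total function from a finite
    string to accept (1) or reject (0):

    #include <string.h>

    /* Toy decider: accepts exactly the finite strings whose parentheses
       are balanced and rejects every other finite string. */
    int balanced_parens_decider(const char *input)
    {
        int depth = 0;
        for (size_t i = 0; i < strlen(input); i++) {
            if (input[i] == '(')
                depth++;
            if (input[i] == ')' && --depth < 0)
                return 0;             /* reject */
        }
        return depth == 0;            /* accept iff balanced */
    }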

    Professor Sipser only agreed with a tautology.

    I will rephrase this tautology in less
    equivocal terms.

    int DD()
    {
        int Halt_Status = HHH(DD);
        if (Halt_Status)
            HERE: goto HERE;
        return Halt_Status;
    }

    int main()
    {
        HHH(DD);
    }

    DD is simulated by HHH according to the semantics
    of the C programming language.

    HHH watches the behavior of its simulated DD
    step-by-step until it sees the recursive
    simulation non-halting behavior pattern.

    This is the correct measure of the behavior that
    the input to HHH(DD) actually specifies.
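
    Here is a minimal self-contained sketch of that abort rule. It is an
    illustration only: the real HHH is described as an instruction-level
    simulator, whereas the setjmp/longjmp abort and the simulating flag
    below are stand-ins I am assuming, with a direct call standing in
    for step-by-step simulation.

    #include <setjmp.h>
    #include <stdio.h>

    typedef int (*func)(void);

    static int simulating = 0;    /* is an outer HHH already simulating? */
    static jmp_buf abort_point;   /* where the outer HHH aborts to       */

    int HHH(func f)
    {
        if (simulating)              /* simulated f has called HHH(f) again */
            longjmp(abort_point, 1); /* the recursive simulation pattern    */

        simulating = 1;
        if (setjmp(abort_point)) {   /* pattern seen: abort the simulation  */
            simulating = 0;
            return 0;                /* report non-halting                  */
        }
        f();                         /* stand-in for stepwise simulation    */
        simulating = 0;
        return 1;                    /* simulated f reached its return      */
    }

    int DD(void)
    {
        int Halt_Status = HHH(DD);
        if (Halt_Status)
            for (;;) ;               /* HERE: goto HERE */
        return Halt_Status;
    }

    int main(void)
    {
        printf("HHH(DD) = %d\n", HHH(DD));    /* prints HHH(DD) = 0 */
        return 0;
    }

    Note that under this toy rule HHH(DD) reports 0, while DD() called
    directly still returns: which of those two behaviors HHH is required
    to report on is exactly what the rest of this thread disputes.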

    The provably correct first paragraph requires
    that HHH report on this basis.

    The halting problem itself commits a category
    error by requiring that HHH report on any other
    behavior.

    The behavior that the halting problem incorrectly
    requires is the behavior of DD() executed directly
    from main(), where DD() calls HHH(DD).

    The caller of a function is certainly not
    one-and-the-same thing as an argument to
    this same function.
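
    A tiny illustration of that distinction (added for clarity; the
    _stub names are hypothetical stand-ins):

    #include <stdio.h>

    static int executions = 0;

    int DD_stub(void) { return ++executions; }            /* stand-in for DD */

    int HHH_stub(int (*f)(void)) { (void)f; return 0; }   /* only receives f */

    int main(void)
    {
        HHH_stub(DD_stub);  /* DD_stub is an argument: a pointer value, not a call */
        DD_stub();          /* main() is the caller that actually executes it      */
        printf("executions: %d\n", executions);           /* prints executions: 1 */
        return 0;
    }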
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.

    This required establishing a new foundation
    for correct reasoning.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From polcott@polcott333@gmail.com to comp.theory,comp.lang.c,comp.lang.c++ on Tue Dec 9 21:15:54 2025
    From Newsgroup: comp.theory

    On 12/9/2025 6:09 PM, olcott wrote:
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    for correct reasoning.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory on Tue Dec 9 23:04:46 2025
    From Newsgroup: comp.theory

    On 12/9/25 10:15 PM, polcott wrote:

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    for correct reasoning.

    So DO it.

    But first you need to know what those mean.

    And, you need to accept that words have actual meaning, and thus you
    can't change that meaning in the system they were defined in.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Tue Dec 9 23:07:18 2025
    From Newsgroup: comp.theory

    On 12/9/2025 10:04 PM, Richard Damon wrote:
    On 12/9/25 10:15 PM, polcott wrote:

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    for correct reasoning.

    So DO it.

    But first you need to know what those mean.

    And, you need to accept that words have actual meaning, and thus you
    can't change that meaning in the system they were defined in.

    Two different LLMs agreed that I have defined
    a new architecture that solves all of the issues
    for making true computable.
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    for correct reasoning.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory on Wed Dec 10 06:59:29 2025
    From Newsgroup: comp.theory

    On 12/10/25 12:07 AM, olcott wrote:
    On 12/9/2025 10:04 PM, Richard Damon wrote:
    On 12/9/25 10:15 PM, polcott wrote:

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    for correct reasoning.

    So DO it.

    But first you need to know what those mean.

    And, you need to accept that words have actual meaning, and thus you
    can't change that meaning in the system they were defined in.

    Two different LLMs agreed that I have defined
    a new architecture that solves all of the issues
    for making true computable.


    LLMs are proven liars, and are just yes men.

    Your appeal to a non-authority as an authority just proves your stupidity.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Dec 10 07:01:27 2025
    From Newsgroup: comp.theory

    On 12/10/2025 5:59 AM, Richard Damon wrote:
    On 12/10/25 12:07 AM, olcott wrote:
    On 12/9/2025 10:04 PM, Richard Damon wrote:
    On 12/9/25 10:15 PM, polcott wrote:

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    for correct reasoning.

    So DO it.

    But first you need to know what those mean.

    And, you need to accept that words have actual meaning, and thus you
    can't change that meaning in the system they were defined in.

    Two different LLMs agreed that I have defined
    a new architecture that solves all of the issues
    for making true computable.


    LLMs are proven liars, and are just yes men.


    If that were true then you could find at least
    one mistake in the final conclusions of their
    assessment of my work.

    There is no need to actually trust LLMs when
    you can see that they are using sound semantic
    entailment from verified facts and standard
    definitions.

    LLMs have proven to be at least 1000-fold better
    reviewers for at least one key reason.

    They never ever endlessly pretend to not understand
    one simple little thing for the sole purpose of
    remaining consistently disagreeable.
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    for correct reasoning.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory on Wed Dec 10 21:14:18 2025
    From Newsgroup: comp.theory

    On 12/10/25 8:01 AM, olcott wrote:
    On 12/10/2025 5:59 AM, Richard Damon wrote:
    On 12/10/25 12:07 AM, olcott wrote:
    On 12/9/2025 10:04 PM, Richard Damon wrote:
    On 12/9/25 10:15 PM, polcott wrote:

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    for correct reasoning.

    So DO it.

    But first you need to know what those mean.

    And, you need to accept that words have actual meaning, and thus you
    can't change that meaning in the system they were defined in.

    Two different LLMs agreed that I have defined
    a new architecture that solves all of the issues
    for making true computable.


    LLMs are proven liars, and are just yes men.


    If that were true then you could find at least
    one mistake in the final conclusions of their
    assessment of my work.


    But I HAVE, many times; you are just too ignorant to understand the
    error. As you have admitted, you never studied the material, so you
    can't know what the words mean.

    Your failure to reply to those points means you have accepted them without complaint.

    For instance, you say that H can't be asked about the behavior of the
    program represented by the input, but only about what the input
    specifies. The problem is that the input specifies the algorithm of
    the program, and thus its behavior when run.

    If it doesn't, then you gave the wrong input, or your program just
    doesn't meet the requirement.

    Note, we KNOW that such an input can exist, as that is proved by the
    existence of UTMs: they can take such a representation and exactly
    reproduce that behavior.
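
    A minimal sketch of that point (added for illustration, not from the
    post; the three-instruction toy language is an assumption): a plain
    interpreter, given only the finite-string description of a program,
    reproduces exactly the behavior that the description specifies,
    including whether it halts.

    #include <stdio.h>
    #include <string.h>

    /* Toy "UTM": interprets a program supplied only as a finite string.
       Instructions: '+' increment, '-' decrement, 'J' jump back to the
       start if the counter is nonzero; falling off the end halts. */
    long run(const char *program)
    {
        long counter = 0;
        size_t pc = 0;
        while (pc < strlen(program)) {
            switch (program[pc]) {
                case '+': counter++; pc++; break;
                case '-': counter--; pc++; break;
                case 'J': pc = (counter != 0) ? 0 : pc + 1; break;
                default:  pc++; break;
            }
        }
        return counter;
    }

    int main(void)
    {
        /* "++---" halts with counter -1; "+J" would never halt. */
        printf("%ld\n", run("++---"));
        return 0;
    }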

    This means your claim that the input can't mean that is just an
    admission that you lied about your decider being based on a proper
    UTM that you then enhanced. Thus, the meaning of the input is precisely
    defined by that base, unenhanced UTM, which shows that your input halts
    since your decider returns 0.

    Sorry, you are just proving your stupidity.


    There is no need to actually trust LLMs when
    you can see that they are using sound semantic
    entailment from verified facts and standard
    definitions.

    But the problem is you believe your own lies, and thus believe the LLMs
    when they become your echo chamber.


    LLMs have proven to be at least 1000-fold better
    reviewers for at least one key reason.

    In other words, you don't want to know the facts, just the agreement of
    yes men, because Truth isn't your goal.


    They never ever endlessly pretend to not understand
    one simple little thing for the sole purpose of
    remaining consistently disagreeable.


    No, they make very good guesses as to what your ears want to hear, and
    tell you that.
    --- Synchronet 3.21a-Linux NewsLink 1.2