• Rejecting expressions of formal language having pathological self-reference

    From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 12 08:45:34 2025
    From Newsgroup: comp.ai.philosophy

    Rejecting expressions of formal language
    having pathological self-reference

    Explains how expressions with pathological self-reference
    can simply be rejected as semantically/syntactically
    unsound, thus preventing undefinability and
    undecidability.

    This sentence is not true: "This sentence is not true"
    is true only because the inner sentence is semantically
    unsound. The inner sentence is formalized in Minimal
    Type Theory as LP := ~True(LP).
    (where A := B means A is defined as B).

    https://philpapers.org/rec/OLCREO

    Can someone review my actual reasoning
    elaborated in the paper?
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 12 11:57:04 2025
    From Newsgroup: comp.ai.philosophy

    On 11/12/2025 8:45 AM, olcott wrote:
    Rejecting expressions of formal language
    having pathological self-reference

    Explained how expressions with pathological self
    reference can simply be rejected as semantically/
    syntactically unsound thus preventing undefinability,
    and undecidability.

    This sentence is not true: "This sentence is not true"
    is true only because the inner sentence is semantically
    unsound. The inner sentence is formalized in Minimal
    Type Theory as LP := ~True(LP).
    (where A := B means A is defined as B).

    https://philpapers.org/rec/OLCREO

    Can someone review my actual reasoning
    elaborated in the paper?


    *ChatGPT critique of the above paper* https://chatgpt.com/share/6914ab34-4440-8011-9395-8bec2af5f82f

    The huge advantage of LLM systems is that they do not
    begin their review on the basis that [Olcott is wrong]
    is an axiom. No humans have ever been able to do this
    in thousands of reviews across dozens of forums.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Alan Mackenzie@acm@muc.de to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 12 18:12:44 2025
    From Newsgroup: comp.ai.philosophy

    [ Followup-To: set ]

    In comp.theory olcott <polcott333@gmail.com> wrote:

    [ .... ]

    The huge advantages of LLM systems is that they do not
    begin their review on the basis that [Olcott is wrong]
    is an axiom. No humans have ever been able to do this
    in thousands of reviews across dozens of forums.

    The huge disadvantage of LLM systems is that they begin their review on
    the basis that Olcott is right. Intelligent people do not do this.
    They evaluate what Olcott has written and pronounce it either right or
    (much more usually) wrong.

    --
    Alan Mackenzie (Nuremberg, Germany).

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 12 18:39:57 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-12, olcott <polcott333@gmail.com> wrote:
    The huge advantages of LLM systems is that they do not
    begin their review on the basis that [Olcott is wrong]
    is an axiom. No humans have ever been able to do this
    in thousands of reviews across dozens of forums.

    Your principal modus operandi is that you reject any piece of evidence
    which contradicts your set view in any matter.

    Here you are also doing it again: completely overlooking all the times
    someone has agreed with you on some point, and declaring that everyone
    has a /personal/ bias against you.

    Anyway, that /is/ the right approach toward anything. Anything anyone
    says should be suspected of having a factual or logical flaw, until
    proven otherwise.

    Here is an idea for you: maybe try being right 95% of the time, for a
    while. Say two weeks, or a month. Instead of your usual 95%+ wrong.

    There is a human bias at play here and I will explain it to you:
    people are more motivated to respond when you are wrong.
    If you're 80% wrong, the 0.8 fraction of your remarks that is
    wrong will get more engagement than the 0.2 that are right.
    Thus it might be that, among those of your remarks which fetch
    engagement, the fraction which are wrong might be amplified to
    something much higher, like 0.97.

    This is why social networking algorithms are rigged to spread
    rage bait: to drum up engagement.

    It's amazing you don't have the maturity to know all this on your own;
    that it has to be explained to a grown up.

    Most of us here struggle not to say an incorrect thing in a comp.*
    newsgroup or elsewhere, yet you have practically made a sport out of it;
    you say some wrong shit more times in a day than an NBA player takes a
    shot at the hoop.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 12 12:52:32 2025
    From Newsgroup: comp.ai.philosophy

    On 11/12/2025 12:39 PM, Kaz Kylheku wrote:
    On 2025-11-12, olcott <polcott333@gmail.com> wrote:
    The huge advantages of LLM systems is that they do not
    begin their review on the basis that [Olcott is wrong]
    is an axiom. No humans have ever been able to do this
    in thousands of reviews across dozens of forums.

    Your principal modus operandi is that you reject any piece of evidence
    which contradicts your set view in any matter.


    Not at all. Not ever.

    Here you are also doing it again: completely overlooking all the times
    someone has agreed with you on some point, and declaring that everyone
    has a /personal/ bias against you.


    People on these forums have only agreed with
    me at most 1% of the time, and in every case
    besides Ben the agreement was on trivialities.

    You are still trying to get away with the utter
    nonsense that once correct non-termination
    behavior criteria have been correctly met,
    you can try again and get a different
    result. I would really like to think that
    you are not a damned liar, yet no alternative
    seems reasonably plausible.

    Anyway, that /is/ the right approach toward anything. Anything anyone
    says should be suspected of having a factual or logical flaw, until
    proven otherwise.


    Not when the basis of proof requires them
    to actually pay close attention when they
    are utterly unwilling to do this because they
    are so sure that I must be wrong.

    Here is an idea for you: maybe try being right 95% of the time, for a
    while. Say two weeks, or a month. Instead of your usual, 95%+ wrong.


    I have been completely right on the essence of
    what I have been saying for 22 years.

    There is a human bias at play here and I will explain it to you:
    people are more motivated to respond when you are wrong.

    OK, some honesty; that is refreshing.

    If you're 80% wrong, the 0.8 fraction of your remarks that is
    wrong will get more engagement than the 0.2 that are right.
    Thus it might be that, among those of your remarks which fetch
    engagement, the fraction which are wrong might be amplified to
    something much higher, like 0.97.

    This is why social networking algorithms are rigged to spread
    rage bait: to drum up engagement.

    It's amazing you don't have the maturity to know all this on your own;
    that it has to be explained to a grown up.


    It is a verified fact that I have been continually
    correct on the essence of what I have said for 22
    continuous years.

    Now I have LLM systems that show the complete details
    of exactly how and why I am correct. If they were
    simply "yes men" they could not possibly do this.

    Most of us here struggle not to say an incorrect thing. in a comp.*
    newsgroup or elsewhere, yet here you practically made a sport out of it;
    you say some wrong shit more times in a day than an NBA player takes a
    shot at the hoop.


    I say things that do not conform to conventional
    wisdom and people here don't even understand the
    reasoning behind conventional wisdom.

    When I point out the error in this reasoning people
    here are utterly helpless. The most they can do is
    say that I must be wrong entirely on the basis
    that I contradict conventional wisdom.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 13 02:22:38 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-12, olcott <polcott333@gmail.com> wrote:
    https://philpapers.org/rec/OLCREO

    This garbage doesn't even feign an attempt at hiding that it's a fucking
    chat session with Claude AI.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 12 20:32:52 2025
    From Newsgroup: comp.ai.philosophy

    On 11/12/2025 8:22 PM, Kaz Kylheku wrote:
    On 2025-11-12, olcott <polcott333@gmail.com> wrote:
    https://philpapers.org/rec/OLCREO

    This garbage doesn't even feign an attempt at hiding that it's a fucking
    chat session with Claude AI.


    Of course I don't. It proves that I am
    correct, try and find an actual error.
    I had ChatGPT look at this too.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 13 02:36:08 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-12, olcott <polcott333@gmail.com> wrote:
    On 11/12/2025 12:39 PM, Kaz Kylheku wrote:
    On 2025-11-12, olcott <polcott333@gmail.com> wrote:
    The huge advantages of LLM systems is that they do not
    begin their review on the basis that [Olcott is wrong]
    is an axiom. No humans have ever been able to do this
    in thousands of reviews across dozens of forums.

    Your principal modus operandi is that you reject any piece of evidence
    which contradicts your set view in any matter.

    Not at all. Not ever.

    Even now, in this post.

    Here you are also doing it again: completely overlooking all the times
    someone has agreed with you on some point, and declaring that everyone
    has a /personal/ bias against you.

    People on these forums have only agreed with
    me at most 1% of the time and the only case

    1% is much larger than zero.

    besides Ben the agreement was on trivialities.

    Grasping trivialities is the extent of your skill, so that's
    all you get.

    For me, it's a triviality that your H's are deciding the D's
    wrongly, but at that point we go over your head.

    You are still trying to get away with the utter
    nonsense that once correct non-termination
    behavior criteria have been correctly met

    your criteria cannot decide the fact that the NON-DIAGONAL
    test case void DDD(void) { HHH(DDD); return; } terminates.

    The 1 return value is correct, and HHH(DDD) /can/ return it.
    DDD will not "behave opposite"; it will terminate as the 1 says.

    that you can try again and get a different
    result. I would really like to think that
    you are not a damned liar, yet no alternative
    seems reasonably plausible.

    Right; no alternative even seems plausible, like that:
    - Turing was right;
    - Church was right;
    - Goedel was right;
    - C. A. R. Hoare was right;
    - P. Olcott was wrong;
    ....

    Nah, cannot be! Not even /plausible/ seeming.

    Anyway, that /is/ the right approach toward anything. Anything anyone
    says should be suspected of having a factual or logical flaw, until
    proven otherwise.


    Not when the basis of proof requires them
    to actually pay close attention when they
    are utterly unwilling to do this because they
    are so sure that I must be wrong.

    You have a small body of claims that you have repeated for years.

    The claims have been examined in excruciating detail by all
    your interlocutors.

    You have /never/ responded to any criticism. You will not analyze
    any rebuttal, look at any code, nothing.

    Just casual dismissals, accusations of lying, stupidity, ...

    Here is an idea for you: maybe try being right 95% of the time, for a
    while. Say two weeks, or a month. Instead of your usual, 95%+ wrong.

    I have been completely right on the essence of
    what I have been saying for 22 years.

    L'essence; that's French for "gas".

    There is a human bias at play here and I will explain it to you:
    people are more motivated to respond when you are wrong.

    OK, some honesty; that is refreshing.

    If you're 80% wrong, the 0.8 fraction of your remarks that is
    wrong will get more engagement than the 0.2 that are right.
    Thus it might be that, among those of your remarks which fetch
    engagement, the fraction which are wrong might be amplified to
    something much higher, like 0.97.

    This is why social networking algorithms are rigged to spread
    rage bait: to drum up engagement.

    It's amazing you don't have the maturity to know all this on your own;
    that it has to be explained to a grown up.


    It is a verified fact that I have been continually

    Verified by what third party? Oh, you mean Claude AI and Chat GPT?

    Your casual dismissals and personal attacks do not verify anything.

    Get someone with serious academic credentials to "validate"
    your shit, then talk.

    Now I have LLM systems that show the complete details
    of exactly how and why I am correct. If they were
    simply "yes men" they could not possibly do this.

    The commercially available LLM systems provided by Anthropic,
    OpenAI and others all have system prompts telling them not to
    antagonize the user.

    An extremely common complaint (probably in the top five)
    is that they are "sycophantic".

    Most of us here struggle not to say an incorrect thing. in a comp.*
    newsgroup or elsewhere, yet here you practically made a sport out of it;
    you say some wrong shit more times in a day than an NBA player takes a
    shot at the hoop.


    I say things that do not conform to conventional
    wisdom and people here don't even understand the
    reasoning behind conventional wisdom.

    When I point out the error in this reasoning people
    here are utterly helpless. The most they can do is
    say that I must be wrong entirely on the basis
    that I contradict conventional wisdom.

    You do not contradict conventional wisdom; you contradict
    air-tight logic.

    Conventional wisdom is shit like "eating carrots makes you
    see sharper".

    Incomputability of halting is entirely unconventional; it took
    someone very smart to recognize the problem and answer it.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 13 02:38:58 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-13, olcott <polcott333@gmail.com> wrote:
    On 11/12/2025 8:22 PM, Kaz Kylheku wrote:
    On 2025-11-12, olcott <polcott333@gmail.com> wrote:
    https://philpapers.org/rec/OLCREO

    This garbage doesn't even feign an attempt at hiding that it's a fucking
    chat session with Claude AI.


    Of course I don't. It proves that I am
    correct, try and find an actual error.

    No CS academic is going to do anything but swipe left.

    You will have to do your own thinking and writing if you want to be
    taken seriously. Not that that's a guarantee (and we've all seen
    what your own writing looks like).

    I had ChatGPT look at this too.

    Oh well, then, that's an intellectual slam dunk, obviously.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 12 20:57:14 2025
    From Newsgroup: comp.ai.philosophy

    On 11/12/2025 8:36 PM, Kaz Kylheku wrote:
    On 2025-11-12, olcott <polcott333@gmail.com> wrote:
    On 11/12/2025 12:39 PM, Kaz Kylheku wrote:
    On 2025-11-12, olcott <polcott333@gmail.com> wrote:
    The huge advantages of LLM systems is that they do not
    begin their review on the basis that [Olcott is wrong]
    is an axiom. No humans have ever been able to do this
    in thousands of reviews across dozens of forums.

    Your principal modus operandi is that you reject any piece of evidence
    which contradicts your set view in any matter.

    Not at all. Not ever.

    Even now, in this post.

    Here you are also doing it again: completely overlooking all the times
    someone has agreed with you on some point, and declaring that everyone
    has a /personal/ bias against you.

    People on these forums have only agreed with
    me at most 1% of the time and the only case

    1% is much larger than zero.

    besides Ben the agreement was on trivialities.

    Grasping trivialities is the extent of your skill, so that's
    all you get.


    You are not following the actual reasoning of the
    paper. You leap to the conclusion that I am wrong.
    That is not you pointing out an error.

    You don't even know what a cycle in the directed
    graph of the evaluation sequence of an expression
    is, so you lack any basis to critique this.

    IT IS NOT a triviality.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 13 03:22:29 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-13, olcott <polcott333@gmail.com> wrote:
    On 11/12/2025 8:36 PM, Kaz Kylheku wrote:
    On 2025-11-12, olcott <polcott333@gmail.com> wrote:
    On 11/12/2025 12:39 PM, Kaz Kylheku wrote:
    On 2025-11-12, olcott <polcott333@gmail.com> wrote:
    The huge advantages of LLM systems is that they do not
    begin their review on the basis that [Olcott is wrong]
    is an axiom. No humans have ever been able to do this
    in thousands of reviews across dozens of forums.

    Your principal modus operandi is that you reject any piece of evidence
    which contradicts your set view in any matter.

    Not at all. Not ever.

    Even now, in this post.

    Here you are also doing it again: completely overlooking all the times
    someone has agreed with you on some point, and declaring that everyone
    has a /personal/ bias against you.

    People on these forums have only agreed with
    me at most 1% of the time and the only case

    1% is much larger than zero.

    besides Ben the agreement was on trivialities.

    Grasping trivialities is the extent of your skill, so that's
    all you get.


    You are not following the actual reasoning of the
    paper. You leap to the conclusion that I am wrong.
    That is not you pointing out an error.

    That's what you did to my code.

    I'm categorically rejecting your paper because it
    is AI slop that takes /zero/ effort to generate,
    but /nonzero/ effort to go through and validate.

    AI generation is a denial-of-service attack
    on people's attention.

    You don't even know what a cycle in the directed
    graph of the evaluation sequence of an expression
    is so you lack any basis to critique this.

    On the contrary, I have designed a lazy-evaluation language
    feature that detects cycles in evaluation.

    Live demo:

    Let's start with a good case: z depends on x,
    x depends on y, and y is just 42:

    (mlet ((z (* 2 x))
           (y 42)
           (x (+ 1 y)))
      (list x y z))
    (43 42 86)

    This mlet (magic let, mutual let) construct lets
    you specify mutually dependent variables in any order.

    Now, what if y depends on z? Like y = z / 2?

    (mlet ((z (* 2 x))
           (y (/ z 2))
           (x (+ 1 y)))
      (list x y z))
    ** expr-1:1: force: recursion forcing delayed form (+ 1 y) (expr-1:3)

    The purpose of mlet isn't to do arithmetic formulas in
    hard-to-follow orders; it's not that sort of shits and giggles.

    What it does is let you instantiate self-referential data structures.

    We turn on circle notation to catch structures with shared substructure including cycles, like circular lists:

    (set *print-circle* t)
    t

    Now, this is the wrong way to try to make the infinite circular
    list (0 1 0 1 0 ...):

    (mlet ((x (cons 0 y))
           (y (cons 1 x)))
      x)
    ** expr-2:1: force: recursion forcing delayed form (cons 0 y) (expr-2:1)

    We need to use the lazy version of cons, the lcons macro, together
    with mlet:

    (mlet ((x (lcons 0 y))
           (y (lcons 1 x)))
      x)
    #1=(0 1 . #1#)

    The notation #1=(0 1 . #1#) represents the circular list. It
    has two cells holding 0 and 1, with the final CDR field pointing
    back to the first cell.

    We can turn off the circle notation and take sublists of the list:

    (set *print-circle* nil)
    nil
    (mlet ((x (lcons 0 y))
           (y (lcons 1 x)))
      [x 0..20])
    (0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1)
    (mlet ((x (lcons 0 y))
           (y (lcons 1 x)))
      [x 0..30])
    (0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1)

    Self-reference or mutual reference is very useful in lazy computing,
    and in the construction of both circular and lazy structures,
    without explicit assignment.

    What have you developed other than taking someone else's x86 simulator and adding a bit of code around it?
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 12 22:43:06 2025
    From Newsgroup: comp.ai.philosophy

    On 11/12/2025 9:22 PM, Kaz Kylheku wrote:
    On 2025-11-13, olcott <polcott333@gmail.com> wrote:
    On 11/12/2025 8:36 PM, Kaz Kylheku wrote:
    On 2025-11-12, olcott <polcott333@gmail.com> wrote:
    On 11/12/2025 12:39 PM, Kaz Kylheku wrote:
    On 2025-11-12, olcott <polcott333@gmail.com> wrote:
    The huge advantages of LLM systems is that they do not
    begin their review on the basis that [Olcott is wrong]
    is an axiom. No humans have ever been able to do this
    in thousands of reviews across dozens of forums.

    Your principal modus operandi is that you reject any piece of evidence
    which contradicts your set view in any matter.

    Not at all. Not ever.

    Even now, in this post.

    Here you are also doing it again: completely overlooking all the times
    someone has agreed with you on some point, and declaring that everyone
    has a /personal/ bias against you.

    People on these forums have only agreed with
    me at most 1% of the time and the only case

    1% is much larger than zero.

    besides Ben the agreement was on trivialities.

    Grasping trivialities is the extent of your skill, so that's
    all you get.


    You are not following the actual reasoning of the
    paper. You leap to the conclusion that I am wrong.
    That is not you pointing out an error.

    That's what you did to my code.


    Your code essentially claims that infinite recursion
    stops when you monkey with it.

    I'm categorically rejecting your paper because it
    is AI slop that takes /zero/ effort to generate,
    but /nonzero/ effort to go through and validate.


    You cannot show evidence of that.
    Do you know what a cycle in the directed graph of
    the evaluation sequence of a formal expression is?

    Mikko didn't have a clue and claimed that I am
    wrong anyway.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 12 22:48:05 2025
    From Newsgroup: comp.ai.philosophy

    On 11/12/2025 8:38 PM, Kaz Kylheku wrote:
    On 2025-11-13, olcott <polcott333@gmail.com> wrote:
    On 11/12/2025 8:22 PM, Kaz Kylheku wrote:
    On 2025-11-12, olcott <polcott333@gmail.com> wrote:
    https://philpapers.org/rec/OLCREO

    This garbage doesn't even feign an attempt at hiding that it's a fucking
    chat session with Claude AI.


    Of course I don't. It proves that I am
    correct, try and find an actual error.

    No CS academic is going to do anything but swipe left.

    You will have to do your own thinking and writing if you want to be
    taken seriously. Not that that's a guarantee (and we've all seen
    what your own writing looks like).

    I had ChatGPT look at this too.

    Oh well, then, that's an intellectual slam dunk, obviously.


    Unlike humans, LLMs don't reject my reasoning out of hand
    without even looking at it. Unlike humans, these newer
    LLMs can demonstrate the equivalent of deep understanding.
    Two years ago they acted like they had Alzheimer's if you
    exceeded their 3000-word limit. They are much smarter now.
    Do you know what a context window is (without looking it up)?
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 13 04:50:20 2025
    From Newsgroup: comp.ai.philosophy

    On 13/11/2025 02:38, Kaz Kylheku wrote:
    On 2025-11-13, olcott <polcott333@gmail.com> wrote:

    <snip>

    I had ChatGPT look at this too.

    Oh well, then, that's an intellectual slam dunk, obviously.

    To be fair to ChatGPT, if you present it with Olcott's gibberish
    *without* first carefully prepping it with falsehoods like
    'correctly simulates', it unhesitatingly identifies the whole
    thing as a load of gibberish.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 12 23:00:04 2025
    From Newsgroup: comp.ai.philosophy

    On 11/12/2025 10:50 PM, Richard Heathfield wrote:
    On 13/11/2025 02:38, Kaz Kylheku wrote:
    On 2025-11-13, olcott <polcott333@gmail.com> wrote:

    <snip>

    I had ChatGPT look at this too.

    Oh well, then, that's an intellectual slam dunk, obviously.

    To be fair to ChatGPT, if you present it with Olcott's gibberish
    *without* first carefully prepping it with falsehoods like 'correctly simulates', it unhesitatingly identifies the whole thing as a load of gibberish.


    I don't use the words "correctly simulates" any more
    because people trying as hard as they can to be
    disagreeable weasel-word it to contrive a meaning
    different from D simulated by H according to the
    semantics of the C programming language.

    The biggest problem with the LLMs is they really
    want to guess the wrong answer rather than perform
    the actual execution trace.

    *I did not lead them on in this case at all*

    https://www.researchgate.net/publication/396916355_Halting_Problem_Simulation_in_C
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 13 00:16:08 2025
    From Newsgroup: comp.ai.philosophy

    On 11/12/2025 8:50 PM, Richard Heathfield wrote:
    On 13/11/2025 02:38, Kaz Kylheku wrote:
    On 2025-11-13, olcott <polcott333@gmail.com> wrote:

    <snip>

    I had ChatGPT look at this too.

    Oh well, then, that's an intellectual slam dunk, obviously.

    To be fair to ChatGPT, if you present it with Olcott's gibberish
    *without* first carefully prepping it with falsehoods like 'correctly simulates', it unhesitatingly identifies the whole thing as a load of gibberish.


    Olcott likes to whip, whip, them into shape! Humm... Scary? ;^o
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 13 08:44:50 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-13, olcott <polcott333@gmail.com> wrote:
    On 11/12/2025 9:22 PM, Kaz Kylheku wrote:
    On 2025-11-13, olcott <polcott333@gmail.com> wrote:
    On 11/12/2025 8:36 PM, Kaz Kylheku wrote:
    On 2025-11-12, olcott <polcott333@gmail.com> wrote:
    On 11/12/2025 12:39 PM, Kaz Kylheku wrote:
    On 2025-11-12, olcott <polcott333@gmail.com> wrote:
    The huge advantages of LLM systems is that they do not
    begin their review on the basis that [Olcott is wrong]
    is an axiom. No humans have ever been able to do this
    in thousands of reviews across dozens of forums.

    Your principal modus operandi is that you reject any piece of evidence
    which contradicts your set view in any matter.

    Not at all. Not ever.

    Even now, in this post.

    Here you are also doing it again: completely overlooking all the times
    someone has agreed with you on some point, and declaring that everyone
    has a /personal/ bias against you.

    People on these forums have only agreed with
    me at most 1% of the time and the only case

    1% is much larger than zero.

    besides Ben the agreement was on trivialities.

    Grasping trivialities is the extent of your skill, so that's
    all you get.


    You are not following the actual reasoning of the
    paper. You leap to the conclusion that I am wrong.
    That is not you pointing out an error.

    That's what you did to my code.


    Your code essentially claims that infinite recursion
    stops when you monkey with it.

    You're welcome to point out what exactly you mean by "monkey" and which
    lines of code are doing that.

    Which bits am I flipping that constitute monkeying?

    Remember, the code takes the state of an abandoned simulation
    /exactly/ as it was left by HHH (or whichever decider).

    And then it steps that simulation forward in exactly the correct way,
    the same way that HHH previously stepped it: it passes precisely the
    correct slave_state, and other arguments, to DebugStep.

    The code does not manipulate the content of slave_state other than
    stepping it with DebugStep (your function, the same one used by HHH).
Between the time HHH abandoned the simulation, and the new code
    starts stepping it again, nothing has touched slave_state or
    slave_stack.

    So again, what is monkeying and where is it happening?

You've had several weeks to back up your claim ... and nothing.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 13 09:38:59 2025
    From Newsgroup: comp.ai.philosophy

    On 11/13/2025 2:44 AM, Kaz Kylheku wrote:
    On 2025-11-13, olcott <polcott333@gmail.com> wrote:
    On 11/12/2025 9:22 PM, Kaz Kylheku wrote:
    On 2025-11-13, olcott <polcott333@gmail.com> wrote:
    On 11/12/2025 8:36 PM, Kaz Kylheku wrote:
    On 2025-11-12, olcott <polcott333@gmail.com> wrote:
    On 11/12/2025 12:39 PM, Kaz Kylheku wrote:
    On 2025-11-12, olcott <polcott333@gmail.com> wrote:
    The huge advantages of LLM systems is that they do not
    begin their review on the basis that [Olcott is wrong]
    is an axiom. No humans have ever been able to do this
    in thousands of reviews across dozens of forums.

Your principal modus operandi is that you reject any piece of evidence
which contradicts your set view in any matter.

    Not at all. Not ever.

    Even now, in this post.

Here you are also doing it again: completely overlooking all the times
someone has agreed with you on some point, and declaring that everyone
has a /personal/ bias against you.

    People on these forums have only agreed with
    me at most 1% of the time and the only case

    1% is much larger than zero.

    besides Ben the agreement was on trivialities.

    Grasping trivialities is the extent of your skill, so that's
    all you get.


    You are not following the actual reasoning of the
    paper. You leap to the conclusion that I am wrong.
    That is not you pointing out an error.

    That's what you did to my code.


    Your code essentially claims that infinite recursion
    stops when you monkey with it.

You're welcome to point out what exactly you mean by "monkey" and which
lines of code are doing that.


Once D simulated by H correctly matches its correct
non-halting behavior pattern, doing anything besides
aborting the simulation and rejecting the input is cheating.

    Which bits am I flipping that constitute monkeying?

    Remember, the code takes the state of an abandoned simulation
    /exactly/ as it was left by HHH (or whichever decider)

    And then it steps that simulation forward in exactly the correct way,
    the same way that HHH previously stepped it: it passes precisely the
    correct slave_state, and other arguments, to DebugStep.

    The code does not manipulate the content of slave_state other than
    stepping it with DebugStep (your function, the same one used by HHH).
Between the time HHH abandoned the simulation, and the new code
    starts stepping it again, nothing has touched slave_state or
    slave_stack.

    So again, what is monkeying and where is it happening?

You've had several weeks to back up your claim ... and nothing.

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 13 18:57:34 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-13, olcott <polcott333@gmail.com> wrote:
    On 11/13/2025 2:44 AM, Kaz Kylheku wrote:
    Your code essentially claims that infinite recursion
    stops when you monkey with it.

By the way, it is mostly your code, and someone else's x86utm.

You're welcome to point out what exactly you mean by "monkey" and which
lines of code are doing that.

Once D simulated by H correctly matches its correct
non-halting behavior pattern, doing anything besides
aborting the simulation and rejecting the input is cheating.

    Essentially, as the Crowned King of Halting, you are just /decreeing/ an
    edict making it illegal to gather evidence as to whether H made the
    correct decision. Evidence such as looking at the bits H left behind to
    see whether they really comprise the state of non-terminating
    simulation.

    And you think that is how you conduct CS research; like that's how it
    works in academia?

    Moreover, you bemoan people who are "closed-minded" and cling to
    "conventional wisdom" by which they assume you are wrong; yet if those
    people just accepted arbitrary rules about what they may or may not investigate, to avoid producing results displeasing to the King,
    then they are fine intellectuals.

    I don't see what remaining conversation is to be had here.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Fri Nov 14 00:09:31 2025
    From Newsgroup: comp.ai.philosophy

    On 12/11/2025 17:57, olcott wrote:
    On 11/12/2025 8:45 AM, olcott wrote:

    Noisy:

    This sentence is not true: "This sentence is not true"
    is true only because the inner sentence is semantically
    unsound.

    Woah! Because your post provides some meaning for the interpretation of
    your paper I think the above needs to be addressed.

    We have the name of a sentence "This sentence is not true"
    We have a sentence about it:

    "This sentence is not true" is true only because the inner sentence is semantically unsound.

    And we have a sentence that is constructed like a lambda expression but
    using something a bit like a de bruijn reference turned inside out:

    This sentence is not true: "This sentence is not true" is true only
    because the inner sentence is semantically unsound.

    but with a noisy surrounding fluff beta-ish-reducing to:

    ["This sentence is not true" is true only because the inner sentence is semantically unsound] is not true

    in there we still have the foremost interesting
    outie-de-bruijn-ish-innie reference "the inner sentence" which I suppose therein refers to the referent of the unique syntactically most
    contained nominal phrase that references a sentence, to wit, the
    referent of "This sentence" in "This sentence is not true" in the Noisy.

    then the whole says it's not true that some purported semantic
    unsoundness of what that reference refers to is the sole basis for
    inferring some unstated notion of nontruth about the referent of "This sentence" in "This sentence is not true" in the Noisy.


    Have I understood what you're saying, minister?

    [Why "minister", see https://www.youtube.com/watch?v=qVO85anasrA ]


    The inner sentence is formalized in Minimal
    Type Theory as LP := ~True(LP).
    (where A := B means A is defined as B).

    I very much doubt that but you filled my head with noise. Does your
    Noisy (above) really constrain "This sentence is not true" such that
    your MTT formalisation of it is accurate? You should be noiselessly
    patient, so I should expect so, but I'd like to read that you think it
    was all suitably constraining wordage before I think any further because
    I think it's not an accurate formalisation ("encoding").


    More pedestrianly: Is := symmetric? ie, does A := B entail B := A and B
    := A entail A := B (in the formation rules of sentences of MTT from
    other sentences of MTT)?


    https://philpapers.org/rec/OLCREO

    Can someone review my actual reasoning
    elaborated in the paper?

    No: it is an AI chatlog which is shit.



    The huge advantages of LLM systems is that they do not
    begin their review on the basis that [Olcott is wrong]
    is an axiom. No humans have ever been able to do this
    in thousands of reviews across dozens of forums.

    I typically begin "everyone is wrong, but /how/ exactly" until I can't
    justify my premise any more. Then I stay quiet until I wake up suddenly realising someone was particularly right in some small way, then I
begrudgingly acknowledge that publicly, and encourage the other wrong
    people to find a way to correct my temporary wrong-blindness.

    That is on the basis that of all the things that can be said almost none
    of them are right with just a few exceptional isolated points dotted around.


    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 13 18:45:49 2025
    From Newsgroup: comp.ai.philosophy

    On 11/13/2025 6:09 PM, Tristan Wibberley wrote:
    On 12/11/2025 17:57, olcott wrote:
    On 11/12/2025 8:45 AM, olcott wrote:

    Noisy:

    This sentence is not true: "This sentence is not true"
    is true only because the inner sentence is semantically
    unsound.

    Woah! Because your post provides some meaning for the interpretation of
    your paper I think the above needs to be addressed.

    We have the name of a sentence "This sentence is not true"
    We have a sentence about it:

    "This sentence is not true" is true only because the inner sentence is semantically unsound.

    And we have a sentence that is constructed like a lambda expression but
    using something a bit like a de bruijn reference turned inside out:

    This sentence is not true: "This sentence is not true" is true only
    because the inner sentence is semantically unsound.

    but with a noisy surrounding fluff beta-ish-reducing to:

    ["This sentence is not true" is true only because the inner sentence is semantically unsound] is not true


    This sentence is not true: "This sentence is not true" is true.
    You can't put the quotes in a different place without changing
    the semantics.

Here is how "This sentence is not true" is shown to be
semantically unsound in Prolog:

    ?- LP = not(true(LP)).
    LP = not(true(LP)).

    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.
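The occurs-check failure above can be mirrored in a short Python sketch (Python rather than Prolog, purely for illustration; the data representation and function names here are my own, not part of any Prolog implementation): binding a variable to a term that contains that very variable is rejected, which is exactly what the occurs check tests.

```python
def occurs_in(var, term):
    """Return True when variable name `var` appears anywhere in `term`.
    Terms are modeled as strings (variables) or (functor, [args]) pairs."""
    if isinstance(term, str):
        return term == var
    _functor, args = term
    return any(occurs_in(var, arg) for arg in args)

def unify_with_occurs_check(var, term):
    """Bind `var` to `term` unless `term` contains `var`;
    a cyclic binding fails, mirroring Prolog's occurs check."""
    if occurs_in(var, term):
        return None  # binding would be infinite, e.g. LP = not(true(LP))
    return {var: term}

# LP = not(true(LP)): the term on the right contains LP itself,
# so unification with the occurs check fails.
lp = ("not", [("true", ["LP"])])
print(unify_with_occurs_check("LP", lp))  # → None
```

Plain `=` in ISO Prolog omits the occurs check, which is why the first query above succeeds (building a cyclic term) while `unify_with_occurs_check/2` answers `false`.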


    in there we still have the foremost interesting
    outie-de-bruijn-ish-innie reference "the inner sentence" which I suppose therein refers to the referent of the unique syntactically most
    contained nominal phrase that references a sentence, to wit, the
    referent of "This sentence" in "This sentence is not true" in the Noisy.


It's an example of not needing a separate object
language and meta-language that Tarski says is
required.

    then the whole says it's not true that some purported semantic
    unsoundness of what that reference refers to is the sole basis for
    inferring some unstated notion of nontruth about the referent of "This sentence" in "This sentence is not true" in the Noisy.


    Have I understood what you're saying, minister?

    [Why "minister", see https://www.youtube.com/watch?v=qVO85anasrA ]


    The inner sentence is formalized in Minimal
    Type Theory as LP := ~True(LP).
    (where A := B means A is defined as B).

    I very much doubt that but you filled my head with noise. Does your
    Noisy (above) really constrain "This sentence is not true" such that
    your MTT formalisation of it is accurate? You should be noiselessly
    patient, so I should expect so, but I'd like to read that you think it
    was all suitably constraining wordage before I think any further because
    I think it's not an accurate formalisation ("encoding").


    More pedestrianly: Is := symmetric? ie, does A := B entail B := A and B
    := A entail A := B (in the formation rules of sentences of MTT from
    other sentences of MTT)?


    https://philpapers.org/rec/OLCREO

    Can someone review my actual reasoning
    elaborated in the paper?

    No: it is an AI chatlog which is shit.


    It is all my words and my ideas and Claude AI's
    assessment of them. Unlike anyone anywhere else
    LLMs do demonstrate the functional equivalent
    of deep understanding of the notion of:

    cycles in the directed graph of evaluation sequence
    of a formal expression

Everyone here says I am wrong, I am wrong, and I am
wrong, never understanding a single word that I said.



    The huge advantages of LLM systems is that they do not
    begin their review on the basis that [Olcott is wrong]
    is an axiom. No humans have ever been able to do this
    in thousands of reviews across dozens of forums.

I typically begin "everyone is wrong, but /how/ exactly" until I can't
justify my premise any more. Then I stay quiet until I wake up suddenly
realising someone was particularly right in some small way, then I
begrudgingly acknowledge that publicly, and encourage the other wrong
    people to find a way to correct my temporary wrong-blindness.

    That is on the basis that of all the things that can be said almost none
    of them are right with just a few exceptional isolated points dotted around.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Fri Nov 14 01:02:52 2025
    From Newsgroup: comp.ai.philosophy

    On 14/11/2025 00:45, olcott wrote:
    On 11/13/2025 6:09 PM, Tristan Wibberley wrote:
    On 12/11/2025 17:57, olcott wrote:
    On 11/12/2025 8:45 AM, olcott wrote:

    Noisy:

    This sentence is not true: "This sentence is not true"
    is true only because the inner sentence is semantically
    unsound.

    Woah! Because your post provides some meaning for the interpretation of
    your paper I think the above needs to be addressed.

    We have the name of a sentence "This sentence is not true"
    We have a sentence about it:

      "This sentence is not true" is true only because the inner sentence is
    semantically unsound.

    And we have a sentence that is constructed like a lambda expression but
    using something a bit like a de bruijn reference turned inside out:

      This sentence is not true: "This sentence is not true" is true only
    because the inner sentence is semantically unsound.

    but with a noisy surrounding fluff beta-ish-reducing to:

  ["This sentence is not true" is true only because the inner sentence is semantically unsound] is not true


    This sentence is not true: "This sentence is not true" is true.
    You can't put the quotes in a different place without changing
    the semantics.

    You haven't tried to understand what I wrote. You have just guessed at a response.


    --
    Tristan Wibberley


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Fri Nov 14 01:14:10 2025
    From Newsgroup: comp.ai.philosophy

    On 14/11/2025 00:45, olcott wrote:
    ... [is] an example of not needing a separate object
    language and meta-language that Tarski says is
    required.

    You've got that unqualified. Did Tarski say it unqualified?

    --
    Tristan Wibberley


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 13 20:29:59 2025
    From Newsgroup: comp.ai.philosophy

    On 11/13/2025 7:02 PM, Tristan Wibberley wrote:
    On 14/11/2025 00:45, olcott wrote:
    On 11/13/2025 6:09 PM, Tristan Wibberley wrote:
    On 12/11/2025 17:57, olcott wrote:
    On 11/12/2025 8:45 AM, olcott wrote:

    Noisy:

    This sentence is not true: "This sentence is not true"
    is true only because the inner sentence is semantically
    unsound.

    Woah! Because your post provides some meaning for the interpretation of
    your paper I think the above needs to be addressed.

    We have the name of a sentence "This sentence is not true"
    We have a sentence about it:

  "This sentence is not true" is true only because the inner sentence is semantically unsound.

    And we have a sentence that is constructed like a lambda expression but
    using something a bit like a de bruijn reference turned inside out:

      This sentence is not true: "This sentence is not true" is true only
    because the inner sentence is semantically unsound.

    but with a noisy surrounding fluff beta-ish-reducing to:

  ["This sentence is not true" is true only because the inner sentence is semantically unsound] is not true


    This sentence is not true: "This sentence is not true" is true.
    You can't put the quotes in a different place without changing
    the semantics.

    You haven't tried to understand what I wrote. You have just guessed at a response.


    This sentence is not true: "This sentence is not true" is true
    is the essence of the Tarski Undefinability theorem.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 13 20:33:14 2025
    From Newsgroup: comp.ai.philosophy

    On 11/13/2025 7:14 PM, Tristan Wibberley wrote:
    On 14/11/2025 00:45, olcott wrote:
    ... [is] an example of not needing a separate object
    language and meta-language that Tarski says is
    required.

    You've got that unqualified. Did Tarski say it unqualified?


To express the entire body of knowledge that can
be expressed in language, only a single language
is needed. Tarski disagreed; he was wrong, and I
showed how and why in my discussion with Claude.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Fri Nov 14 13:09:28 2025
    From Newsgroup: comp.ai.philosophy

    On 14/11/2025 02:29, olcott wrote:

    This sentence is not true: "This sentence is not true" is true
    is the essence of the Tarski Undefinability theorem.

    Once again, you've just guessed at a response instead of trying to
    understand.


    --
    Tristan Wibberley


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Fri Nov 14 07:42:23 2025
    From Newsgroup: comp.ai.philosophy

    On 11/14/2025 7:09 AM, Tristan Wibberley wrote:
    On 14/11/2025 02:29, olcott wrote:

    This sentence is not true: "This sentence is not true" is true
    is the essence of the Tarski Undefinability theorem.

    Once again, you've just guessed at a response instead of trying to understand.


    This sentence is not true: "This sentence is not true" is true.

    It took me four years to simplify Tarski's proof
    down to that.

    Here is his actual proof
https://liarparadox.org/Tarski_247_248.pdf
https://liarparadox.org/Tarski_275_276.pdf

    Here is what cancels his proof

    ?- LP = not(true(LP)).
    LP = not(true(LP)).
    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Fri Nov 14 10:45:06 2025
    From Newsgroup: comp.ai.philosophy

    On 11/13/2025 6:09 PM, Tristan Wibberley wrote:
    On 12/11/2025 17:57, olcott wrote:
    On 11/12/2025 8:45 AM, olcott wrote:

    Noisy:

    This sentence is not true: "This sentence is not true"
    is true only because the inner sentence is semantically
    unsound.

    Woah! Because your post provides some meaning for the interpretation of
    your paper I think the above needs to be addressed.

    We have the name of a sentence "This sentence is not true"
    We have a sentence about it:

    "This sentence is not true" is true only because the inner sentence is semantically unsound.

    And we have a sentence that is constructed like a lambda expression but
    using something a bit like a de bruijn reference turned inside out:

    This sentence is not true: "This sentence is not true" is true only
    because the inner sentence is semantically unsound.

    but with a noisy surrounding fluff beta-ish-reducing to:

    ["This sentence is not true" is true only because the inner sentence is semantically unsound] is not true


This wording is not precisely accurate. People could
get confused and think that "not true" means false.
In the case of the Liar Paradox, "not true" means
semantic gibberish.

    in there we still have the foremost interesting
    outie-de-bruijn-ish-innie reference "the inner sentence" which I suppose therein refers to the referent of the unique syntactically most
    contained nominal phrase that references a sentence, to wit, the
    referent of "This sentence" in "This sentence is not true" in the Noisy.


    It is not at all noisy. It merely states the key essence.

    then the whole says it's not true that some purported semantic
    unsoundness of what that reference refers to is the sole basis for
    inferring some unstated notion of nontruth about the referent of "This sentence" in "This sentence is not true" in the Noisy.


    It has taken humanity 2000 years and there are
    currently zero accepted resolutions.

Saying that the liar paradox is untrue keeps the
confusion going. Saying that it is semantic gibberish
stops the confusion. Saying it this way is not noisy;
it is the succinct key essence of its resolution.


    Have I understood what you're saying, minister?

    [Why "minister", see https://www.youtube.com/watch?v=qVO85anasrA ]


    The inner sentence is formalized in Minimal
    Type Theory as LP := ~True(LP).
    (where A := B means A is defined as B).

    I very much doubt that but you filled my head with noise. Does your
    Noisy (above) really constrain "This sentence is not true" such that
    your MTT formalisation of it is accurate?

It is; as the Prolog proves, the evaluation is
stuck in an infinite evaluation loop.

    LP := ~True(LP)
    00 ~ 01
    01 True 00 // cycle

    ?- LP = not(true(LP)).
    LP = not(true(LP)).
    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.
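The two-node evaluation graph above (00 → 01 → 00) can be checked for a cycle mechanically. This is a minimal depth-first-search sketch in Python; the node labels are just the ones from the post, and the function is my own illustration, not an algorithm taken from MTT or Prolog.

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph given as {node: [successors]},
    using depth-first search with an explicit recursion stack."""
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:   # back edge: a cycle was found
            return True
        if node in done:
            return False
        visiting.add(node)
        for nxt in graph.get(node, []):
            if dfs(nxt):
                return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(dfs(n) for n in list(graph))

# LP := ~True(LP): node 00 (the ~) points to node 01 (True),
# which points back to 00, so the evaluation sequence cycles.
evaluation_graph = {"00": ["01"], "01": ["00"]}
print(has_cycle(evaluation_graph))  # → True
```

The same function returns False for any acyclic graph, e.g. `{"a": ["b"], "b": []}`, which would correspond to an expression whose evaluation sequence terminates.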



    You should be noiselessly
    patient, so I should expect so, but I'd like to read that you think it
    was all suitably constraining wordage before I think any further because
    I think it's not an accurate formalisation ("encoding").


    More pedestrianly: Is := symmetric? ie, does A := B entail B := A and B
    := A entail A := B (in the formation rules of sentences of MTT from
    other sentences of MTT)?


    https://philpapers.org/rec/OLCREO

    Can someone review my actual reasoning
    elaborated in the paper?

    No: it is an AI chatlog which is shit.


You say this without even looking at the conversation,
thus your assessment has no basis. If you had a basis
    then you could point out specific errors.

    LLM systems have become 67-fold smarter in the last
    24 months in that the size of their context window
    has increased that much.



    The huge advantages of LLM systems is that they do not
    begin their review on the basis that [Olcott is wrong]
    is an axiom. No humans have ever been able to do this
    in thousands of reviews across dozens of forums.

I typically begin "everyone is wrong, but /how/ exactly" until I can't
justify my premise any more. Then I stay quiet until I wake up suddenly
realising someone was particularly right in some small way, then I
begrudgingly acknowledge that publicly, and encourage the other wrong
    people to find a way to correct my temporary wrong-blindness.

    That is on the basis that of all the things that can be said almost none
    of them are right with just a few exceptional isolated points dotted around.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Fri Nov 14 14:30:12 2025
    From Newsgroup: comp.ai.philosophy

    On 11/14/2025 2:09 PM, Kaz Kylheku wrote:
    On 2025-11-14, Alan Mackenzie <acm@muc.de> wrote:
    olcott <polcott333@gmail.com> wrote:

99% of experts will reject something that does not conform
to conventional wisdom without even looking at it.

    They've got better things to do with their time than continually refuting
    falsehoods which contradict proven basics.

    LLM systems will look at something that does not conform
    to conventional wisdom and form their own proof that this
idea is correct, showing every detail of every step of this proof.

    https://www.researchgate.net/publication/396916355_Halting_Problem_Simulation_in_C

    Then why are you posting on Usenet, where people aren't writing what you
    want them to write? Why not stick to these LLM systems which reply
    what you want them to reply?

    Because he knows they are bullshit that is programmed to agree with
    the user if the user persists in fighting through pushback.


    Once an LLM proves that I am correct and that everyone
    else doesn't have a clue I need to make my words clear
    enough so that they can be understood by human reviewers.

    I won't directly get credibility from LLMs until everyone
    trusts them. Because of LLMs I can test and retest my words
    to find the most succinct combination that completely
    proves my point.

    People here would much rather assume that they are already
    correct than to bother verifying anything.

    The early versions of GPT-4 integrated into Microsoft Edge were
    better! That was programmed to detect argumentative cranks and
    end the conversation.


    Current LLMs can follow reasoning and anchor this reasoning
    to well-known facts, proving that this reasoning is sound.

    It was an essential feature that should continue to be implemented
    in new LLM chat agents, in spite of more generous token limits.

    Even in paid service, for that matter.

    If the user is persisting through more than three or four rounds
    of factual pushback, "This conversation is not productive; perhaps
    I can help you with something else" and that's it.


    Except that every push-back is addressed with an increasingly
    deep understanding of my view, which it eventually agrees with.
    There was a lot of push-back in the dialogue:

    https://www.researchgate.net/publication/397442168_How_pathological_self-reference_is_confused_with_undecidability

    If you weren't so damned sure that I must be wrong
    you could see that.

    Cranks like Olcott would get squat all agreement out of that.

    Chat AI that talks endlessly and lets itself be overwhelmed
    is a public disservice. It's not as egregious as supporting someone
    in planning to harm oneself or others, but it's in the same vein.
    Agreeing with someone's bullshit after forty rounds is a palpable perpetration of social harm.

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Alan Mackenzie@acm@muc.de to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Fri Nov 14 20:43:37 2025
    From Newsgroup: comp.ai.philosophy

    [ Followup-To: set ]

    In comp.theory olcott <polcott333@gmail.com> wrote:

    [ .... ]

    People here would much rather assume that they are already
    correct than to bother verifying anything.

    People here verified the proofs of things like the Halting Theorem years
    ago, if not decades ago. That verification remains eternally valid.

    [ .... ]

    If you weren't so damned sure that I must be wrong
    you could see that.

    I'm not damned sure you're wrong; I know it for an absolutely proven
    fact. Having verified a mathematical proof of a theorem, there is no
    need to even look at your arguments disagreeing with it. People do,
    though. See above.

    [ .... ]

    --
    Alan Mackenzie (Nuremberg, Germany).

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Sun Nov 16 14:37:07 2025
    From Newsgroup: comp.ai.philosophy

    On 11/16/2025 2:55 AM, Mikko wrote:
    On 2025-11-15 15:51:45 +0000, olcott said:

    On 11/15/2025 3:56 AM, Mikko wrote:
    On 2025-11-14 14:33:11 +0000, olcott said:

    On 11/14/2025 2:53 AM, Mikko wrote:
    On 2025-11-13 16:06:50 +0000, olcott said:

    On 11/13/2025 3:18 AM, Mikko wrote:
    On 2025-11-12 18:12:44 +0000, Alan Mackenzie said:

    [ Followup-To: set ]

    In comp.theory olcott <polcott333@gmail.com> wrote:

    [ .... ]

    The huge advantages of LLM systems is that they do not
    begin their review on the basis that [Olcott is wrong]
    is an axiom. No humans have ever been able to do this
    in thousands of reviews across dozens of forums.

    The huge disadvantage of LLM systems is that they begin their
    review on the basis that Olcott is right. Intelligent people do
    not do this. They evaluate what Olcott has written and pronounce
    it either right or (much more usually) wrong.

    Honest intelligent people don't pronounce anything they haven't
    seen before right. The nearest they can say is "no obvious errors"
    or "looks good" or something that means the same. To actually
    check something takes more time and work.

    Most people are sheep when they see something that does
    not conform to conventional wisdom they reject it.

    Syntax error. There are three clauses but it is not clear which
    words belong to which.

    It is not a good idea to reject conventional or other wisdom
    without a good reason. Even with a good reason it is not a good
    idea to reject more than what the good reason requires.

    99% of experts will reject something that does not conform
    to conventional wisdom without even looking at it.

    How many experts did you ask? If you only asked 100 experts then 99% is an
    inaccurate result.

    I asked about 200 experts in dozens of different forums
    and all of them rejected my ideas out-of-hand without
    even looking at them.

    How did you determine "without even looking at them"?


    A tenured PhD computer science professor
    has a very well documented equivalent
    experience many different times in many
    different ways.

    "Something is wrong with the halting problem"
    is immediately translated into {crackpot}.

    The way that I can tell in my hundreds of
    cases is that they said I was wrong and
    never provided any reasoning whatsoever
    about how and why I was wrong.

    Technical people in the fields of computer
    science, math and logic have an emotional
    attachment to the foundational assumptions
    that is equivalent to a religion.

    On the sole basis that the reasoning is correct
    within these foundational assumptions they
    construe this as absolute proof that these
    assumptions are true.

    It is like they don't have a clue that sound
    deduction is not the same as valid deduction.

    A deductive argument is sound if and only if
    it is both valid, and all of its premises are
    actually true. Otherwise, a deductive argument
    is unsound. https://iep.utm.edu/val-snd/
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Nov 26 09:27:06 2025
    From Newsgroup: comp.ai.philosophy

    On 11/26/2025 4:30 AM, Mikko wrote:
    olcott kirjoitti 14.11.2025 klo 16.42:
    On 11/14/2025 3:01 AM, Mikko wrote:
    On 2025-11-13 16:00:58 +0000, olcott said:

    On 11/13/2025 3:05 AM, Mikko wrote:
    On 2025-11-12 14:45:34 +0000, olcott said:

    Rejecting expressions of formal language
    having pathological self-reference

    Explained how expressions with pathological self
    reference can simply be rejected as semantically/
    syntactically unsound thus preventing undefinability,
    and undecidability.

    This sentence is not true: "This sentence is not true"
    is true only because the inner sentence is semantically
    unsound. The inner sentence is formalized in Minimal
    Type Theory as LP := ~True(LP).
    (where A := B means A is defined as B).

    https://philpapers.org/rec/OLCREO

    Can someone review my actual reasoning
    elaborated in the paper?

    If you want to use the term "formal language" you must prove that
    there is a Turing machine that can determine whether a string is a
    valid sentence of your language. If no such Turing machine exists
    you have no justification for the use of the word "formal".

    Been there done that and provided all the details.

    Where? At least not where the above pointer points to.


    In the paper that you failed to read.
    https://www.researchgate.net/publication/397533139_Rejecting_expressions_of_formal_language_having_pathological_self-reference

    In that article there is no proof that there is a Turing machine
    that can determine whether a string is a valid sentence of your
    language. The article does not even mention Turing machines.


    It is all the deep meaning of unify_with_occurs_check()
    that rejects an expression as semantically unsound
    because its evaluation is stuck in an infinite loop.


    The Liar Paradox formalized in the Prolog Programming language

    This sentence is not true.
    It is not true about what?
    It is not true about being not true.
    It is not true about being not true about what?
    It is not true about being not true about being not true.
    Oh I see you are stuck in a loop!

    This is formalized in the Prolog programming language
    ?- LP = not(true(LP)).
    LP = not(true(LP)).
    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.

    Failing an occurs check seems to mean that the resolution of an
    expression remains stuck in an infinite loop. Just as the formalized
    Prolog query determines that there is a cycle in the directed graph
    of the evaluation sequence of LP, the simple English proves that the
    Liar Paradox never gets to the point. It has merely been semantically
    unsound all these years.
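    Not from the thread: a minimal Python sketch of the occurs-check idea
    under discussion. Terms are encoded here as tuples purely for
    illustration (real Prolog represents terms natively); binding a
    variable to a term that contains that same variable is rejected,
    mirroring why unify_with_occurs_check(LP, not(true(LP))) answers false.

    ```python
    # Toy occurs check (illustrative sketch, not Prolog's implementation).
    # A variable is ("var", name); a compound term is (functor, arg, ...).

    def occurs(var, term):
        """Return True if the variable occurs anywhere inside term."""
        if term == var:
            return True
        if isinstance(term, tuple) and term[0] != "var":
            return any(occurs(var, arg) for arg in term[1:])
        return False

    def unify_with_occurs_check(var, term):
        """Reject a binding that would create a cyclic (infinite) term."""
        return not occurs(var, term)

    LP = ("var", "LP")
    print(unify_with_occurs_check(LP, ("not", ("true", LP))))   # False
    print(unify_with_occurs_check(LP, ("not", ("true", ("var", "X")))))  # True
    ```

    SWI-Prolog's plain = succeeds on the first query by building a cyclic
    (rational-tree) term, which seems to be why the transcript shows
    LP = not(true(LP)) succeeding while the occurs-check version fails.
    
    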
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.

    This required establishing a new foundation
    for correct reasoning.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Nov 26 19:46:27 2025
    From Newsgroup: comp.ai.philosophy

    On 26/11/2025 15:27, olcott wrote:
    This is formalized in the Prolog programming language
    ?- LP = not(true(LP)).
    LP = not(true(LP)).
    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.

    Why do you keep posting that?
    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Nov 26 14:07:35 2025
    From Newsgroup: comp.ai.philosophy

    On 11/26/2025 1:46 PM, Tristan Wibberley wrote:
    On 26/11/2025 15:27, olcott wrote:
    This is formalized in the Prolog programming language
    ?- LP = not(true(LP)).
    LP = not(true(LP)).
    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.

    Why do you keep posting that?


    Until it is accepted or someone correctly shows
    how it does not once-and-for-all resolve
    the Liar Paradox I will keep posting it.

    The ONLY way that I can possibly have
    any success with these kind of things
    is to have 1000-fold more persistence
    than the next most persistent person.
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.

    This required establishing a new foundation
    for correct reasoning.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Nov 26 21:00:28 2025
    From Newsgroup: comp.ai.philosophy

    On 11/26/25 3:07 PM, olcott wrote:
    On 11/26/2025 1:46 PM, Tristan Wibberley wrote:
    On 26/11/2025 15:27, olcott wrote:
    This is formalized in the Prolog programming language
    ?- LP = not(true(LP)).
    LP = not(true(LP)).
    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.

    Why do you keep posting that?


    Until it is accepted or someone correctly shows
    how it does not once-and-for-all resolve
    the Liar Paradox I will keep posting it.

    The ONLY way that I can possibly have
    any success with these kind of things
    is to have 1000-fold more persistence
    than the next most persistent person.



    But since the statement isn't the statement G of Godel, it is just a red herring.

    The problem is that Prolog just can't handle the logic of Godel, and it
    seems neither can you.

    It seems you just don't understand how logic works, but just try to
    flim-flam people with fancy words that you just don't understand.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Nov 27 10:20:46 2025
    From Newsgroup: comp.ai.philosophy

    olcott kirjoitti 26.11.2025 klo 17.27:
    On 11/26/2025 4:30 AM, Mikko wrote:
    olcott kirjoitti 14.11.2025 klo 16.42:
    On 11/14/2025 3:01 AM, Mikko wrote:
    On 2025-11-13 16:00:58 +0000, olcott said:

    On 11/13/2025 3:05 AM, Mikko wrote:
    On 2025-11-12 14:45:34 +0000, olcott said:

    Rejecting expressions of formal language
    having pathological self-reference

    Explained how expressions with pathological self
    reference can simply be rejected as semantically/
    syntactically unsound thus preventing undefinability,
    and undecidability.

    This sentence is not true: "This sentence is not true"
    is true only because the inner sentence is semantically
    unsound. The inner sentence is formalized in Minimal
    Type Theory as LP := ~True(LP).
    (where A := B means A is defined as B).

    https://philpapers.org/rec/OLCREO

    Can someone review my actual reasoning
    elaborated in the paper?

    If you want to use the term "formal language" you must prove that
    there is a Turing machine that can determine whether a string is a
    valid sentence of your language. If no such Turing machine exists
    you have no justification for the use of the word "formal".

    Been there done that and provided all the details.

    Where? At least not where the above pointer points to.


    In the paper that you failed to read.
    https://www.researchgate.net/publication/397533139_Rejecting_expressions_of_formal_language_having_pathological_self-reference

    In that article there is no proof that there is a Turing machine
    that can determine whether a string is a valid sentence of your
    language. The article does not even mention Turing machines.


    It is all the deep meaning of unify_with_occurs_check()
    that rejects an expression as semantically unsound
    because its evaluation is stuck in an infinite loop.


    The Liar Paradox formalized in the Prolog Programming language

    This sentence is not true.
    It is not true about what?
    It is not true about being not true.
    It is not true about being not true about what?
    It is not true about being not true about being not true.
    Oh I see you are stuck in a loop!

    This is formalized in the Prolog programming language
    ?- LP = not(true(LP)).
    LP = not(true(LP)).
    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.

    Failing an occurs check seems to mean that the resolution of an
    expression remains stuck in an infinite loop. Just as the formalized
    Prolog query determines that there is a cycle in the directed graph
    of the evaluation sequence of LP, the simple English proves that the
    Liar Paradox never gets to the point. It has merely been semantically
    unsound all these years.

    All of that is irrelevant to your lie about where the proof is.
    That you chose to lie about it and then to distract is a strong
    indication that you have no proof but prefer to lie otherwise.
    --
    Mikko
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Nov 27 10:22:40 2025
    From Newsgroup: comp.ai.philosophy

    Tristan Wibberley kirjoitti 26.11.2025 klo 21.46:
    On 26/11/2025 15:27, olcott wrote:
    This is formalized in the Prolog programming language
    ?- LP = not(true(LP)).
    LP = not(true(LP)).
    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.

    Why do you keep posting that?

    In order to distract from the facts he doesn't want to face.
    --
    Mikko
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Nov 27 00:39:24 2025
    From Newsgroup: comp.ai.philosophy

    On 11/27/2025 12:22 AM, Mikko wrote:
    Tristan Wibberley kirjoitti 26.11.2025 klo 21.46:
    On 26/11/2025 15:27, olcott wrote:
    This is formalized in the Prolog programming language
    ?- LP = not(true(LP)).
    LP = not(true(LP)).
    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.

    Why do you keep posting that?

    In order to distract from the facts he doesn't want to face.


    ditto.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Nov 27 09:49:41 2025
    From Newsgroup: comp.ai.philosophy

    On 11/27/2025 2:20 AM, Mikko wrote:
    olcott kirjoitti 26.11.2025 klo 17.27:
    On 11/26/2025 4:30 AM, Mikko wrote:
    olcott kirjoitti 14.11.2025 klo 16.42:
    On 11/14/2025 3:01 AM, Mikko wrote:
    On 2025-11-13 16:00:58 +0000, olcott said:

    On 11/13/2025 3:05 AM, Mikko wrote:
    On 2025-11-12 14:45:34 +0000, olcott said:

    Rejecting expressions of formal language
    having pathological self-reference

    Explained how expressions with pathological self
    reference can simply be rejected as semantically/
    syntactically unsound thus preventing undefinability,
    and undecidability.

    This sentence is not true: "This sentence is not true"
    is true only because the inner sentence is semantically
    unsound. The inner sentence is formalized in Minimal
    Type Theory as LP := ~True(LP).
    (where A := B means A is defined as B).

    https://philpapers.org/rec/OLCREO

    Can someone review my actual reasoning
    elaborated in the paper?

    If you want to use the term "formal language" you must prove that
    there is a Turing machine that can determine whether a string is a
    valid sentence of your language. If no such Turing machine exists
    you have no justification for the use of the word "formal".

    Been there done that and provided all the details.

    Where? At least not where the above pointer points to.


    In the paper that you failed to read.
    https://www.researchgate.net/publication/397533139_Rejecting_expressions_of_formal_language_having_pathological_self-reference

    In that article there is no proof that there is a Turing machine
    that can determine whether a string is a valid sentence of your
    language. The article does not even mention Turing machines.


    It is all the deep meaning of unify_with_occurs_check()
    that rejects an expression as semantically unsound
    because its evaluation is stuck in an infinite loop.


    The Liar Paradox formalized in the Prolog Programming language

    This sentence is not true.
    It is not true about what?
    It is not true about being not true.
    It is not true about being not true about what?
    It is not true about being not true about being not true.
    Oh I see you are stuck in a loop!

    This is formalized in the Prolog programming language
    ?- LP = not(true(LP)).
    LP = not(true(LP)).
    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.

    Failing an occurs check seems to mean that the resolution of an
    expression remains stuck in an infinite loop. Just as the formalized
    Prolog determines that there is a cycle in the directed graph of the
    evaluation sequence of LP, the simple English proves that the Liar
    Paradox never gets to the point. It has merely been semantically
    unsound all these years.

    All of that is irrelevant to your lie about where the proof is.
    That you chose to lie about it and then to distract is a strong
    indication that you have no proof but prefer to lie otherwise.


    Formalized in Olcott's Minimal Type Theory
    LP := ~True(LP) // LP {is defined as} ~True(LP)
    that expands to ~True(~True(~True(~True(~True(~True(...))))))
    https://philarchive.org/archive/PETMTT-4v2

    The Liar Paradox fails because it specifies
    infinite recursion.
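    The claimed expansion can be illustrated with a small textual
    substitution (a hypothetical sketch, not the author's MTT tooling):
    repeatedly substituting the definition LP := ~True(LP) into itself
    never reaches a base case.

    ```python
    # Hypothetical illustration (not MTT tooling): textually expanding
    # the definition LP := ~True(LP) by substituting it into itself.

    def expand(n):
        """Apply the definition n times; an unexpanded 'LP' always remains."""
        s = "LP"
        for _ in range(n):
            s = s.replace("LP", "~True(LP)")
        return s

    print(expand(3))  # → ~True(~True(~True(LP)))
    ```

    However many times the definition is applied, an unexpanded
    occurrence of LP remains inside, which is the infinite-regress
    point being made.
    
    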
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.

    This required establishing a new foundation
    for correct reasoning.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Nov 27 12:27:11 2025
    From Newsgroup: comp.ai.philosophy

    On 11/27/25 10:49 AM, olcott wrote:
    On 11/27/2025 2:20 AM, Mikko wrote:
    olcott kirjoitti 26.11.2025 klo 17.27:
    On 11/26/2025 4:30 AM, Mikko wrote:
    olcott kirjoitti 14.11.2025 klo 16.42:
    On 11/14/2025 3:01 AM, Mikko wrote:
    On 2025-11-13 16:00:58 +0000, olcott said:

    On 11/13/2025 3:05 AM, Mikko wrote:
    On 2025-11-12 14:45:34 +0000, olcott said:

    Rejecting expressions of formal language
    having pathological self-reference

    Explained how expressions with pathological self
    reference can simply be rejected as semantically/
    syntactically unsound thus preventing undefinability,
    and undecidability.

    This sentence is not true: "This sentence is not true"
    is true only because the inner sentence is semantically
    unsound. The inner sentence is formalized in Minimal
    Type Theory as LP := ~True(LP).
    (where A := B means A is defined as B).

    https://philpapers.org/rec/OLCREO

    Can someone review my actual reasoning
    elaborated in the paper?

    If you want to use the term "formal language" you must prove that
    there is a Turing machine that can determine whether a string is a
    valid sentence of your language. If no such Turing machine exists
    you have no justification for the use of the word "formal".

    Been there done that and provided all the details.

    Where? At least not where the above pointer points to.


    In the paper that you failed to read.
    https://www.researchgate.net/publication/397533139_Rejecting_expressions_of_formal_language_having_pathological_self-reference

    In that article there is no proof that there is a Turing machine
    that can determine whether a string is a valid sentence of your
    language. The article does not even mention Turing machines.


    It is all the deep meaning of unify_with_occurs_check()
    that rejects an expression as semantically unsound
    because its evaluation is stuck in an infinite loop.


    The Liar Paradox formalized in the Prolog Programming language

    This sentence is not true.
    It is not true about what?
    It is not true about being not true.
    It is not true about being not true about what?
    It is not true about being not true about being not true.
    Oh I see you are stuck in a loop!

    This is formalized in the Prolog programming language
    ?- LP = not(true(LP)).
    LP = not(true(LP)).
    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.

    Failing an occurs check seems to mean that the resolution of an
    expression remains stuck in an infinite loop. Just as the formalized
    Prolog determines that there is a cycle in the directed graph of the
    evaluation sequence of LP, the simple English proves that the Liar
    Paradox never gets to the point. It has merely been semantically
    unsound all these years.

    All of that is irrelevant to your lie about where the proof is.
    That you chose to lie about it and then to distract is a strong
    indication that you have no proof but prefer to lie otherwise.


    Formalized in Olcott's Minimal Type Theory
    LP := ~True(LP)    // LP {is defined as} ~True(LP)
    that expands to ~True(~True(~True(~True(~True(~True(...))))))
    https://philarchive.org/archive/PETMTT-4v2

    The Liar Paradox fails because it specifies
    infinite recursion.


    In other words, Olcott's Minimal Type Theory is insufficient to express
    some statements that have logical values.

    Sorry, all you are doing is proving that you don't know what you are
    talking about.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Fri Nov 28 10:45:01 2025
    From Newsgroup: comp.ai.philosophy

    olcott kirjoitti 27.11.2025 klo 17.49:
    On 11/27/2025 2:20 AM, Mikko wrote:
    olcott kirjoitti 26.11.2025 klo 17.27:
    On 11/26/2025 4:30 AM, Mikko wrote:
    olcott kirjoitti 14.11.2025 klo 16.42:
    On 11/14/2025 3:01 AM, Mikko wrote:
    On 2025-11-13 16:00:58 +0000, olcott said:

    On 11/13/2025 3:05 AM, Mikko wrote:
    On 2025-11-12 14:45:34 +0000, olcott said:

    Rejecting expressions of formal language
    having pathological self-reference

    Explained how expressions with pathological self
    reference can simply be rejected as semantically/
    syntactically unsound thus preventing undefinability,
    and undecidability.

    This sentence is not true: "This sentence is not true"
    is true only because the inner sentence is semantically
    unsound. The inner sentence is formalized in Minimal
    Type Theory as LP := ~True(LP).
    (where A := B means A is defined as B).

    https://philpapers.org/rec/OLCREO

    Can someone review my actual reasoning
    elaborated in the paper?

    If you want to use the term "formal language" you must prove that
    there is a Turing machine that can determine whether a string is a
    valid sentence of your language. If no such Turing machine exists
    you have no justification for the use of the word "formal".

    Been there done that and provided all the details.

    Where? At least not where the above pointer points to.


    In the paper that you failed to read.
    https://www.researchgate.net/publication/397533139_Rejecting_expressions_of_formal_language_having_pathological_self-reference

    In that article there is no proof that there is a Turing machine
    that can determine whether a string is a valid sentence of your
    language. The article does not even mention Turing machines.


    It is all the deep meaning of unify_with_occurs_check()
    that rejects an expression as semantically unsound
    because its evaluation is stuck in an infinite loop.


    The Liar Paradox formalized in the Prolog Programming language

    This sentence is not true.
    It is not true about what?
    It is not true about being not true.
    It is not true about being not true about what?
    It is not true about being not true about being not true.
    Oh I see you are stuck in a loop!

    This is formalized in the Prolog programming language
    ?- LP = not(true(LP)).
    LP = not(true(LP)).
    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.

    Failing an occurs check seems to mean that the resolution of an
    expression remains stuck in an infinite loop. Just as the formalized
    Prolog determines that there is a cycle in the directed graph of the
    evaluation sequence of LP, the simple English proves that the Liar
    Paradox never gets to the point. It has merely been semantically
    unsound all these years.

    All of that is irrelevant to your lie about where the proof is.
    That you chose to lie about it and then to distract is a strong
    indication that you have no proof but prefer to lie otherwise.


    Formalized in Olcott's Minimal Type Theory
    LP := ~True(LP)    // LP {is defined as} ~True(LP)
    that expands to ~True(~True(~True(~True(~True(~True(...))))))
    https://philarchive.org/archive/PETMTT-4v2

    The Liar Paradox fails because it specifies
    infinite recursion.

    Irrelevant to your lie about where the proof is. That you chose to lie
    about it and then to distract is a strong indication that you have no
    proof but prefer to lie otherwise.
    --
    Mikko
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Fri Nov 28 09:22:00 2025
    From Newsgroup: comp.ai.philosophy

    On 11/28/2025 2:45 AM, Mikko wrote:
    olcott kirjoitti 27.11.2025 klo 17.49:
    On 11/27/2025 2:20 AM, Mikko wrote:
    olcott kirjoitti 26.11.2025 klo 17.27:
    On 11/26/2025 4:30 AM, Mikko wrote:
    olcott kirjoitti 14.11.2025 klo 16.42:
    On 11/14/2025 3:01 AM, Mikko wrote:
    On 2025-11-13 16:00:58 +0000, olcott said:

    On 11/13/2025 3:05 AM, Mikko wrote:
    On 2025-11-12 14:45:34 +0000, olcott said:

    Rejecting expressions of formal language
    having pathological self-reference

    Explained how expressions with pathological self
    reference can simply be rejected as semantically/
    syntactically unsound thus preventing undefinability,
    and undecidability.

    This sentence is not true: "This sentence is not true"
    is true only because the inner sentence is semantically
    unsound. The inner sentence is formalized in Minimal
    Type Theory as LP := ~True(LP).
    (where A := B means A is defined as B).

    https://philpapers.org/rec/OLCREO

    Can someone review my actual reasoning
    elaborated in the paper?

    If you want to use the term "formal language" you must prove that
    there is a Turing machine that can determine whether a string is a
    valid sentence of your language. If no such Turing machine exists
    you have no justification for the use of the word "formal".

    Been there done that and provided all the details.

    Where? At least not where the above pointer points to.


    In the paper that you failed to read.
    https://www.researchgate.net/publication/397533139_Rejecting_expressions_of_formal_language_having_pathological_self-reference

    In that article there is no proof that there is a Turing machine
    that can determine whether a string is a valid sentence of your
    language. The article does not even mention Turing machines.


    It is all the deep meaning of unify_with_occurs_check()
    that rejects an expression as semantically unsound
    because its evaluation is stuck in an infinite loop.


    The Liar Paradox formalized in the Prolog Programming language

    This sentence is not true.
    It is not true about what?
    It is not true about being not true.
    It is not true about being not true about what?
    It is not true about being not true about being not true.
    Oh I see you are stuck in a loop!

    This is formalized in the Prolog programming language
    ?- LP = not(true(LP)).
    LP = not(true(LP)).
    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.

    Failing an occurs check seems to mean that the resolution of the
    expression remains stuck in an infinite loop. Just as Prolog
    determines that there is a cycle in the directed graph of the
    evaluation sequence of LP, the simple English dialogue above shows
    that the Liar Paradox never gets to the point. It has merely been
    semantically unsound all these years.
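    The behavior of Prolog's unify_with_occurs_check/2 can be sketched
    in Python. This is a minimal illustration of first-order unification
    with an occurs check, not SWI-Prolog's actual implementation; the
    Var class and tuple-based term representation are assumptions made
    for the example.

    ```python
    class Var:
        """A logic variable, identified by name."""
        def __init__(self, name):
            self.name = name

    def walk(t, subst):
        """Follow variable bindings to the term's current value."""
        while isinstance(t, Var) and t.name in subst:
            t = subst[t.name]
        return t

    def occurs(v, t, subst):
        """True if variable v occurs inside term t under subst."""
        t = walk(t, subst)
        if isinstance(t, Var):
            return t.name == v.name
        if isinstance(t, tuple):  # compound term: (functor, arg1, ...)
            return any(occurs(v, a, subst) for a in t[1:])
        return False

    def unify(a, b, subst):
        """Return an extended substitution, or None on failure."""
        a, b = walk(a, subst), walk(b, subst)
        if isinstance(a, Var):
            if isinstance(b, Var) and a.name == b.name:
                return subst
            if occurs(a, b, subst):  # occurs check: reject cyclic binding
                return None
            return {**subst, a.name: b}
        if isinstance(b, Var):
            return unify(b, a, subst)
        if (isinstance(a, tuple) and isinstance(b, tuple)
                and len(a) == len(b) and a[0] == b[0]):
            for x, y in zip(a[1:], b[1:]):
                subst = unify(x, y, subst)
                if subst is None:
                    return None
            return subst
        return subst if a == b else None

    LP = Var("LP")
    # LP = not(true(LP)) fails the occurs check, as in the Prolog query
    print(unify(LP, ("not", ("true", LP)), {}))  # prints None, i.e. "false."
    ```

    The occurs check fires because LP appears inside the very term it
    would be bound to, so any binding would be cyclic.
    
    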

    All of that is irrelevant to your lie about where the proof is.
    That you chose to lie about it and then to distract is a strong
    indication that you have no proof but prefer to lie otherwise.


    Formalized in Olcott's Minimal Type Theory
    LP := ~True(LP)    // LP {is defined as} ~True(LP)
    that expands to ~True(~True(~True(~True(~True(~True(...))))))
    https://philarchive.org/archive/PETMTT-4v2

    The Liar Paradox fails because it specifies
    infinite recursion.

    Irrelevant to your lie about where the proof is. That you chose to lie
    about it and then to distract is a strong indication that you have no
    proof but prefer to lie otherwise.


    In other words you are simply ignoring this ~True(~True(~True(~True(~True(~True(...))))))
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.

    This required establishing a new foundation
    for correct reasoning.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Sat Nov 29 12:28:04 2025
    From Newsgroup: comp.ai.philosophy

    olcott kirjoitti 28.11.2025 klo 17.22:
    On 11/28/2025 2:45 AM, Mikko wrote:
    olcott kirjoitti 27.11.2025 klo 17.49:
    On 11/27/2025 2:20 AM, Mikko wrote:
    olcott kirjoitti 26.11.2025 klo 17.27:
    On 11/26/2025 4:30 AM, Mikko wrote:
    olcott kirjoitti 14.11.2025 klo 16.42:
    On 11/14/2025 3:01 AM, Mikko wrote:
    On 2025-11-13 16:00:58 +0000, olcott said:

    On 11/13/2025 3:05 AM, Mikko wrote:
    On 2025-11-12 14:45:34 +0000, olcott said:

    Rejecting expressions of formal language
    having pathological self-reference

    Explained how expressions with pathological self
    reference can simply be rejected as semantically/
    syntactically unsound thus preventing undefinability,
    and undecidability.

    This sentence is not true: "This sentence is not true"
    is true only because the inner sentence is semantically
    unsound. The inner sentence is formalized in Minimal
    Type Theory as LP := ~True(LP).
    (where A := B means A is defined as B).

    https://philpapers.org/rec/OLCREO

    Can someone review my actual reasoning
    elaborated in the paper?

    If you want to use the term "formal language" you must prove that
    there is a Turing machine that can determine whether a string is a
    valid sentence of your language. If no such Turing machine exists
    you have no justification for the use of the word "formal".

    Been there done that and provided all the details.

    Where? At least not where the above pointer points to.


    In the paper that you failed to read.
    https://www.researchgate.net/
    publication/397533139_Rejecting_expressions_of_formal_language_having_pathological_self-reference

    In that article there is no proof that there is a Turing machine
    that can determine whether a string is a valid sentence of your
    language. The article does not even mention Turing machines.

    It is all the deep meaning of unify_with_occurs_check()
    that rejects an expression as semantically unsound
    because its evaluation is stuck in an infinite loop.


    The Liar Paradox formalized in the Prolog Programming language

    This sentence is not true.
    It is not true about what?
    It is not true about being not true.
    It is not true about being not true about what?
    It is not true about being not true about being not true.
    Oh I see you are stuck in a loop!

    This is formalized in the Prolog programming language
    ?- LP = not(true(LP)).
    LP = not(true(LP)).
    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.

    Failing an occurs check seems to mean that the resolution of the
    expression remains stuck in an infinite loop. Just as Prolog
    determines that there is a cycle in the directed graph of the
    evaluation sequence of LP, the simple English dialogue above shows
    that the Liar Paradox never gets to the point. It has merely been
    semantically unsound all these years.

    All of that is irrelevant to your lie about where the proof is.
    That you chose to lie about it and then to distract is a strong
    indication that you have no proof but prefer to lie otherwise.


    Formalized in Olcott's Minimal Type Theory
    LP := ~True(LP)    // LP {is defined as} ~True(LP)
    that expands to ~True(~True(~True(~True(~True(~True(...))))))
    https://philarchive.org/archive/PETMTT-4v2

    The Liar Paradox fails because it specifies
    infinite recursion.

    Irrelevant to your lie about where the proof is. That you chose to lie
    about it and then to distract is a strong indication that you have no
    proof but prefer to lie otherwise.

    In other words you are simply ignoring this ~True(~True(~True(~True(~True(~True(...))))))

    Your new lie is irrelevant to your earlier lie about where the proof is.
    --
    Mikko
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Dec 1 14:45:27 2025
    From Newsgroup: comp.ai.philosophy

    On 26/11/2025 20:07, olcott wrote:
    On 11/26/2025 1:46 PM, Tristan Wibberley wrote:
    On 26/11/2025 15:27, olcott wrote:
    This is formalized in the Prolog programming language
    ?- LP = not(true(LP)).
    LP = not(true(LP)).
    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.

    Why do you keep posting that?


    Until it is accepted or someone correctly shows
    how it does not once-and-for-all resolve
    the Liar Paradox I will keep posting it.

    Can you show how it /does/ ?
    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Dec 1 09:18:03 2025
    From Newsgroup: comp.ai.philosophy

    On 12/1/2025 8:45 AM, Tristan Wibberley wrote:
    On 26/11/2025 20:07, olcott wrote:
    On 11/26/2025 1:46 PM, Tristan Wibberley wrote:
    On 26/11/2025 15:27, olcott wrote:
    This is formalized in the Prolog programming language
    ?- LP = not(true(LP)).
    LP = not(true(LP)).
    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.

    Why do you keep posting that?


    Until it is accepted or someone correctly shows
    how it does not once-and-for-all resolve
    the Liar Paradox I will keep posting it.

    Can you show how it /does/ ?


    Been there done that many times no one cares.

    BEGIN:(Clocksin & Mellish 2003:254)
    Finally, a note about how Prolog matching sometimes differs from the
    unification used in Resolution. Most Prolog systems will allow you
    to satisfy goals like:

    equal(X, X).
    ?- equal(foo(Y), Y).

    that is, they will allow you to match a term against an
    uninstantiated subterm of itself. In this example, foo(Y) is matched
    against Y, which appears within it. As a result, Y will stand for
    foo(Y), which is foo(foo(Y)) (because of what Y stands for), which
    is foo(foo(foo(Y))), and so on. So Y ends up standing for some kind
    of infinite structure.
    END:(Clocksin & Mellish 2003:254)

    LP = not(true(LP)).
    not(true(not(true(not(true(not(true(LP)))))))).
    expands to not(true(not(true(not(true(not(true(...)))))))).
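    The unbounded expansion that Clocksin & Mellish describe can be
    sketched as a few lines of Python. This is an illustration only: the
    string-rewriting representation of LP = not(true(LP)) is a
    hypothetical stand-in for Prolog's internal term structure, chosen
    to show that each substitution step reintroduces LP and so the
    expansion never terminates.

    ```python
    def expand(n):
        """Substitute LP = not(true(LP)) into itself n times."""
        term = "LP"
        for _ in range(n):
            # each step replaces LP with its own definition,
            # reintroducing LP and guaranteeing another step is possible
            term = term.replace("LP", "not(true(LP))")
        return term

    print(expand(1))  # not(true(LP))
    print(expand(3))  # not(true(not(true(not(true(LP))))))
    ```

    Since LP survives every substitution step, no finite number of
    expansions eliminates it; this is the infinite structure the occurs
    check exists to reject.
    
    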
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.

    This required establishing a new foundation
    for correct reasoning.
    --- Synchronet 3.21a-Linux NewsLink 1.2