• Re: People that have a very shallow understanding of these things

    From olcott@polcott333@gmail.com to sci.logic,comp.theory,sci.math,comp.ai.philosophy on Sat Nov 15 23:13:11 2025
    From Newsgroup: comp.ai.philosophy

    On 11/15/2025 10:09 PM, wij wrote:
    On Sat, 2025-11-15 at 21:01 -0600, olcott wrote:
    On 11/15/2025 8:55 PM, olcott wrote:
    On 11/15/2025 8:48 PM, Kaz Kylheku wrote:
    On 2025-11-16, olcott <polcott333@gmail.com> wrote:
    HHH cannot possibly report on the behavior
    of its caller because HHH has no way of
    knowing what function is calling it.

    This means that when the halting problem
    requires HHH to report on the behavior of
    its caller DD(), it is requiring
    something outside the scope of computation.

    That's dumber than the Witch scene in Monty Python and The Holy Grail.

    Yes and now if you could just translate that
    mere baseless rhetoric into actual reasoning
    with a sound basis.

    Not to denigrate you but I think that this
    would be totally out of your depth as it
    would be for most everyone.

    The information that HHH is required to report
    on simply is not contained in its input.

    The information that HHH is required to report
    on simply is not contained in its input.

    The information that HHH is required to report
    on simply is not contained in its input.


    People that have a very shallow understanding of these
    things would say that is what undecidable means.

    It is you who don't even understand the logical 'if',
    It is ridiculously stupid of you to say that
    I do not understand logical if.

    Google [Olcott's Minimal Type Theory]
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to sci.logic,comp.theory,sci.math,comp.ai.philosophy on Sun Nov 16 19:56:05 2025
    From Newsgroup: comp.ai.philosophy

    On 16/11/2025 05:13, olcott wrote:
    On 11/15/2025 10:09 PM, wij wrote:
    It is you who don't even understand the logical 'if',
    It is ridiculously stupid of you to say that
    I do not understand logical if.


    To what does the phrase "logical if" refer?


    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to sci.logic,comp.theory,sci.math,comp.ai.philosophy on Sun Nov 16 19:02:37 2025
    From Newsgroup: comp.ai.philosophy

    On 11/16/2025 1:56 PM, Tristan Wibberley wrote:
    On 16/11/2025 05:13, olcott wrote:
    On 11/15/2025 10:09 PM, wij wrote:
    It is you who don't even understand the logical 'if',
    It is ridiculously stupid of you to say that
    I do not understand logical if.


    To what does the phrase "logical if" refer?



    p → q is the classic if p then q
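    For reference, the connective under discussion here is classical material implication, which is false only when p is true and q is false. A minimal, self-contained sketch of it as a pure function on truth values (an illustration only, not code from the thread):

    ```c
    #include <stdbool.h>

    /* Material implication: p -> q is false exactly when p is true and
       q is false, i.e. it is truth-functionally equivalent to (!p || q). */
    bool implies(bool p, bool q)
    {
        return !p || q;
    }
    ```

    The often-surprising rows of its truth table are the ones with a false antecedent: implies(false, q) is true for either value of q.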

    --
    Tristan Wibberley


    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to sci.logic,comp.theory,sci.math,comp.ai.philosophy on Mon Nov 17 02:21:30 2025
    From Newsgroup: comp.ai.philosophy

    On 17/11/2025 01:02, olcott wrote:
    On 11/16/2025 1:56 PM, Tristan Wibberley wrote:
    On 16/11/2025 05:13, olcott wrote:
    On 11/15/2025 10:09 PM, wij wrote:
    It is you who don't even understand the logical 'if',
    It is ridiculously stupid of you to say that
    I do not understand logical if.


    To what does the phrase "logical if" refer?



    p → q is the classic if p then q

    No. It's "there exists a construction that converts a proof of p to a
    proof of q". That's not logical if. "if" is just a thing teachers say to
    mess you up as an alternative to teaching.
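    The constructive reading described above, where a proof of p → q is a construction converting any proof of p into a proof of q, can be sketched in C. The EvenWitness type and even_plus_two function below are hypothetical names invented for illustration, not anything from the thread:

    ```c
    /* BHK-style sketch: a "proof" that n is even is a witness k with
       n == 2 * k, carried explicitly alongside n. */
    typedef struct {
        int n;
        int k;   /* invariant: n == 2 * k */
    } EvenWitness;

    /* The implication "n is even -> n + 2 is even", read constructively,
       is a function transforming any witness for n into one for n + 2. */
    EvenWitness even_plus_two(EvenWitness w)
    {
        EvenWitness r = { w.n + 2, w.k + 1 };   /* n + 2 == 2 * (k + 1) */
        return r;
    }
    ```

    Classical material implication only records a truth value; this version must actually produce the new witness, which is the difference being pointed at.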

    --
    Tristan Wibberley


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to sci.logic,comp.theory,sci.math,comp.ai.philosophy on Sun Nov 16 18:35:22 2025
    From Newsgroup: comp.ai.philosophy

    On 11/16/2025 5:02 PM, olcott wrote:
    On 11/16/2025 1:56 PM, Tristan Wibberley wrote:
    On 16/11/2025 05:13, olcott wrote:
    On 11/15/2025 10:09 PM, wij wrote:
    It is you who don't even understand the logical 'if',
    It is ridiculously stupid of you to say that
    I do not understand logical if.


    To what does the phrase "logical if" refer?



    p → q is the classic if p then q

    --
    Tristan Wibberley


    Cats are animals, fine. We can see that you are a moron?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to sci.logic,comp.theory,sci.math,comp.ai.philosophy on Sun Nov 16 21:47:58 2025
    From Newsgroup: comp.ai.philosophy

    On 11/16/2025 8:21 PM, Tristan Wibberley wrote:
    On 17/11/2025 01:02, olcott wrote:
    On 11/16/2025 1:56 PM, Tristan Wibberley wrote:
    On 16/11/2025 05:13, olcott wrote:
    On 11/15/2025 10:09 PM, wij wrote:
    It is you who don't even understand the logical 'if',
    It is ridiculously stupid of you to say that
    I do not understand logical if.


    To what does the phrase "logical if" refer?



    p → q is the classic if p then q

    No.

    YES. That is as a matter of basic fact the classic
    logical if of first order logic.

    There are thousands of other ways that "logical if"
    could be interpreted.

    It's "there exists a construction that converts a proof of p to a
    proof of q". That's not logical if. "if" is just a thing teachers say to
    mess you up as an alternative to teaching.

    --
    Tristan Wibberley


    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to sci.logic,comp.theory,sci.math,comp.ai.philosophy on Sun Nov 16 21:49:02 2025
    From Newsgroup: comp.ai.philosophy

    On 11/16/2025 8:21 PM, Tristan Wibberley wrote:
    On 17/11/2025 01:02, olcott wrote:
    On 11/16/2025 1:56 PM, Tristan Wibberley wrote:
    On 16/11/2025 05:13, olcott wrote:
    On 11/15/2025 10:09 PM, wij wrote:
    It is you who don't even understand the logical 'if',
    It is ridiculously stupid of you to say that
    I do not understand logical if.


    To what does the phrase "logical if" refer?



    p → q is the classic if p then q

    No.

    Yes: https://math.hawaii.edu/~ramsey/Logic/IfThen.html

    It's "there exists a construction that converts a proof of p to a
    proof of q". That's not logical if. "if" is just a thing teachers say to
    mess you up as an alternative to teaching.

    --
    Tristan Wibberley


    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Mon Nov 17 12:14:31 2025
    From Newsgroup: comp.ai.philosophy

    On 11/17/2025 12:06 PM, Kaz Kylheku wrote:
    On 2025-11-17, olcott <polcott333@gmail.com> wrote:
    Yes and now if you could just translate that
    mere baseless rhetoric into actual reasoning
    with a sound basis.

    Remember, you reject actual reasoning with a sound
    basis as learned-by-rote conventional wisdom,
    which is closed-minded.

    Pearls before the swine and all that.


    When it is repeatedly ignored that conventional
    wisdom does not begin with a sound basis and it
    is utterly insisted that this basis never be
    examined then we have

    "reckless disregard for the truth".
    "reckless disregard for the truth".
    "reckless disregard for the truth".
    "reckless disregard for the truth".
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Nov 17 12:22:03 2025
    From Newsgroup: comp.ai.philosophy

    On 11/17/2025 12:16 PM, Kaz Kylheku wrote:
    On 2025-11-17, Kaz Kylheku <643-408-1753@kylheku.com> wrote:
    On 2025-11-17, olcott <polcott333@gmail.com> wrote:
    Yes and now if you could just translate that
    mere baseless rhetoric into actual reasoning
    with a sound basis.

    Remember, you reject actual reasoning with a sound
    basis as learned-by-rote conventional wisdom,
    which is closed-minded.

    I mean, you've already rejected the actual with-a-sound-basis reasoning
    of Turing, Gödel, ...

    What is anyone here going to do where those two failed?

    What you are asking for is some other kind of unspecified mode
    of reasoning.

    Your measure for whether that reasoning is happening is the degree to
    which someone regurgitates your ideas without a shred of justification
    (i.e. exactly as they were received).

    Olcott Lights Out: 14 days ...


    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
        int Halt_Status = HHH(DD);
        if (Halt_Status)
            HERE: goto HERE;
        return Halt_Status;
    }

    int main()
    {
        HHH(DD);
    }

    HHH simulates DD that calls HHH(DD)
    that simulates DD that calls HHH(DD)...

    HHH1 simulates DD that calls HHH(DD)
    that returns to DD that returns to HHH1.
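    The contrast between those two traces can be modeled with a toy depth-limited sketch. This is hypothetical illustration only; the real HHH simulates x86 machine code, which this does not attempt:

    ```c
    /* Toy model of the trace above (illustrative only, not the real HHH):
       "simulation" is modeled as direct calls with a depth counter, and the
       simulating decider gives up past a fixed nesting limit. */
    enum { MAX_DEPTH = 3 };

    int sim_DD(int depth);   /* forward declaration */

    /* Returns 0 ("does not halt") once nested simulation exceeds the limit. */
    int sim_HHH(int depth)
    {
        if (depth >= MAX_DEPTH)
            return 0;                 /* abort: report non-halting */
        return sim_DD(depth + 1);     /* simulate DD one level deeper */
    }

    /* Mirrors DD(): if its decider reports "halts" (nonzero), it would loop;
       here the infinite loop is represented by returning -1 instead. */
    int sim_DD(int depth)
    {
        int halt_status = sim_HHH(depth);
        if (halt_status)
            return -1;                /* stands in for HERE: goto HERE */
        return halt_status;
    }
    ```

    In this toy model, every call chain starting at sim_HHH(0) bottoms out at the depth limit and reports 0, matching the first trace's unending "..." pattern being cut off by an abort.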

    Until you show the correct execution traces proving
    that DD simulated by HHH is the same as DD simulated
    by HHH1 you are still showing a

    "reckless disregard for the truth".
    "reckless disregard for the truth".
    "reckless disregard for the truth".
    "reckless disregard for the truth".
    "reckless disregard for the truth".

    I will keep calling you out on this every
    day, even if this is years past your last
    reply (unless I get published in the meantime).
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Mon Nov 17 13:11:09 2025
    From Newsgroup: comp.ai.philosophy

    On 11/17/2025 1:02 PM, Tristan Wibberley wrote:
    On 17/11/2025 18:16, Kaz Kylheku wrote:
    On 2025-11-17, Kaz Kylheku <643-408-1753@kylheku.com> wrote:
    On 2025-11-17, olcott <polcott333@gmail.com> wrote:
    Yes and now if you could just translate that
    mere baseless rhetoric into actual reasoning
    with a sound basis.

    Remember, you reject actual reasoning with a sound
    basis as learned-by-rote conventional wisdom,
    which is closed-minded.

    I mean, you've already rejected the actual with-a-sound-basis reasoning
    of Turing, Gödel, ...


    What is the reasoning that concludes that the reasonings of Turing and
    of Goedel have a sound basis and does /that/ reasoning also have a sound basis?


    This is exactly the kind of deep insight that
    they totally lack. What they go by is: textbooks
    say it and textbooks are inherently infallible.

    They might not even understand what the term:
    "sound basis" actually means.
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Nov 17 13:42:26 2025
    From Newsgroup: comp.ai.philosophy

    On 11/17/2025 1:02 PM, Tristan Wibberley wrote:
    On 17/11/2025 18:16, Kaz Kylheku wrote:
    On 2025-11-17, Kaz Kylheku <643-408-1753@kylheku.com> wrote:
    On 2025-11-17, olcott <polcott333@gmail.com> wrote:
    Yes and now if you could just translate that
    mere baseless rhetoric into actual reasoning
    with a sound basis.

    Remember, you reject actual reasoning with a sound
    basis as learned-by-rote conventional wisdom,
    which is closed-minded.

    I mean, you've already rejected the actual with-a-sound-basis reasoning
    of Turing, Gödel, ...


    What is the reasoning that concludes that the reasonings of Turing and
    of Goedel have a sound basis and does /that/ reasoning also have a sound basis?

    Or is it merely an axiom that their reasoning has a sound basis, like Olcott's definitional proposition turned out to be?


    All truth that is anchored in semantic meaning
    has stipulated definitions as its ultimate basis.

    The behavior of C programs has only the
    stipulated definition of the C programming
    language as its ultimate basis.

    All Turing machine deciders only compute the
    mapping from their finite string inputs to an
    accept state or reject state on the basis that
    this input finite string specifies a semantic
    or syntactic property.

    The very difficult part of my proof is that the
    input to HHH(DD) does not specify the behavior
    of DD executed from main when the measure of
    the behavior of this input is DD simulated by HHH.

    *Adapted from the bottom of page 319 of* https://www.liarparadox.org/Peter_Linz_HP_317-320.pdf

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞, // accept state
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn // reject state

    *Keep repeating unless aborted*
    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩

    Likewise the input to Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ as measured
    by ⟨Ĥ⟩ ⟨Ĥ⟩ simulated by Ĥ.embedded_H is not the behavior
    of Ĥ applied to ⟨Ĥ⟩.

    This problem never ends and the beat goes on, and the beat goes on...


    It ends as soon as enough people with the capacity
    to understand attain complete understanding.

    --
    Tristan Wibberley


    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Mon Nov 17 22:47:25 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-17, olcott <polcott333@gmail.com> wrote:
    On 11/17/2025 12:06 PM, Kaz Kylheku wrote:
    On 2025-11-17, olcott <polcott333@gmail.com> wrote:
    Yes and now if you could just translate that
    mere baseless rhetoric into actual reasoning
    with a sound basis.

    Remember, you reject actual reasoning with a sound
    basis as learned-by-rote conventional wisdom,
    which is closed-minded.

    Pearls before the swine and all that.


    When it is repeatedly ignored that conventional
    wisdom does not begin with a sound basis and it
    is utterly insisted that this basis never be
    examined then we have

    "reckless disregard for the truth".

    You are saying that Turing (et al) harbored reckless disregard for the
    truth, and in a way that generations after him were not able to see.

    Yet you do.

    Ergo, you are smarter than Turing, and generations of math and CS
    academics after him.

    So, what do you want from ordinary guys in a newsgroup?
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Nov 17 22:49:01 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-17, olcott <polcott333@gmail.com> wrote:
    Until you show the correct execution traces proving
    that DD simulated by HHH is the same as DD simulated
    by HHH1 you are still showing a
    "reckless disregard for the truth".

    I have shown execution traces, but you would never admit that
    any execution trace which disagrees with what you are saying is
    "correct".

    The definition of "correct" is "agrees with what you are saying".
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Nov 17 19:52:05 2025
    From Newsgroup: comp.ai.philosophy

    On 11/17/2025 5:49 PM, Kaz Kylheku wrote:
    On 2025-11-17, olcott <polcott333@gmail.com> wrote:
    On 11/17/2025 4:24 PM, Tristan Wibberley wrote:
    On 17/11/2025 21:28, Kaz Kylheku wrote:
    Turing didn't just say, "believe me when I say that halting is
    undecidable, because I have given it years of thought, and
    cannot see it any other way --- and I am smarter than all of you".


    Olcott /has/ been told to stop enquiring on that basis, which is when
    I interjected about doctrine.


    Make sure that you do not examine the foundational
    assumptions of computation, because almost no one
    here even knows what the term [foundational assumptions]
    means and this is too embarrassing for them.

    You do, of course! Why the foundational assumptions of computation
    are things like EAX and EIP registers of the 32 bit x86 processor.

    You've noted in the past that x86 is over the heads of almost
    everyone in CS academia.

    That's also why they are wrong about halting.

    They just can't follow simple machine language which knocks
    it all down.


    *The nuances of this one are the ones being ignored*
    *The nuances of this one are the ones being ignored*
    *The nuances of this one are the ones being ignored*

    Turing machine deciders only compute a mapping from
    their [finite string] inputs to an accept or reject
    state on the basis that this [finite string] input
    specifies or fails to specify a semantic or syntactic
    property.
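    As an uncontroversial illustration of that claim, here is a trivial decider mapping a finite-string input to accept or reject according to a purely syntactic property. This is a made-up example, unrelated to halting:

    ```c
    #include <stdbool.h>

    /* A trivial decider: computes a mapping from its finite-string input
       to accept (true) or reject (false) based on a syntactic property of
       the string, namely whether its parentheses are balanced. */
    bool balanced_parens(const char *s)
    {
        int depth = 0;
        for (; *s; ++s) {
            if (*s == '(')
                depth++;
            else if (*s == ')' && --depth < 0)
                return false;   /* a ')' with no matching '(' */
        }
        return depth == 0;      /* accept iff every '(' was closed */
    }
    ```

    The decider never needs anything beyond the finite string it is given; the disputed question in the thread is whether the halting property of DD is likewise fully specified by the input to HHH.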

    *Adapted from the bottom of page 319 of* https://www.liarparadox.org/Peter_Linz_HP_317-320.pdf

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞, // accept state
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn // reject state

    *Keep repeating unless aborted*
    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to sci.logic,comp.theory,sci.math,comp.ai.philosophy on Mon Nov 17 04:01:49 2025
    From Newsgroup: comp.ai.philosophy

    On 17/11/2025 03:49, olcott wrote:
    On 11/16/2025 8:21 PM, Tristan Wibberley wrote:
    On 17/11/2025 01:02, olcott wrote:
    On 11/16/2025 1:56 PM, Tristan Wibberley wrote:
    On 16/11/2025 05:13, olcott wrote:
    On 11/15/2025 10:09 PM, wij wrote:
    It is you who don't even understand the logical 'if',
    It is ridiculously stupid of you to say that
    I do not understand logical if.


    To what does the phrase "logical if" refer?



    p → q is the classic if p then q

    No.

    Yes: https://math.hawaii.edu/~ramsey/Logic/IfThen.html


    Natural If ... Then is closer to an LJ sequent => in intuitionistic
    formalisms, and that is definitely different from → in the face of,
    for example, axiom extensions of a system.

    If ... Then as described at the top of the resource you referenced is
    real-world cause and effect, and that is yet another different thing.
    There are many basic demonstrations of how that is not →.


    --
    Tristan Wibberley


    --- Synchronet 3.21a-Linux NewsLink 1.2