• People that have a very shallow understanding of these things --- AKAKaz

    From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Mon Nov 17 09:47:23 2025
    From Newsgroup: comp.ai.philosophy

    On 11/15/2025 8:48 PM, Kaz Kylheku wrote:
    On 2025-11-16, olcott <polcott333@gmail.com> wrote:
    HHH cannot possibly report on the behavior
    of its caller because HHH has no way of
    knowing what function is calling it.

    This means that when the halting problem
    requires HHH to report on the behavior of
    its caller, DD(), it is requiring
    something outside the scope of computation.

    That's dumber than the Witch scene in Monty Python and The Holy Grail.


    *I will be utterly relentless about this*
    *I will be utterly relentless about this*
    *I will be utterly relentless about this*

    Yes and now if you could just translate that
    mere baseless rhetoric into actual reasoning
    with a sound basis.

    Not to denigrate you but I think that this
    would be totally out of your depth as it
    would be for most everyone.

    The information that HHH is required to report
    on simply is not contained in its input.

    The information that HHH is required to report
    on simply is not contained in its input.

    The information that HHH is required to report
    on simply is not contained in its input.

    The information that HHH is required to report
    on simply is not contained in its input.

    The information that HHH is required to report
    on simply is not contained in its input.
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Alan Mackenzie@acm@muc.de to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Mon Nov 17 16:32:51 2025
    From Newsgroup: comp.ai.philosophy

    [ Followup-To: set ]

    In comp.theory olcott <polcott333@gmail.com> wrote:
    On 11/15/2025 8:48 PM, Kaz Kylheku wrote:
    On 2025-11-16, olcott <polcott333@gmail.com> wrote:
    HHH cannot possibly report on the behavior
    of its caller because HHH has no way of
    knowing what function is calling it.

    This means that when the halting problem
    requires HHH to report on the behavior of
    its caller, DD(), it is requiring
    something outside the scope of computation.

    That's dumber than the Witch scene in Monty Python and The Holy Grail.

    *I will be utterly relentless about this*
    *I will be utterly relentless about this*
    *I will be utterly relentless about this*

    Utterly wilful and stupid would be more like it.

    Yes and now if you could just translate that
    mere baseless rhetoric into actual reasoning
    with a sound basis.

    Not to denigrate you but I think that this
    would be totally out of your depth as it
    would be for most everyone.

    Your thinking is out of kilter with reality.

    The information that HHH is required to report
    on simply is not contained in its input.

    Wrong. It is.

    The information that HHH is required to report
    on simply is not contained in its input.

    Wrong. It is.

    The information that HHH is required to report
    on simply is not contained in its input.

    Wrong. It is.

    The information that HHH is required to report
    on simply is not contained in its input.

    Wrong. It is.

    The information that HHH is required to report
    on simply is not contained in its input.

    Wrong. It is.

    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --
    Alan Mackenzie (Nuremberg, Germany).

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Mon Nov 17 10:48:55 2025
    From Newsgroup: comp.ai.philosophy

    On 11/17/2025 10:32 AM, Alan Mackenzie wrote:
    [ Followup-To: set ]

    In comp.theory olcott <polcott333@gmail.com> wrote:
    On 11/15/2025 8:48 PM, Kaz Kylheku wrote:
    On 2025-11-16, olcott <polcott333@gmail.com> wrote:
    HHH cannot possibly report on the behavior
    of its caller because HHH has no way of
    knowing what function is calling it.

    This means that when the halting problem
    requires HHH to report on the behavior of
    its caller, DD(), it is requiring
    something outside the scope of computation.

    That's dumber than the Witch scene in Monty Python and The Holy Grail.

    *I will be utterly relentless about this*
    *I will be utterly relentless about this*
    *I will be utterly relentless about this*

    Utterly wilful and stupid would be more like it.

    Yes and now if you could just translate that
    mere baseless rhetoric into actual reasoning
    with a sound basis.

    Not to denigrate you but I think that this
    would be totally out of your depth as it
    would be for most everyone.

    Your thinking is out of kilter with reality.


    Yet you cannot show that on the basis of reasoning,
    so you try to dishonestly get away with mere
    baseless rhetoric.

    The information that HHH is required to report
    on simply is not contained in its input.

    Wrong. It is.


    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
        int Halt_Status = HHH(DD);
        if (Halt_Status)
            HERE: goto HERE;
        return Halt_Status;
    }

    int main()
    {
        HHH(DD);
    }

    If it was then DD simulated by HHH would derive
    the same sequence of steps as DD simulated by HHH1.

    HHH1 is identical to HHH except that DD does not
    call HHH1 at all and DD calls HHH(DD) in recursive
    simulation. That is the complete reason for the
    different behavior.

    That you flat out lie about this can be construed
    as the "reckless disregard for the truth" that
    loses libel cases. It cannot be construed as any
    rebuttal of this self-evident truth.

    In epistemology (theory of knowledge), a self-evident
    proposition is a proposition that is known to be true
    by understanding its meaning without proof... https://en.wikipedia.org/wiki/Self-evidence

    The information that HHH is required to report
    on simply is not contained in its input.

    Wrong. It is.

    The information that HHH is required to report
    on simply is not contained in its input.

    Wrong. It is.

    The information that HHH is required to report
    on simply is not contained in its input.

    Wrong. It is.

    The information that HHH is required to report
    on simply is not contained in its input.

    Wrong. It is.

    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Mon Nov 17 11:41:18 2025
    From Newsgroup: comp.ai.philosophy

    On 11/15/2025 8:48 PM, Kaz Kylheku wrote:
    On 2025-11-16, olcott <polcott333@gmail.com> wrote:
    HHH cannot possibly report on the behavior
    of its caller because HHH has no way of
    knowing what function is calling it.

    This means that when the halting problem
    requires HHH to report on the behavior of
    its caller, DD(), it is requiring
    something outside the scope of computation.

    That's dumber than the Witch scene in Monty Python and The Holy Grail.


    *I will be utterly relentless about this*
    *I will be utterly relentless about this*
    *I will be utterly relentless about this*
    *I will be utterly relentless about this*

    Yes and now if you could just translate that
    mere baseless rhetoric into actual reasoning
    with a sound basis.

    Not to denigrate you but I think that this
    would be totally out of your depth as it
    would be for most everyone.

    The information that HHH is required to report
    on simply is not contained in its input.

    The information that HHH is required to report
    on simply is not contained in its input.

    The information that HHH is required to report
    on simply is not contained in its input.

    The information that HHH is required to report
    on simply is not contained in its input.

    The information that HHH is required to report
    on simply is not contained in its input.
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Alan Mackenzie@acm@muc.de to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Mon Nov 17 17:42:55 2025
    From Newsgroup: comp.ai.philosophy

    [ Followup-To: set ]

    In comp.theory olcott <polcott333@gmail.com> wrote:
    On 11/17/2025 10:32 AM, Alan Mackenzie wrote:
    [ Followup-To: set ]

    In comp.theory olcott <polcott333@gmail.com> wrote:
    On 11/15/2025 8:48 PM, Kaz Kylheku wrote:
    On 2025-11-16, olcott <polcott333@gmail.com> wrote:
    HHH cannot possibly report on the behavior
    of its caller because HHH has no way of
    knowing what function is calling it.

    This means that when the halting problem
    requires HHH to report on the behavior of
    its caller, DD(), it is requiring
    something outside the scope of computation.

    That's dumber than the Witch scene in Monty Python and The Holy Grail.

    *I will be utterly relentless about this*
    *I will be utterly relentless about this*
    *I will be utterly relentless about this*

    Utterly wilful and stupid would be more like it.

    Yes and now if you could just translate that
    mere baseless rhetoric into actual reasoning
    with a sound basis.

    Not to denigrate you but I think that this
    would be totally out of your depth as it
    would be for most everyone.

    Your thinking is out of kilter with reality.


    Yet you cannot show that on the basis of reasoning,
    so you try to dishonestly get away with mere
    baseless rhetoric.

    I can, have done, as have many other posters here, and you just ignore reasoning.

    The information that HHH is required to report
    on simply is not contained in its input.

    Wrong. It is.

    [ .... ]

    If it was then DD simulated by HHH would derive
    the same sequence of steps as DD simulated by HHH1.

    There's nothing wrong with the input, as many others have explained to
    you many times. It is HHH and HHH1 which are defective. They are the
    same function and return different results? Haha!

    [ .... ]

    That you flat out lie about this can be construed
    as the "reckless disregard for the truth" that
    loses libel cases. It cannot be construed as any
    rebuttal of this self-evident truth.

    I never lie on Usenet, as I have said before several times. I care
    deeply about the truth and truthfulness. You lie continually in several
    ways (one of them, right here, is your construction of a falsehood as a "self-evident truth"). If you think I have committed libel, you are
    welcome to sue me in a German court. You would lose disastrously.

    [ .... ]

    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --
    Alan Mackenzie (Nuremberg, Germany).

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Nov 17 11:48:47 2025
    From Newsgroup: comp.ai.philosophy

    On 11/17/2025 11:42 AM, Alan Mackenzie wrote:
    [ Followup-To: set ]

    In comp.theory olcott <polcott333@gmail.com> wrote:
    On 11/17/2025 10:32 AM, Alan Mackenzie wrote:
    [ Followup-To: set ]

    In comp.theory olcott <polcott333@gmail.com> wrote:
    On 11/15/2025 8:48 PM, Kaz Kylheku wrote:
    On 2025-11-16, olcott <polcott333@gmail.com> wrote:
    HHH cannot possibly report on the behavior
    of its caller because HHH has no way of
    knowing what function is calling it.

    This means that when the halting problem
    requires HHH to report on the behavior of
    its caller, DD(), it is requiring
    something outside the scope of computation.

    That's dumber than the Witch scene in Monty Python and The Holy Grail.

    *I will be utterly relentless about this*
    *I will be utterly relentless about this*
    *I will be utterly relentless about this*

    Utterly wilful and stupid would be more like it.

    Yes and now if you could just translate that
    mere baseless rhetoric into actual reasoning
    with a sound basis.

    Not to denigrate you but I think that this
    would be totally out of your depth as it
    would be for most everyone.

    Your thinking is out of kilter with reality.


    Yet you cannot show that on the basis of reasoning,
    so you try to dishonestly get away with mere
    baseless rhetoric.

    I can, have done, as have many other posters here, and you just ignore reasoning.

    The information that HHH is required to report
    on simply is not contained in its input.

    Wrong. It is.

    [ .... ]

    If it was then DD simulated by HHH would derive
    the same sequence of steps as DD simulated by HHH1.

    There's nothing wrong with the input, as many others have explained to
    you many times. It is HHH and HHH1 which are defective. They are the
    same function and return different results? Haha!



    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
        int Halt_Status = HHH(DD);
        if (Halt_Status)
            HERE: goto HERE;
        return Halt_Status;
    }

    int main()
    {
        HHH(DD);
    }

    HHH simulates DD that calls HHH(DD)
    that simulates DD that calls HHH(DD)...

    HHH1 simulates DD that calls HHH(DD)
    that returns to DD that returns to HHH1.

    Until you show the correct execution traces proving
    that DD simulated by HHH is the same as DD simulated
    by HHH1 you are still showing a

    "reckless disregard for the truth".
    "reckless disregard for the truth".
    "reckless disregard for the truth".
    "reckless disregard for the truth".
    "reckless disregard for the truth".
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.theory,comp.ai.philosophy on Tue Nov 18 22:46:25 2025
    From Newsgroup: comp.ai.philosophy

    Hi

    Acyclic Ocelot. Who was your logic teacher, and
    how were you taught logic (in case you were
    taught logic at all)?

    How would you do it differently now? How
    would you teach somebody logic? What is your
    logic teaching "theory"? Is it like

    learning to ride a bike? You need some helping
    hand that stabilizes your thought? Or can you
    just start and then land on your face,

    flat on the asphalt?

    Bye


    olcott schrieb:
    On 11/15/2025 8:48 PM, Kaz Kylheku wrote:
    *I will be utterly relentless about this*
    *I will be utterly relentless about this*
    *I will be utterly relentless about this*
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy,comp.lang.prolog on Tue Nov 18 16:02:06 2025
    From Newsgroup: comp.ai.philosophy

    On 11/18/2025 3:46 PM, Mild Shock wrote:
    <big snip>

    *I remember you in the Prolog Group*
    *I remember you in the Prolog Group*
    *I remember you in the Prolog Group*

    The Liar Paradox formalized in the Prolog Programming language

    This sentence is not true.
    It is not true about what?
    It is not true about being not true.
    It is not true about being not true about what?
    It is not true about being not true about being not true.
    Oh I see you are stuck in a loop!


    This is formalized in the Prolog programming language
    ?- LP = not(true(LP)).
    LP = not(true(LP)).
    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.

    Failing an occurs check seems to mean that the
    resolution of an expression remains stuck in
    an infinite loop. Just as the formalized Prolog
    determines that there is a cycle in the directed
    graph of the evaluation sequence of LP, the simple
    English shows that the Liar Paradox never gets
    to the point. It has merely been semantically
    unsound all these years.
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.theory,comp.ai.philosophy,comp.lang.prolog on Tue Nov 18 23:15:06 2025
    From Newsgroup: comp.ai.philosophy

    Hi,

    So you say I was your logic teacher? I doubt
    it. Who was your logic teacher, from the cradle
    to the appearance of the internet, when

    you still had to carry heavy paper books while
    visiting the lake front in summer, looking for
    a shady tree, and then enjoying some logic?

    What books did you read? What people did you know?

    Bye

    olcott schrieb:
    On 11/18/2025 3:46 PM, Mild Shock wrote:
    <big snip>

    *I remember you in the Prolog Group*
    *I remember you in the Prolog Group*
    *I remember you in the Prolog Group*

    The Liar Paradox formalized in the Prolog Programming language

    This sentence is not true.
    It is not true about what?
    It is not true about being not true.
    It is not true about being not true about what?
    It is not true about being not true about being not true.
    Oh I see you are stuck in a loop!


    This is formalized in the Prolog programming language
    ?- LP = not(true(LP)).
    LP = not(true(LP)).
    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.

    Failing an occurs check seems to mean that the
    resolution of an expression remains stuck in
    an infinite loop. Just as the formalized Prolog
    determines that there is a cycle in the directed
    graph of the evaluation sequence of LP the simple
    English proves that the Liar Paradox never gets
    to the point. It has merely been semantically
    unsound all these years.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy,comp.lang.prolog on Tue Nov 18 16:54:18 2025
    From Newsgroup: comp.ai.philosophy

    On 11/18/2025 4:15 PM, Mild Shock wrote:
    Hi,

    So you say I was your logic teacher? I doubt
    it. Who was your logic teacher, from the cradle
    to the appearance of the internet, when

    you still had to carry heavy paper books while
    visiting the lake front in summer, looking for
    a shady tree, and then enjoying some logic?

    What books did you read? What people did you know?

    Bye


    I learned FOL from Wikipedia.

    I know PhD computer science professor Eric Hehner
    through many email conversations.

    This was the only book that I read on logic. https://www.amazon.com/Formal-Semantics-Cambridge-Textbooks-Linguistics/dp/0521376106

    I have been a software engineer since 1984.

    I am the creator of Google[Olcott's Minimal Type Theory]

    olcott schrieb:
    On 11/18/2025 3:46 PM, Mild Shock wrote:
    <big snip>

    *I remember you in the Prolog Group*
    *I remember you in the Prolog Group*
    *I remember you in the Prolog Group*

    The Liar Paradox formalized in the Prolog Programming language

    This sentence is not true.
    It is not true about what?
    It is not true about being not true.
    It is not true about being not true about what?
    It is not true about being not true about being not true.
    Oh I see you are stuck in a loop!


    This is formalized in the Prolog programming language
    ?- LP = not(true(LP)).
    LP = not(true(LP)).
    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.

    Failing an occurs check seems to mean that the
    resolution of an expression remains stuck in
    an infinite loop. Just as the formalized Prolog
    determines that there is a cycle in the directed
    graph of the evaluation sequence of LP the simple
    English proves that the Liar Paradox never gets
    to the point. It has merely been semantically
    unsound all these years.


    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,comp.ai.philosophy,comp.lang.prolog on Tue Nov 18 15:15:30 2025
    From Newsgroup: comp.ai.philosophy

    On 11/18/2025 2:02 PM, olcott wrote:
    On 11/18/2025 3:46 PM, Mild Shock wrote:
    <big snip>

    *I remember you in the Prolog Group*
    *I remember you in the Prolog Group*
    *I remember you in the Prolog Group*

    The Liar Paradox formalized in the Prolog Programming language

    This sentence is not true.
    It is not true about what?
    It is not true about being not true.
    It is not true about being not true about what?
    It is not true about being not true about being not true.
    Oh I see you are stuck in a loop!
    [...]

    DD says, I can halt, or not halt... That is 100% true about DD. So, let's
    explore both paths, and fin the sim when they are _both_ hit. Actually,
    the following models your DD:
    ____________________________
    1 HOME
    5 PRINT "ct_dr_fuzz lol. ;^)"
    6 P0 = 0
    7 P1 = 0

    10 REM Fuzzer... ;^)
    20 A$ = "NOPE!"
    30 IF RND(1) < .5 THEN A$ = "YES"

    100 REM INPUT "Shall DD halt or not? " ; A$
    110 PRINT "Shall DD halt or not? " ; A$
    200 IF A$ = "YES" GOTO 666
    300 P0 = P0 + 1
    400 IF P0 > 0 AND P1 > 0 GOTO 1000
    500 GOTO 10

    666 PRINT "OK!"
    667 P1 = P1 + 1
    700 PRINT "NON_HALT P0 = "; P0
    710 PRINT "HALT P1 = "; P1
    720 IF P0 > 0 AND P1 > 0 GOTO 1000
    730 PRINT "ALL PATHS FAILED TO BE HIT!"
    740 GOTO 10


    1000 REM FIN
    1010 PRINT "FIN... All paths hit."
    1020 PRINT "NON_HALT P0 = "; P0
    1030 PRINT "HALT P1 = "; P1
    ____________________________

    Fair enough?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.theory,comp.ai.philosophy,comp.lang.prolog on Wed Nov 19 09:50:41 2025
    From Newsgroup: comp.ai.philosophy

    Hi,

    Wikipedia has only existed since 2001. How did
    people learn logic before the new millennium?
    It seems you were already alive before 2001,

    since you have been a software engineer since
    1984. No logic for Acyclic Ocelot before 2001?
    Did Wikipedia, a secondary reference, really

    bring logic to you? No primary sources of logic?

    Bye

    olcott schrieb:

    I Learned FOL from Wikipedia.
    I have been a software engineer since 1984.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.theory,comp.ai.philosophy,comp.lang.prolog on Wed Nov 19 10:16:31 2025
    From Newsgroup: comp.ai.philosophy

    Hi,

    How it started, DeepSeek:

    me: What are top ten books in set theory?
    ai: bla bla
    ai: Classic Set Theory: For Guided Independent Study by Derek C. Goldrei

    How it's going, ChatGPT:

    me: What are top ten books in set theory?
    ai: bla bla
    ai: The Incomparable Axioms — Koellner (more philosophical, modern)

    me: Nice try, I can't find "The Incomparable Axioms —
    Koellner", you hallucinated that

    ai: You’re right — I made a mistake. I hallucinated a
    book title. Sorry about that.

    ai: Peter Koellner has written influential papers and
    a thesis/lecture notes, but there is no book titled
    The Incomparable Axioms by Koellner that I can find.

    The Search for New Axioms https://dspace.mit.edu/bitstream/handle/1721.1/7989/53014647-MIT.pdf

    LoL

    Bye

    Mild Shock schrieb:
    Hi,

    Wikipedia has only existed since 2001. How did
    people learn logic before the new millennium?
    It seems you were already alive before 2001,

    since you have been a software engineer since
    1984. No logic for Acyclic Ocelot before 2001?
    Did Wikipedia, a secondary reference, really

    bring logic to you? No primary sources of logic?

    Bye

    olcott schrieb:

    I Learned FOL from Wikipedia.
    I have been a software engineer since 1984.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.theory,comp.ai.philosophy,comp.lang.prolog on Wed Nov 19 11:14:48 2025
    From Newsgroup: comp.ai.philosophy

    Hi,

    Is there a slim Fermat's Last Theorem (FLT),
    but only for Lean4? There is a new proof:

    We formalize a complete proof of the regular
    case of Fermat's Last Theorem in the Lean4
    theorem prover. Our formalization includes a
    proof of Kummer's lemma, that is the main
    obstruction to Fermat's Last Theorem for
    regular primes. Rather than following the
    modern proof of Kummer's lemma via class
    field theory, we prove it by using Hilbert's
    Theorems 90-94 in a way that is more
    amenable to formalization.
    https://arxiv.org/abs/2410.01466v3

    Is this also available for Rocq or Isabelle/HOL?
    In this I feel as with ChatGPT's invention of
    a set theory book:

    ai: The Incomparable Axioms — Koellner
    (more philosophical, modern)

    If we have two axiom systems A and B, it is
    often easy to invoke proof theory and then
    show A ⊆ B or B ⊆ A. Trouble might start if
    we want to show A ⊈ B and B ⊈ A,

    this traditionally fell into the category of
    model theory, but modern proof assistants might
    be better off, maybe. Such a proof could be
    a pebble game, as in EF games,

    or even something that goes beyond EF games.
    Now, for the question of whether Rocq or Isabelle/HOL
    also has such a proof, the comparability of axioms
    is somewhat aggravated when different proof

    systems have different foundations. What
    we then need to compare is F+A with G+B,
    where F and G are the varying foundations.
    Who said that logic is easy and beautiful?

    Bye

    Mild Shock schrieb:
    Hi,

    How it started, DeepSeek:

    me: What are top ten books in set theory?
    ai: bla bla
    ai: Classic Set Theory: For Guided Independent Study by Derek C. Goldrei

    How it's going, ChatGPT:

    me: What are top ten books in set theory?
    ai: bla bla
    ai: The Incomparable Axioms — Koellner (more philosophical, modern)

    me: Nice try, I can't find "The Incomparable Axioms —
    Koellner", you hallucinated that

    ai: You’re right — I made a mistake. I hallucinated a
    book title. Sorry about that.

    ai: Peter Koellner has written influential papers and
    a thesis/lecture notes, but there is no book titled
    The Incomparable Axioms by Koellner that I can find.

    The Search for New Axioms https://dspace.mit.edu/bitstream/handle/1721.1/7989/53014647-MIT.pdf

    LoL

    Bye

    Mild Shock schrieb:
    Hi,

    Wikipedia has only existed since 2001. How did
    people learn logic before the new millennium?
    It seems you were already alive before 2001,

    since you have been a software engineer since
    1984. No logic for Acyclic Ocelot before 2001?
    Did Wikipedia, a secondary reference, really

    bring logic to you? No primary sources of logic?

    Bye

    olcott schrieb:

    I Learned FOL from Wikipedia.
    I have been a software engineer since 1984.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 16:35:44 2025
    From Newsgroup: comp.ai.philosophy

    On 17/11/2025 22:58, Kaz Kylheku wrote:
    - continuing to use impure functions (e.g. mutating global
    execution trace buffer; distinguishing "Root == 1" H
    functions from "Root == 0").


    Woah there fella! The halting problem is about state machines, not pure functions.

    Which is not to say there aren't similarly devastating issues with the
    other two points.

    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 11:23:45 2025
    From Newsgroup: comp.ai.philosophy

    On 11/17/2025 4:58 PM, Kaz Kylheku wrote:
    On 2025-11-17, olcott <polcott333@gmail.com> wrote:
    On 11/15/2025 8:48 PM, Kaz Kylheku wrote:
    On 2025-11-16, olcott <polcott333@gmail.com> wrote:
    HHH cannot possibly report on the behavior
    of its caller because HHH has no way of
    knowing what function is calling it.

    This means that when the halting problem
    requires HHH to report on the behavior of
    its caller, DD(), it is requiring
    something outside the scope of computation.

    That's dumber than the Witch scene in Monty Python and The Holy Grail.


    *I will be utterly relentless about this*
    *I will be utterly relentless about this*
    *I will be utterly relentless about this*
    *I will be utterly relentless about this*

    Yes and now if you could just translate that
    mere baseless rhetoric into actual reasoning
    with a sound basis.

    Not to denigrate you but I think that this
    would be totally out of your depth as it
    would be for most everyone.

    I am certainly not smarter than Turing, but you think you are.

    I do not believe that HHH is required to report on the behavior
    of its caller. There is no such thing.


    Original Linz Turing Machine H applied to ⟨Ĥ⟩
    H.q0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* H.qy // accept state
    H.q0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* H.qn // reject state

    Even Linz requires Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ to report
    on the behavior of its caller.

    *From the bottom of page 319 has been adapted to this* https://www.liarparadox.org/Peter_Linz_HP_317-320.pdf

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞, // accept state
    if Ĥ applied to ⟨Ĥ⟩ halts, and

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn // reject state
    if Ĥ applied to ⟨Ĥ⟩ does not halt

    Ĥ.embedded_H cannot tell that it is not H

    Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn // HHH(DD)==0
    H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy // HHH1(DD)==1
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 17:48:26 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-19, Tristan Wibberley <tristan.wibberley+netnews2@alumni.manchester.ac.uk> wrote:
    On 17/11/2025 22:58, Kaz Kylheku wrote:
    - continuing to use impure functions (e.g. mutating global
    execution trace buffer; distinguishing "Root == 1" H
    functions from "Root == 0").

    Woah there fella! The halting problem is about state machines, not pure functions.

    Recursive functions and Turing machines are equivalent. The halting
    problem is about recursive functions too.

    In any case, topics in the halting problem cannot be properly explored
    using impure procedures --- not in such a way that we assume that those procedures directly correspond to recursive functions.

    We can't have a procedure H which has two behaviors based on whether it
    is the first invocation or subsequent, and talk about that procedure as
    a single recursive function H.

    We may be able to model that function H as another function H' that
    takes additional arguments representing all the external state that H
    depends on. It becomes H(P, Root) instead of H(P). Then it is obvious
    that H(D, True) and H(D, False) are different deciders. We have to
    explicitly pick which one of those two the diagonal case D invokes.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 17:57:49 2025
    From Newsgroup: comp.ai.philosophy

    On 19/11/2025 17:48, Kaz Kylheku wrote:

    Recursive functions and Turing machines are equivalent. The halting
    problem is about recursive functions too.

    There exists an equivalent ...


    In any case, topics in the halting problem cannot be properly explored
    using impure procedures --- not in such a way that we assume that those
    procedures directly correspond to recursive functions.

    No. The Halting Theorem has no problems demonstrable with leaky
    simulation (emulation) sandboxes.

    Topics can be explored with leaky sandboxes, topics such as "How can
    leaky sandboxes and their effects be characterised?" and "What are the
    relationships between various recursive functions and various Turing
    Machines and their generalisations?"


    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 12:39:51 2025
    From Newsgroup: comp.ai.philosophy

    On 11/19/2025 11:57 AM, Tristan Wibberley wrote:
    On 19/11/2025 17:48, Kaz Kylheku wrote:

    Recursive functions and Turing machines are equivalent. The halting
    problem is about recursive functions too.

    There exists an equivalent ...


    In any case, topics in the halting problem cannot be properly explored
    using impure procedures --- not in such a way that we assume that those
    procedures directly correspond to recursive functions.

    No. The Halting Theorem has no problems demonstrable with leaky
    simulation (emulation) sandboxes.

    Topics can be explored with leaky sandboxes, topics such as "How can
    leaky sandboxes and their effects be characterised?" and "What are the
    relationships between various recursive functions and various Turing
    Machines and their generalisations?"



    ChatGPT agrees yet in this case I cannot independently
    verify that it is correct.

    https://chatgpt.com/share/691e0ea1-423c-8011-b3ad-20e2371d9496

    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2