• Re: polcott agrees with the halting problem

    From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Mon Nov 17 18:31:50 2025
    From Newsgroup: comp.ai.philosophy

    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string
    describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Mon Nov 17 19:43:20 2025
    From Newsgroup: comp.ai.philosophy

    On 11/17/2025 7:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string
    describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.


    No, it's more like this:

    Compute the product of X and Y but only using the single step of X + Y.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Mon Nov 17 18:46:26 2025
    From Newsgroup: comp.ai.philosophy

    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string
    describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to compute whether the machine described halts

    the only difference between ur claim here and the proofs is the why
    --
    a burnt out swe investigating into why our tooling doesn't involve
    basic semantic proofs like halting analysis

    please excuse my pseudo-pyscript,

    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Tue Nov 18 03:07:06 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string
    describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to compute whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one
    unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than
    the string.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Mon Nov 17 19:10:25 2025
    From Newsgroup: comp.ai.philosophy

    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string
    describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to compute
    whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one
    unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than
    the string.

    yes i meant generally

    you also can't compute generally whether you can or cannot compute
    whether a an machine description halts or not
    --
    a burnt out swe investigating into why our tooling doesn't involve
    basic semantic proofs like halting analysis

    please excuse my pseudo-pyscript,

    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Mon Nov 17 19:36:09 2025
    From Newsgroup: comp.ai.philosophy

    On 11/17/2025 7:10 PM, dart200 wrote:
    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string
    describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to compute >>> whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one
    unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than
    the string.

    yes i meant generally

    you also can't compute generally whether you can or cannot compute
    whether a an machine description halts or not


    Didn't you suggest you have a solution to the halting problem using reflection?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Mon Nov 17 21:37:49 2025
    From Newsgroup: comp.ai.philosophy

    On 11/17/2025 8:46 PM, dart200 wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string
    describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:


    No I am directly showing the key error of the
    halting problem. It is very very difficult to
    to understand. I have been working on this since
    2004 and I just understood the error this year.

    one cannot take the string describing the machine, and use it to compute whether the machine described halts


    The input to HHH(DD) specifies a different sequence
    of steps than the input to HHH1(DD).

    HHH simulates DD that calls HHH(DD)
    that simulates DD that calls HHH(DD)...

    HHH1 simulates DD that calls HHH(DD)
    that returns to DD that returns to HHH1.

    The sound basis of this reasoning is the
    semantics of the C programming language.

    typedef int (*ptr)();
    int HHH(ptr P);
    int HHH1(ptr P);

    int DD()
    {
    int Halt_Status = HHH(DD);
    if (Halt_Status)
    HERE: goto HERE;
    return Halt_Status;
    }

    int main()
    {
    HHH(DD);
    }



    the only difference between ur claim here and the proofs is the why

    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Tue Nov 18 03:45:16 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string
    describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to compute >>> whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one
    unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than
    the string.

    yes i meant generally

    But polocott means something else. He keeps insisting (without any
    rational justification) that the conventional halting problem,
    when "H" is presented with the diagonal "D" case, is asking
    "H" to decide something which is not the finite string input.

    He believes that D literally calls the same instance of H in
    the same program image, which is the only way it can be,
    and thus D is H's caller. And thus H is being asked to decide
    about its caller. But the caller is not the parameter D, but
    an activated procedure. Therefore H is being asked to decide something
    about an activated procedure and not its finite string parameter.

    The "reasoning" if it can be called that, is completely
    disconnected from rationality; it's eaxctly like the witch
    scene in Monty Python and The Holy Grail.

    Witches burn, and wood also burns proving that witches are made of wood;
    wood floats; a duck also floats so it must be made of wood; so if the
    woman weighs as much as a duck, she must be witch.

    This is computer science according to olcott:

    1. The standard halting problem stupidly forgets to restrict
    decider inputs to finite machine restrictions, sometimes
    requiring them to decide on their callers.

    2. D calls H, and so D is H's caller.

    3. A caller cannot be an input.

    4. But H clearly does have an input D in the expression H(D)
    and D is its caller.

    5. Since the caller cannot be an input, there must be two D's:
    the caller D and the input D.

    6. It is the caller D that is nonterminating, and the Halting Problem is
    wrongly asking about that one, rather than the input.

    7. The input D is nonterminating. (Proof: when H simulates it,
    it gets into some kind of recursive tizzy that Olcott poorly
    understands. Anyway, because of that H is correct to call its
    input nonterminating and return 0.)

    8. Deciders other than H can report 1 because D is not /their/ caller,
    and so to them, the caller D and input D are the same.
    (Proof: when olcott makes a an exact copy of H under the name H1,
    it is found that H1(D) returns 1. The only difference is that
    D calls H and not H1: D is not H1's caller, and so H1 decides the
    terminating D as required by the halting problem.)

    Problem is:

    In (1) the halting problem does not forget to restrict decider
    inputs to finite machine descriptions.

    In (7) the recursion detecting conditions olcott came up with
    and tested in the x86utm/Halt7 are bogus. They actually detect
    the emergence of simulation tower, plus have some other issues
    due to cheating with static, mutable state.

    In (8), the business with H1(D) and H(D) returning a different value has
    to do with invalid comparison of functions. H1 and H want to be the same function according to the math, but the abort test uses address
    equivalance to conclude they are not the same function.
    That test then /makes/ them be different functions.
    But because they have the same body, that speaks something to Olcott,
    through is massive confirmation bias; he takes it as evidence that
    is caller versus input hypothesis is correct.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Mon Nov 17 21:47:52 2025
    From Newsgroup: comp.ai.philosophy

    On 11/17/2025 9:10 PM, dart200 wrote:
    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string
    describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to compute >>> whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one
    unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than
    the string.

    yes i meant generally

    you also can't compute generally whether you can or cannot compute
    whether a an machine description halts or not


    The halting problem requires that halt decider
    H on input D that calls H(D) to report on behavior
    that is not the behavior that this actual input
    actually specifies.

    Turing machine deciders only compute a mapping from
    their [finite string] inputs to an accept or reject
    state on the basis that this [finite string] input
    specifies or fails to specify a semantic or syntactic
    property.

    HHH simulates DD that calls HHH(DD)
    that simulates DD that calls HHH(DD)...

    HHH1 simulates DD that calls HHH(DD)
    that returns to DD that returns to HHH1.

    The sound basis of this reasoning is the
    semantics of the C programming language.
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Mon Nov 17 22:07:05 2025
    From Newsgroup: comp.ai.philosophy

    On 11/17/2025 9:45 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string
    describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to compute >>>> whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one
    unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than
    the string.

    yes i meant generally

    But polocott means something else. He keeps insisting (without any
    rational justification) that the conventional halting problem,
    when "H" is presented with the diagonal "D" case, is asking
    "H" to decide something which is not the finite string input.

    He believes that D literally calls the same instance of H in
    the same program image, which is the only way it can be,
    and thus D is H's caller. And thus H is being asked to decide
    about its caller. But the caller is not the parameter D, but
    an activated procedure. Therefore H is being asked to decide something
    about an activated procedure and not its finite string parameter.

    *From the bottom of page 319 has been adapted to this* https://www.liarparadox.org/Peter_Linz_HP_317-320.pdf

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞, // accept state
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn // reject state

    *Keep repeating unless aborted*
    (a) Ĥ copies its input ⟨Ĥ⟩
    (b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
    (c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩

    Original Linz Turing Machine H applied to ⟨Ĥ⟩
    H.q0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* H.qy // accept state
    H.q0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* H.qn // reject state
    Would simply transition to H.qy when Ĥ ⟨Ĥ⟩ transitions to Ĥ.qn

    When H and Ĥ.embedded_H can recognize the repeating
    pattern then

    Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ specifies a different sequence
    of configurations than H ⟨Ĥ⟩ ⟨Ĥ⟩


    *That is the same thing as this in C*

    int DD()
    {
    int Halt_Status = HHH(DD);
    if (Halt_Status)
    HERE: goto HERE;
    return Halt_Status;
    }

    HHH simulates DD that calls HHH(DD)
    that simulates DD that calls HHH(DD)...

    HHH1 simulates DD that calls HHH(DD)
    that returns to DD that returns to HHH1.

    The sound basis of this reasoning is the
    semantics of the C programming language.
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Mon Nov 17 21:18:37 2025
    From Newsgroup: comp.ai.philosophy

    On 11/17/25 7:36 PM, Chris M. Thomasson wrote:
    On 11/17/2025 7:10 PM, dart200 wrote:
    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string
    describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to
    compute
    whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one
    unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than
    the string.

    yes i meant generally

    you also can't compute generally whether you can or cannot compute
    whether a an machine description halts or not


    Didn't you suggest you have a solution to the halting problem using reflection?

    yes, i was speaking to the consensus understanding in what you've quoted
    --
    a burnt out swe investigating into why our tooling doesn't involve
    basic semantic proofs like halting analysis

    please excuse my pseudo-pyscript,

    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Tue Nov 18 15:10:42 2025
    From Newsgroup: comp.ai.philosophy

    On 11/17/2025 9:18 PM, dart200 wrote:
    On 11/17/25 7:36 PM, Chris M. Thomasson wrote:
    On 11/17/2025 7:10 PM, dart200 wrote:
    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string >>>>>>> describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to
    compute
    whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one
    unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than
    the string.

    yes i meant generally

    you also can't compute generally whether you can or cannot compute
    whether a an machine description halts or not


    Didn't you suggest you have a solution to the halting problem using
    reflection?

    yes, i was speaking to the consensus understanding in what you've quoted


    Okay. Well, 100% per-path coverage is one way we can say that DD halts
    _and_ does not halt. I made the fuzzer for Olcotts DD for fun. Its NOT a solution to the halting problem. Actually, he raised some red flags in
    my mind when he tried to tell me that BASIC cannot handle recursion... Programming BASIC brings back memories of when I was a little kid.

    Actually, you should be able to mock up your reflection system. Have you
    made any headway?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mike Terry@news.dead.person.stones@darjeeling.plus.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Tue Nov 18 23:47:36 2025
    From Newsgroup: comp.ai.philosophy

    On 18/11/2025 03:10, dart200 wrote:
    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string
    describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to compute >>> whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one
    unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than
    the string.

    yes i meant generally

    you also can't compute generally whether you can or cannot compute whether a an machine description
    halts or not

    What does that mean though?

    It sounds like you're asking for a /single/ TM that given /any/ machine description D, must compute
    "whether or not D's halting is computable". [And saying no such single TM exists?]

    The problem is in the phrase within quotes. Surely that phrase means "whether or not there exists a
    TM that computes whether the given D halts or not"? If not, what does it mean?


    Mike.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 00:13:16 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-18, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
    On 18/11/2025 03:10, dart200 wrote:
    yes i meant generally

    you also can't compute generally whether you can or cannot compute whether a an machine description
    halts or not

    What does that mean though?

    It sounds like you're asking for a /single/ TM that given /any/ machine description D, must compute
    "whether or not D's halting is computable". [And saying no such single TM exists?]

    Since the halting of any machine /is/ individually computable, then that appears false. We can compute whether it is computable whether a single,
    given machine halts. We can compute that with the word "True".

    for_all (M) : is_halting_computable(M) = T

    For every machine, halting is computable --- just not by
    an algorithm that also works for all other machines since there
    is no such thing.

    Another way to look at an existential rephrasing:

    not ( some (M) : is_halting_computable(M) = F )

    It is false that there exist machines whose halting is
    individually incomputable.

    That has been specific misconception that Olcott labored under for
    many years. He showed clear signs of believing that the D template
    program is /one/ function which is not decidable by any H.

    It's not clear if he has been fully disabused of this notion,
    years of unbridled abuse notwithstanding.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Tue Nov 18 18:17:48 2025
    From Newsgroup: comp.ai.philosophy

    On 11/18/2025 5:47 PM, Mike Terry wrote:
    On 18/11/2025 03:10, dart200 wrote:
    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string
    describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to
    compute
    whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one
    unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than
    the string.

    yes i meant generally

    you also can't compute generally whether you can or cannot compute
    whether a an machine description halts or not

    What does that mean though?

    It sounds like you're asking for a /single/ TM that given /any/ machine description D, must compute "whether or not D's halting is computable". [And saying no such single TM exists?]

    The problem is in the phrase within quotes.  Surely that phrase means "whether or not there exists a TM that computes whether the given D
    halts or not"?  If not, what does it mean?


    Mike.


    typedef int (*ptr)();
    int HHH(ptr P);
    int HHH1(ptr P);

    int DD()
    {
    int Halt_Status = HHH(DD);
    if (Halt_Status)
    HERE: goto HERE;
    return Halt_Status;
    }

    int main()
    {
    HHH(DD);
    }

    HHH simulates DD that calls HHH(DD)
    that simulates DD that calls HHH(DD)...

    HHH1 simulates DD that calls HHH(DD) that
    returns to DD that returns to HHH1.

    The behavior of DD simulated by HHH1 is the
    same as the behavior of DD() executed from main.

    The sound basis of this reasoning is the
    semantics of the C programming language.

    (a) Halt deciders are required to report on the
    actual behavior that their actual input actually
    specifies.

    (b) The halting problem requires Halt deciders to
    report on other than the actual behavior that their
    actual input actually specifies making the halting
    problem incorrect.
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mike Terry@news.dead.person.stones@darjeeling.plus.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 00:57:20 2025
    From Newsgroup: comp.ai.philosophy

    On 19/11/2025 00:13, Kaz Kylheku wrote:
    On 2025-11-18, Mike Terry <news.dead.person.stones@darjeeling.plus.com> wrote:
    On 18/11/2025 03:10, dart200 wrote:
    yes i meant generally

    you also can't compute generally whether you can or cannot compute whether a an machine description
    halts or not

    What does that mean though?

    It sounds like you're asking for a /single/ TM that given /any/ machine description D, must compute
    "whether or not D's halting is computable". [And saying no such single TM exists?]

    Since the halting of any machine /is/ individually computable, then that appears false. We can compute whether it is computable whether a single, given machine halts. We can compute that with the word "True".

    for_all (M) : is_halting_computable(M) = T

    For every machine, halting is computable --- just not by
    an algorithm that also works for all other machines since there
    is no such thing.

    Another way to look at an existential rephrasing:

    not ( some (M) : is_halting_computable(M) = F )

    It is false that there exist machines whose halting is
    individually incomputable.

    Right, I realise that.

    So my question "What does that mean though?" is still to be answered (by nick). If Nick's reply is
    that it means what I described, then that will mark Nick's claim as false.

    Maybe nick meant something else though.


    That has been specific misconception that Olcott labored under for
    many years. He showed clear signs of believing that the D template
    program is /one/ function which is not decidable by any H.

    It's not clear if he has been fully disabused of this notion,
    years of unbridled abuse notwithstanding.

    Well, he still chooses wordings that invite that confusion, when he has been presented with
    alternatives that are both neutral and correct. I would say he does that, because one of his
    underlying confusions is exactly what you describe. He wants to say something that will /sound/
    reasonable to casual listeners, but also keeps his confusion "on the table".


    Mike.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Tue Nov 18 17:40:09 2025
    From Newsgroup: comp.ai.philosophy

    On 11/18/25 3:10 PM, Chris M. Thomasson wrote:
    On 11/17/2025 9:18 PM, dart200 wrote:
    On 11/17/25 7:36 PM, Chris M. Thomasson wrote:
    On 11/17/2025 7:10 PM, dart200 wrote:
    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string >>>>>>>> describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to
    compute
    whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one
    unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than >>>>> the string.

    yes i meant generally

    you also can't compute generally whether you can or cannot compute
    whether a an machine description halts or not


    Didn't you suggest you have a solution to the halting problem using
    reflection?

    yes, i was speaking to the consensus understanding in what you've quoted


    Okay. Well, 100% per-path coverage is one way we can say that DD halts
    _and_ does not halt. I made the fuzzer for Olcotts DD for fun. Its NOT a solution to the halting problem. Actually, he raised some red flags in
    my mind when he tried to tell me that BASIC cannot handle recursion...

    depends on which BASIC tho, eh?

    Programming BASIC brings back memories of when I was a little kid.

    Actually, you should be able to mock up your reflection system. Have you made any headway?

    not at all

    i'm working on the logical consistency of the theory, which is going to
    be far simpler than actual implementation

    i'm currently a bit stumped on dealing with a possible a halting paradox constructed within RTMs, using an RTM simulating a TM simulating an RTM.
    this chain similarly mechanically cuts off the required information to
    avoid a paradox, kinda like a TM alone. not fully confident it's a
    problem or not

    i may write a post on it
    --
    a burnt out swe investigating into why our tooling doesn't involve
    basic semantic proofs like halting analysis

    please excuse my pseudo-pyscript,

    ~ nick

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Tue Nov 18 19:46:53 2025
    From Newsgroup: comp.ai.philosophy

    On 11/18/2025 7:40 PM, dart200 wrote:
    On 11/18/25 3:10 PM, Chris M. Thomasson wrote:
    On 11/17/2025 9:18 PM, dart200 wrote:
    On 11/17/25 7:36 PM, Chris M. Thomasson wrote:
    On 11/17/2025 7:10 PM, dart200 wrote:
    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string >>>>>>>>> describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to >>>>>>> compute
    whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one >>>>>> unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than >>>>>> the string.

    yes i meant generally

    you also can't compute generally whether you can or cannot compute
    whether a an machine description halts or not


    Didn't you suggest you have a solution to the halting problem using
    reflection?

    yes, i was speaking to the consensus understanding in what you've quoted >>>

    Okay. Well, 100% per-path coverage is one way we can say that DD halts
    _and_ does not halt. I made the fuzzer for Olcotts DD for fun. Its NOT
    a solution to the halting problem. Actually, he raised some red flags
    in my mind when he tried to tell me that BASIC cannot handle recursion...

    depends on which BASIC tho, eh?

    Programming BASIC brings back memories of when I was a little kid.

    Actually, you should be able to mock up your reflection system. Have
    you made any headway?

    not at all

    i'm working on the logical consistency of the theory, which is going to
    be far simpler than actual implementation

    i'm currently a bit stumped on dealing with a possible a halting paradox constructed within RTMs, using an RTM simulating a TM simulating an RTM. this chain similarly mechanically cuts off the required information to
    avoid a paradox, kinda like a TM alone. not fully confident it's a
    problem or not

    i may write a post on it


    When input DD does the opposite of whatever value
    HHH determines then the HHH/DD combination is
    merely the Liar Paradox in disguise. In this
    case the correct response to to reject this input
    as semantically ill-formed.

    Can Carol correctly answer “no” to this (yes/no) question?
    E C R Hehner. Objective and Subjective Specifications
    WST Workshop on Termination, Oxford. 2018 July 18.
    See https://www.cs.toronto.edu/~hehner/OSS.pdf
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 17:17:42 2025
    From Newsgroup: comp.ai.philosophy

    On 19/11/2025 01:40, dart200 wrote:

    i'm currently a bit stumped on dealing with a possible a halting paradox constructed within RTMs, using an RTM simulating a TM simulating an RTM.
    this chain similarly mechanically cuts off the required information to
    avoid a paradox, kinda like a TM alone. not fully confident it's a
    problem or not

    It sounds equivalent to problems of security wrt. leaky sandboxes.
    Interesting stuff. Maybe valuable too.


    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 17:41:56 2025
    From Newsgroup: comp.ai.philosophy

    On 18/11/2025 03:45, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string
    describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to compute >>>> whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one
    unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than
    the string.

    yes i meant generally

    But polocott means something else. He keeps insisting (without any
    rational justification) that the conventional halting problem,
    when "H" is presented with the diagonal "D" case, is asking
    "H" to decide something which is not the finite string input.

    Some things to consider in evaluating Olcott's inability to analyse his
    doubts:

    (1) The halting problem *as described to him*
    (2)
    (i) If H(P) is the recursion, then the nonobviousness of the
    constructibility of a copy of the original program text P from a
    contractum of the program text
    (ii) The nonobviousness or impermissibility (presumed or otherwise) of
    the equality H(P') = H(P) where P' is some contractum of P.


    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 10:26:18 2025
    From Newsgroup: comp.ai.philosophy

    On 11/18/25 3:47 PM, Mike Terry wrote:
    On 18/11/2025 03:10, dart200 wrote:
    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string
    describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to
    compute
    whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one
    unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than
    the string.

    yes i meant generally

    you also can't compute generally whether you can or cannot compute
    whether a an machine description halts or not

    What does that mean though?

    It sounds like you're asking for a /single/ TM that given /any/ machine description D, must compute "whether or not D's halting is computable". [And saying no such single TM exists?]

    yes, it takes /single/ machine input a outputs whether /any/ other
    machine could compute the input machine's halting semantics.

    The problem is in the phrase within quotes.  Surely that phrase means "whether or not there exists a TM that computes whether the given D
    halts or not"?  If not, what does it mean?


    i think you've got it


    Mike.

    --
    a burnt out swe investigating into why our tooling doesn't involve
    basic semantic proofs like halting analysis

    please excuse my pseudo-pyscript,

    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 12:37:35 2025
    From Newsgroup: comp.ai.philosophy

    On 11/19/2025 11:41 AM, Tristan Wibberley wrote:
    On 18/11/2025 03:45, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string >>>>>>> describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to compute >>>>> whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one
    unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than
    the string.

    yes i meant generally

    But polocott means something else. He keeps insisting (without any
    rational justification) that the conventional halting problem,
    when "H" is presented with the diagonal "D" case, is asking
    "H" to decide something which is not the finite string input.

    Some things to consider in evaluating Olcott's inability to analyse his doubts:

    (1) The halting problem *as described to him*
    (2)
    (i) If H(P) is the recursion, then the nonobviousness of the constructibility of a copy of the original program text P from a
    contractum of the program text
    (ii) The nonobviousness or impermissibility (presumed or otherwise) of
    the equality H(P') = H(P) where P' is some contractum of P.


    The input to HHH(DD) does not behave the
    same as DD called from main:
    HHH simulates DD that calls HHH(DD)
    that simulates DD that calls HHH(DD)...

    The input to HHH1(DD) behaves the same
    as DD called from main:
    HHH1 simulates DD that calls HHH(DD) that
    returns to DD that returns to HHH1.

    The halting problem requires HHH to report
    on behavior other than the behavior encoded
    in HHH/DD.


    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 10:43:08 2025
    From Newsgroup: comp.ai.philosophy

    On 11/19/25 9:17 AM, Tristan Wibberley wrote:
    On 19/11/2025 01:40, dart200 wrote:

    i'm currently a bit stumped on dealing with a possible a halting paradox
    constructed within RTMs, using an RTM simulating a TM simulating an RTM.
    this chain similarly mechanically cuts off the required information to
    avoid a paradox, kinda like a TM alone. not fully confident it's a
    problem or not

    It sounds equivalent to problems of security wrt. leaky sandboxes. Interesting stuff. Maybe valuable too.

    i'm actually pretty distraught over this rn. who's gunna care if all i
    did was reframe the halting problem?? i'm stuck on quite literally a
    liar's paradox, with emphasis on a clear lie taking place

    specifically: the simulated TM simulating an RTM is lying about the true runtime context, bamboozling reflection's ability to prevent paradox construction

    und = () -> {
    simTM {
    if ( simRTM{halts(und)} )
    loop_forever()
    else
    return
    }
    }

    i don't actually know if this is valid tho. within RTMs, when a simRTM simulates a RELFECT operation, it also must call REFLECT to get the
    runtime context from whatever is running it. since TMs don't support
    this, the simRTM run within simTM cannot do this, and therefore it's not technically a per-specification RTM simulation. it's actually a hackjob
    lying about the true runtime context

    but i'm still not sure what's supposed to happen. maybe there's a way to reckon about this, maybe i just blew that damned incompleteness hole in
    my reflective turing machine theory cause of fucking liars

    also, who tf would publish any of this? you can't get "maybe
    interesting" ideas into a journal, that's not good enough for the 100% always-right rat race used to justify the meritocratic oppression
    mainstream economic ideology runs off of

    syntax note: curly bases are used to specify an unnamed lambda function
    as a function parameter (kotlin inspired)

    simRTM{halts(und)} is equivalent to simRTM(() -> halts(und))
    --
    a burnt out swe investigating into why our tooling doesn't involve
    basic semantic proofs like halting analysis

    please excuse my pseudo-pyscript,

    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 18:48:44 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/19/25 9:17 AM, Tristan Wibberley wrote:
    On 19/11/2025 01:40, dart200 wrote:

    i'm currently a bit stumped on dealing with a possible a halting paradox >>> constructed within RTMs, using an RTM simulating a TM simulating an RTM. >>> this chain similarly mechanically cuts off the required information to
    avoid a paradox, kinda like a TM alone. not fully confident it's a
    problem or not

    It sounds equivalent to problems of security wrt. leaky sandboxes.
    Interesting stuff. Maybe valuable too.

    i'm actually pretty distraught over this rn. who's gunna care if all i
    did was reframe the halting problem?? i'm stuck on quite literally a
    liar's paradox, with emphasis on a clear lie taking place

    specifically: the simulated TM simulating an RTM is lying about the true runtime context, bamboozling reflection's ability to prevent paradox construction

    Don't you have mechanisms to prevent the procedures from being
    able to manipulate the environment?

    und = () -> {
    simTM {
    if ( simRTM{halts(und)} )
    loop_forever()
    else
    return
    }
    }

    So in ths above construction, simTM creates a contour around a new
    context, which is empty?

    If so, am I wrong in remembering that I might have mentioned something
    like this, and didn't you say you would just ban such constructs
    from the sandbox?
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 11:19:13 2025
    From Newsgroup: comp.ai.philosophy

    On 11/19/25 10:48 AM, Kaz Kylheku wrote:
    On 2025-11-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/19/25 9:17 AM, Tristan Wibberley wrote:
    On 19/11/2025 01:40, dart200 wrote:

    i'm currently a bit stumped on dealing with a possible a halting paradox >>>> constructed within RTMs, using an RTM simulating a TM simulating an RTM. >>>> this chain similarly mechanically cuts off the required information to >>>> avoid a paradox, kinda like a TM alone. not fully confident it's a
    problem or not

    It sounds equivalent to problems of security wrt. leaky sandboxes.
    Interesting stuff. Maybe valuable too.

    i'm actually pretty distraught over this rn. who's gunna care if all i
    did was reframe the halting problem?? i'm stuck on quite literally a
    liar's paradox, with emphasis on a clear lie taking place

    specifically: the simulated TM simulating an RTM is lying about the true
    runtime context, bamboozling reflection's ability to prevent paradox
    construction

    Don't you have mechanisms to prevent the procedures from being
    able to manipulate the environment?

    und = () -> {
    simTM {
    if ( simRTM{halts(und)} )
    loop_forever()
    else
    return
    }
    }

    So in ths above construction, simTM creates a contour around a new
    context, which is empty?

    essentially yes. simTM does not support REFLECT, so simulations within
    the simulation have no method of accessing the runtime context, creating
    the illusion (or lie) of an null context


    If so, am I wrong in remembering that I might have mentioned something
    like this, and didn't you say you would just ban such constructs
    from the sandbox?


    you did indeed mention something like this, and i did indeed wish to ban those, but now that i understand how the specific mechanisms of my ban
    would work, idk if i can

    maybe there still is some mechanism i haven't thot of,

    or perhaps it can be proven that nothing uniquely computable exists in
    that subset of computations- that all computations run within simTM
    either can be computed by some algo without simTM, or are undecidable, therefore partitioning off the problematic section of general computing
    (which we also can't do rn either)
    --
    a burnt out swe investigating into why our tooling doesn't involve
    basic semantic proofs like halting analysis

    please excuse my pseudo-pyscript,

    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mike Terry@news.dead.person.stones@darjeeling.plus.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 19:42:10 2025
    From Newsgroup: comp.ai.philosophy

    On 19/11/2025 18:26, dart200 wrote:
    On 11/18/25 3:47 PM, Mike Terry wrote:
    On 18/11/2025 03:10, dart200 wrote:
    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string >>>>>>> describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to compute >>>>> whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one
    unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than
    the string.

    yes i meant generally

    you also can't compute generally whether you can or cannot compute whether a an machine
    description halts or not

    What does that mean though?

    It sounds like you're asking for a /single/ TM that given /any/ machine description D, must
    compute "whether or not D's halting is computable". [And saying no such single TM exists?]

    yes, it takes /single/ machine input a outputs whether /any/ other machine could compute the input
    machine's halting semantics.

    Have you read Kaz's response to my post? That explains why for any given machine, there is always
    some other machine that computes the halting status of that machine. Basically there are only two
    possible behaviours: halts or neverhalts. We just need two machines H1 and H0 that straight away
    return halts/neverhalts respectively. For any machine M, either H1 or H0 correctly computes M's
    halting status, so assuming normal terminology use, any single M is decideable. (And by extension,
    halting for any finite set of machines is decidable.)

    Sometimes people attempt to come up with reasons why H1 and H0 don't count. That was certainly PO's
    response, and his explanation was that H1 and H0 are disqualified as halt deciders because they
    "aren't even trying". He has never explained what it means for a TM to "not really try" to do
    something; of course, TMs are just what they are, without "trying" to do anything. We're not
    talking about an olympic sport where there are points awarded for effort/artistic interpretation
    etc., it's all just "whether they work".

    [Also, people like PO often confuse what the halting problem says, believing that it is implying
    that there is some machine M which "cannot be decided". That's a misunderstanding...]

    Anyhow, all of that is completely missing the point of the halting problem - that is to find /one/
    machine H that can decide /any/ input M_desc. Finding a machine that can decide one specific input
    is trivial.


    Mike.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 19:47:50 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/19/25 10:48 AM, Kaz Kylheku wrote:
    On 2025-11-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/19/25 9:17 AM, Tristan Wibberley wrote:
    On 19/11/2025 01:40, dart200 wrote:

    i'm currently a bit stumped on dealing with a possible a halting paradox >>>>> constructed within RTMs, using an RTM simulating a TM simulating an RTM. >>>>> this chain similarly mechanically cuts off the required information to >>>>> avoid a paradox, kinda like a TM alone. not fully confident it's a
    problem or not

    It sounds equivalent to problems of security wrt. leaky sandboxes.
    Interesting stuff. Maybe valuable too.

    i'm actually pretty distraught over this rn. who's gunna care if all i
    did was reframe the halting problem?? i'm stuck on quite literally a
    liar's paradox, with emphasis on a clear lie taking place

    specifically: the simulated TM simulating an RTM is lying about the true >>> runtime context, bamboozling reflection's ability to prevent paradox
    construction

    Don't you have mechanisms to prevent the procedures from being
    able to manipulate the environment?

    und = () -> {
    simTM {
    if ( simRTM{halts(und)} )
    loop_forever()
    else
    return
    }
    }

    So in ths above construction, simTM creates a contour around a new
    context, which is empty?

    essentially yes. simTM does not support REFLECT, so simulations within
    the simulation have no method of accessing the runtime context, creating
    the illusion (or lie) of an null context

    In a computational system with context, functions do not have a halting
    status that depends only on their arguments, but on their arguments plus context.

    Therefore, the question "does this function halt when applied to these arguments" isn't right in this domain; it needs to be "does this function,
    in a context with such and such content, and these arguments, halt".

    Then, to have a diagonal case whch opposes the decider, that diagonal
    case has to be sure to be using that same context, otherwise it
    is not diagonal; i.e.

    in_context C { // <-- but but construct is banned!

    // D, in context C "behaves opposite" to the decision
    // produced by H regarding D in context C:

    D() {
    if (H(D, C))
    loop();
    }
    }

    Or:

    D() {
    let C = getParentContext(); // likewise banned?

    if (H(D, C))
    loop();
    }
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 14:18:30 2025
    From Newsgroup: comp.ai.philosophy

    On 11/19/2025 12:43 PM, dart200 wrote:
    On 11/19/25 9:17 AM, Tristan Wibberley wrote:
    On 19/11/2025 01:40, dart200 wrote:

    i'm currently a bit stumped on dealing with a possible a halting paradox >>> constructed within RTMs, using an RTM simulating a TM simulating an RTM. >>> this chain similarly mechanically cuts off the required information to
    avoid a paradox, kinda like a TM alone. not fully confident it's a
    problem or not

    It sounds equivalent to problems of security wrt. leaky sandboxes.
    Interesting stuff. Maybe valuable too.

    i'm actually pretty distraught over this rn. who's gunna care if all i
    did was reframe the halting problem?? i'm stuck on quite literally a
    liar's paradox, with emphasis on a clear lie taking place


    The current halting problem where a halt decider H
    is required to correctly report on the halt status
    of an input D that does the opposite of whatever
    value that H reports is the Liar Paradox for
    this specific H/D pair.

    Can Carol correctly answer “no” to this (yes/no) question?
    E C R Hehner. Objective and Subjective Specifications
    WST Workshop on Termination, Oxford. 2018 July 18.
    See https://www.cs.toronto.edu/~hehner/OSS.pdf

    There are several ways to address this, all of them
    are that the halting problem exactly as defined is
    incorrect in one way or another.

    specifically: the simulated TM simulating an RTM is lying about the true runtime context, bamboozling reflection's ability to prevent paradox construction

    und = () -> {
      simTM {
        if ( simRTM{halts(und)} )
          loop_forever()
        else
          return
      }
    }

    i don't actually know if this is valid tho. within RTMs, when a simRTM simulates a RELFECT operation, it also must call REFLECT to get the
    runtime context from whatever is running it. since TMs don't support
    this, the simRTM run within simTM cannot do this, and therefore it's not technically a per-specification RTM simulation. it's actually a hackjob lying about the true runtime context

    but i'm still not sure what's supposed to happen. maybe there's a way to reckon about this, maybe i just blew that damned incompleteness hole in
    my reflective turing machine theory cause of fucking liars

    also, who tf would publish any of this? you can't get "maybe
    interesting" ideas into a journal, that's not good enough for the 100% always-right rat race used to justify the meritocratic oppression
    mainstream economic ideology runs off of

    syntax note: curly bases are used to specify an unnamed lambda function
    as a function parameter (kotlin inspired)

    simRTM{halts(und)} is equivalent to simRTM(() -> halts(und))

    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 14:45:12 2025
    From Newsgroup: comp.ai.philosophy

    On 11/19/2025 1:42 PM, Mike Terry wrote:
    On 19/11/2025 18:26, dart200 wrote:
    On 11/18/25 3:47 PM, Mike Terry wrote:
    On 18/11/2025 03:10, dart200 wrote:
    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string >>>>>>>> describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to
    compute
    whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one
    unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than >>>>> the string.

    yes i meant generally

    you also can't compute generally whether you can or cannot compute
    whether a an machine description halts or not

    What does that mean though?

    It sounds like you're asking for a /single/ TM that given /any/
    machine description D, must compute "whether or not D's halting is
    computable". [And saying no such single TM exists?]

    yes, it takes /single/ machine input a outputs whether /any/ other
    machine could compute the input machine's halting semantics.

    Have you read Kaz's response to my post?  That explains why for any
    given machine, there is always some other machine that computes the
    halting status of that machine.  Basically there are only two possible behaviours: halts or neverhalts.

    Can Carol correctly answer “no” to this (yes/no) question?
    E C R Hehner. Objective and Subjective Specifications
    WST Workshop on Termination, Oxford. 2018 July 18.
    See https://www.cs.toronto.edu/~hehner/OSS.pdf

    For decider H and input D pair where D does
    the opposite of whatever H reports we only
    have the Liar Paradox. The Liar Paradox
    is semantically unsound.

      We just need two machines H1 and H0
    that straight away return halts/neverhalts respectively.  For any
    machine M, either H1 or H0 correctly computes M's halting status, so assuming normal terminology use, any single M is decideable.  (And by extension, halting for any finite set of machines is decidable.)

    Sometimes people attempt to come up with reasons why H1 and H0 don't count.  That was certainly PO's response, and his explanation was that
    H1 and H0 are disqualified as halt deciders because they "aren't even trying".  He has never explained what it means for a TM to "not really
    try" to do something; of course, TMs are just what they are, without "trying" to do anything.  We're not talking about an olympic sport where there are points awarded for effort/artistic interpretation etc., it's
    all just "whether they work".

    [Also, people like PO often confuse what the halting problem says,
    believing that it is implying that there is some machine M which "cannot
    be decided".  That's a misunderstanding...]

    Anyhow, all of that is completely missing the point of the halting
    problem - that is to find /one/ machine H that can decide /any/ input M_desc.  Finding a machine that can decide one specific input is trivial.


    Mike.

    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 14:49:50 2025
    From Newsgroup: comp.ai.philosophy

    On 11/19/2025 1:47 PM, Kaz Kylheku wrote:
    On 2025-11-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/19/25 10:48 AM, Kaz Kylheku wrote:
    On 2025-11-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/19/25 9:17 AM, Tristan Wibberley wrote:
    On 19/11/2025 01:40, dart200 wrote:

    i'm currently a bit stumped on dealing with a possible a halting paradox >>>>>> constructed within RTMs, using an RTM simulating a TM simulating an RTM. >>>>>> this chain similarly mechanically cuts off the required information to >>>>>> avoid a paradox, kinda like a TM alone. not fully confident it's a >>>>>> problem or not

    It sounds equivalent to problems of security wrt. leaky sandboxes.
    Interesting stuff. Maybe valuable too.

    i'm actually pretty distraught over this rn. who's gunna care if all i >>>> did was reframe the halting problem?? i'm stuck on quite literally a
    liar's paradox, with emphasis on a clear lie taking place

    specifically: the simulated TM simulating an RTM is lying about the true >>>> runtime context, bamboozling reflection's ability to prevent paradox
    construction

    Don't you have mechanisms to prevent the procedures from being
    able to manipulate the environment?

    und = () -> {
    simTM {
    if ( simRTM{halts(und)} )
    loop_forever()
    else
    return
    }
    }

    So in ths above construction, simTM creates a contour around a new
    context, which is empty?

    essentially yes. simTM does not support REFLECT, so simulations within
    the simulation have no method of accessing the runtime context, creating
    the illusion (or lie) of an null context

    In a computational system with context, functions do not have a halting status that depends only on their arguments, but on their arguments plus context.

    Therefore, the question "does this function halt when applied to these arguments" isn't right in this domain; it needs to be "does this function,
    in a context with such and such content, and these arguments, halt".

    Then, to have a diagonal case whch opposes the decider, that diagonal
    case has to be sure to be using that same context, otherwise it
    is not diagonal; i.e.

    in_context C { // <-- but but construct is banned!

    // D, in context C "behaves opposite" to the decision
    // produced by H regarding D in context C:

    D() {
    if (H(D, C))
    loop();
    }
    }

    Or:

    D() {
    let C = getParentContext(); // likewise banned?

    if (H(D, C))
    loop();
    }




    Looks interesting. I adapted AWK to be very helpful
    for maintenance programming of million line software systems.

    https://stackoverflow.com/search?q=is%3aanswer%20TXR
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 12:51:02 2025
    From Newsgroup: comp.ai.philosophy

    On 11/19/2025 11:42 AM, Mike Terry wrote:
    On 19/11/2025 18:26, dart200 wrote:
    On 11/18/25 3:47 PM, Mike Terry wrote:
    On 18/11/2025 03:10, dart200 wrote:
    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string >>>>>>>> describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to
    compute
    whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one
    unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than >>>>> the string.

    yes i meant generally

    you also can't compute generally whether you can or cannot compute
    whether a an machine description halts or not

    What does that mean though?

    It sounds like you're asking for a /single/ TM that given /any/
    machine description D, must compute "whether or not D's halting is
    computable". [And saying no such single TM exists?]

    yes, it takes /single/ machine input a outputs whether /any/ other
    machine could compute the input machine's halting semantics.

    Have you read Kaz's response to my post?  That explains why for any
    given machine, there is always some other machine that computes the
    halting status of that machine.  Basically there are only two possible behaviours: halts or neverhalts.  We just need two machines H1 and H0
    that straight away return halts/neverhalts respectively.  For any
    machine M, either H1 or H0 correctly computes M's halting status, so assuming normal terminology use, any single M is decideable.  (And by extension, halting for any finite set of machines is decidable.)

    Sometimes people attempt to come up with reasons why H1 and H0 don't count.  That was certainly PO's response, and his explanation was that
    H1 and H0 are disqualified as halt deciders because they "aren't even trying".  He has never explained what it means for a TM to "not really
    try" to do something; of course, TMs are just what they are, without "trying" to do anything.  We're not talking about an olympic sport where there are points awarded for effort/artistic interpretation etc., it's
    all just "whether they work".

    [Also, people like PO often confuse what the halting problem says,
    believing that it is implying that there is some machine M which "cannot
    be decided".  That's a misunderstanding...]

    Anyhow, all of that is completely missing the point of the halting
    problem - that is to find /one/ machine H that can decide /any/ input M_desc.  Finding a machine that can decide one specific input is trivial.

    Right. Any machine can have a specialized decider for it. However, there
    is no _single_ decider for all machines...

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 20:55:43 2025
    From Newsgroup: comp.ai.philosophy

    On 19/11/2025 18:37, olcott wrote:
    The halting problem requires HHH to report
    on behavior other than the behavior encoded
    in HHH/DD.

    Is the Halts property the same regardless?

    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 13:03:11 2025
    From Newsgroup: comp.ai.philosophy

    On 11/18/2025 5:40 PM, dart200 wrote:
    On 11/18/25 3:10 PM, Chris M. Thomasson wrote:
    On 11/17/2025 9:18 PM, dart200 wrote:
    On 11/17/25 7:36 PM, Chris M. Thomasson wrote:
    On 11/17/2025 7:10 PM, dart200 wrote:
    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string >>>>>>>>> describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to >>>>>>> compute
    whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one >>>>>> unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than >>>>>> the string.

    yes i meant generally

    you also can't compute generally whether you can or cannot compute
    whether a an machine description halts or not


    Didn't you suggest you have a solution to the halting problem using
    reflection?

    yes, i was speaking to the consensus understanding in what you've quoted >>>

    Okay. Well, 100% per-path coverage is one way we can say that DD halts
    _and_ does not halt. I made the fuzzer for Olcotts DD for fun. Its NOT
    a solution to the halting problem. Actually, he raised some red flags
    in my mind when he tried to tell me that BASIC cannot handle recursion...

    depends on which BASIC tho, eh?

    Touche! :^) Actually when I had some free time and nothing else to do.
    Humm... Well, for Olcott, thought to myself, let me show him a way to
    create a recursive stack in say, AppleSoft BASIC:

    https://pastebin.com/raw/Effeg8cK
    (raw text, no pastebin ad infested garbage)

    It renders a von Koch fractal from an initial line segment. The manual
    stack is there, waiting for a hacker to use it for other things. I
    thought Olcott might like it for some reason.


    Programming BASIC brings back memories of when I was a little kid.

    Actually, you should be able to mock up your reflection system. Have
    you made any headway?

    not at all

    i'm working on the logical consistency of the theory, which is going to
    be far simpler than actual implementation

    Fair enough.


    i'm currently a bit stumped on dealing with a possible a halting paradox constructed within RTMs, using an RTM simulating a TM simulating an RTM. this chain similarly mechanically cuts off the required information to
    avoid a paradox, kinda like a TM alone. not fully confident it's a
    problem or not

    i may write a post on it


    :^)
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 15:05:56 2025
    From Newsgroup: comp.ai.philosophy

    On 11/19/2025 2:55 PM, Tristan Wibberley wrote:
    On 19/11/2025 18:37, olcott wrote:
    The halting problem requires HHH to report
    on behavior other than the behavior encoded
    in HHH/DD.

    Is the Halts property the same regardless?


    int DD()
    {
    int Halt_Status = HHH(DD);
    if (Halt_Status)
    HERE: goto HERE;
    return Halt_Status;
    }

    *No it is not the same. Here is how it varies*

    HHH simulates DD that calls HHH(DD)
    that simulates DD that calls HHH(DD)...

    HHH1 simulates DD that calls HHH(DD) that
    returns to DD that returns to HHH1.

    The behavior of DD simulated by HHH1 is the
    same as the behavior of DD() executed from main.

    The sound basis of this reasoning is the
    semantics of the C programming language.


    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 21:41:45 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-19, olcott <polcott333@gmail.com> wrote:
    The sound basis of this reasoning is the
    semantics of the C programming language.

    ... and, note, NOT of the C-compiled-to-x86 language which, after years
    of Olcott's insistence that everyone painstakingly analyze his x86
    traces (or else accept that they have no argument) all of a sudden, it
    is the case now that "essentially nobody" understands x86, presumably
    including Olcott himself.

    This is because in the C-to-x86 project, it was shown that
    DD simulated by HHH and decided as 0, but left behind a continuable
    simulation that can be stepped further toward halting.

    Well, that /cannot/ be right! So let's dodge it by declaring that
    there is something wrong, and nobody understands why due to x86
    being hard, and disavow the whole thng ... now it's all about the
    (higher level, not compiled) semantics of the C programming language.

    Conveniently, no Olcott project for simulating wth pure C semantics
    exists that anyone can download and work with; it's just handwavy talk.

    However, I showed a detailed manual trace of a simple test case showing
    that when a H decides to abort an interpretation of D after three steps
    and return 0, that interpretation can be resumed and shown to terminate;
    I showed the detailed traces of D traced by H (down to a second
    simulation level starting up), as well as the completion of the
    abandoned simulation that can easily be carried out by the framework.

    That foreshadows exactly what will happen in the unlikely event
    Olcott gets his ducks lined up and actually cobs together a C
    interpretation project capable of hosting a simulation tower.

    If he publishes the code, someone will come along and implement
    the continuaton of abandoned simulatons.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Jeff Barnett@jbb@notatt.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 16:04:33 2025
    From Newsgroup: comp.ai.philosophy

    On 11/19/2025 12:42 PM, Mike Terry wrote:
    On 19/11/2025 18:26, dart200 wrote:
    On 11/18/25 3:47 PM, Mike Terry wrote:
    On 18/11/2025 03:10, dart200 wrote:
    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string >>>>>>>> describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to
    compute
    whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one
    unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than >>>>> the string.

    yes i meant generally

    you also can't compute generally whether you can or cannot compute
    whether a an machine description halts or not

    What does that mean though?

    It sounds like you're asking for a /single/ TM that given /any/
    machine description D, must compute "whether or not D's halting is
    computable". [And saying no such single TM exists?]

    yes, it takes /single/ machine input a outputs whether /any/ other
    machine could compute the input machine's halting semantics.

    Have you read Kaz's response to my post?  That explains why for any
    given machine, there is always some other machine that computes the
    halting status of that machine.  Basically there are only two possible behaviours: halts or neverhalts.  We just need two machines H1 and H0
    that straight away return halts/neverhalts respectively.  For any
    machine M, either H1 or H0 correctly computes M's halting status, so assuming normal terminology use, any single M is decideable.  (And by extension, halting for any finite set of machines is decidable.)

    Sometimes people attempt to come up with reasons why H1 and H0 don't count.  That was certainly PO's response, and his explanation was that
    H1 and H0 are disqualified as halt deciders because they "aren't even trying".  He has never explained what it means for a TM to "not really
    try" to do something; of course, TMs are just what they are, without "trying" to do anything.  We're not talking about an olympic sport where there are points awarded for effort/artistic interpretation etc., it's
    all just "whether they work".

    They don't count as *deciders* plain and simple because a *decider* must decide correctly on all possible inputs. Even a partial decider must,
    for all possible inputs, return "yes", "no", or "don't know" and must be correct when returning one of the first two. So that any machine that
    returns the same value for all inputs, is a decider in a domain where
    the onto range contains one and only one value, e.g., a halt decider
    that decides halting status for all possible non-halting (TM, data)
    input - not very interesting. Neither is the example of a decider that
    returns "don't know" for all inputs. (Just to state the obvious: when something is said to return a value halting of that something is entailed.)
    [Also, people like PO often confuse what the halting problem says,
    believing that it is implying that there is some machine M which "cannot
    be decided".  That's a misunderstanding...]

    Anyhow, all of that is completely missing the point of the halting
    problem - that is to find /one/ machine H that can decide /any/ input M_desc.  Finding a machine that can decide one specific input is trivial.--
    Jeff Barnett

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 17:43:09 2025
    From Newsgroup: comp.ai.philosophy

    On 11/19/2025 5:04 PM, Jeff Barnett wrote:
    On 11/19/2025 12:42 PM, Mike Terry wrote:
    On 19/11/2025 18:26, dart200 wrote:
    On 11/18/25 3:47 PM, Mike Terry wrote:
    On 18/11/2025 03:10, dart200 wrote:
    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string >>>>>>>>> describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to >>>>>>> compute
    whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one >>>>>> unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than >>>>>> the string.

    yes i meant generally

    you also can't compute generally whether you can or cannot compute
    whether a an machine description halts or not

    What does that mean though?

    It sounds like you're asking for a /single/ TM that given /any/
    machine description D, must compute "whether or not D's halting is
    computable". [And saying no such single TM exists?]

    yes, it takes /single/ machine input a outputs whether /any/ other
    machine could compute the input machine's halting semantics.

    Have you read Kaz's response to my post?  That explains why for any
    given machine, there is always some other machine that computes the
    halting status of that machine.  Basically there are only two possible
    behaviours: halts or neverhalts.  We just need two machines H1 and H0
    that straight away return halts/neverhalts respectively.  For any
    machine M, either H1 or H0 correctly computes M's halting status, so
    assuming normal terminology use, any single M is decideable.  (And by
    extension, halting for any finite set of machines is decidable.)

    Sometimes people attempt to come up with reasons why H1 and H0 don't
    count.  That was certainly PO's response, and his explanation was that
    H1 and H0 are disqualified as halt deciders because they "aren't even
    trying".  He has never explained what it means for a TM to "not really
    try" to do something; of course, TMs are just what they are, without
    "trying" to do anything.  We're not talking about an olympic sport
    where there are points awarded for effort/artistic interpretation
    etc., it's all just "whether they work".

    They don't count as *deciders* plain and simple because a *decider* must decide correctly on all possible inputs. Even a partial decider must,
    for all possible inputs, return "yes", "no", or "don't know" and must be correct when returning one of the first two. So that any machine that returns the same value for all inputs, is a decider in a domain where
    the onto range contains one and only one value, e.g., a halt decider
    that decides halting status for all possible non-halting (TM, data)
    input - not very interesting. Neither is the example of a decider that returns "don't know" for all inputs. (Just to state the obvious: when something is said to return a value halting of that something is entailed.)

    Yes that is technically correct, yet the term partial decider
    totally befuddles newcomers. I switched to termination analyzers
    that are supposed to be correct for all program/input pairs
    which is made much simpler for programs having no inputs.

    [Also, people like PO often confuse what the halting problem says,
    believing that it is implying that there is some machine M which
    "cannot be decided".  That's a misunderstanding...]

    Anyhow, all of that is completely missing the point of the halting
    problem - that is to find /one/ machine H that can decide /any/ input
    M_desc.  Finding a machine that can decide one specific input is
    trivial.--
    Jeff Barnett

    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mike Terry@news.dead.person.stones@darjeeling.plus.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 20 00:04:27 2025
    From Newsgroup: comp.ai.philosophy

    On 19/11/2025 23:04, Jeff Barnett wrote:
    On 11/19/2025 12:42 PM, Mike Terry wrote:
    On 19/11/2025 18:26, dart200 wrote:
    On 11/18/25 3:47 PM, Mike Terry wrote:
    On 18/11/2025 03:10, dart200 wrote:
    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string >>>>>>>>> describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to compute
    whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one >>>>>> unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than >>>>>> the string.

    yes i meant generally

    you also can't compute generally whether you can or cannot compute whether a an machine
    description halts or not

    What does that mean though?

    It sounds like you're asking for a /single/ TM that given /any/ machine description D, must
    compute "whether or not D's halting is computable". [And saying no such single TM exists?]

    yes, it takes /single/ machine input a outputs whether /any/ other machine could compute the
    input machine's halting semantics.

    Have you read Kaz's response to my post?  That explains why for any given machine, there is always
    some other machine that computes the halting status of that machine.  Basically there are only two
    possible behaviours: halts or neverhalts.  We just need two machines H1 and H0 that straight away
    return halts/neverhalts respectively.  For any machine M, either H1 or H0 correctly computes M's
    halting status, so assuming normal terminology use, any single M is decideable.  (And by
    extension, halting for any finite set of machines is decidable.)

    Sometimes people attempt to come up with reasons why H1 and H0 don't count.  That was certainly
    PO's response, and his explanation was that H1 and H0 are disqualified as halt deciders because
    they "aren't even trying".  He has never explained what it means for a TM to "not really try" to
    do something; of course, TMs are just what they are, without "trying" to do anything.  We're not
    talking about an olympic sport where there are points awarded for effort/artistic interpretation
    etc., it's all just "whether they work".

    They don't count as *deciders* plain and simple because a *decider* must decide correctly on all
    possible inputs.

    A decider "for a single machine" is by definition a decider for the input domain consisting of that
    single machine-description. Behaviour for other inputs is simply irrelevent.

    If you want to consider a decider whose domain consists of all input strings, then:
    a) obviously such a machine cannot be a halt decider. The Linz proof shows this.
    b) if we want a partial decider (as you describe below), then since a single
    TM-description is effectively recognisable, we could replace my H1/H0 above
    with adjusted versions H1'/H2' that first check whether their input is
    a description of the M in question. If not they return dontknow, otherwise
    they return halts/neverhalts as T1/T0 respectively.

    But this is a separate question - we are actually considering an input domain of one element.

    Even a partial decider must, for all possible inputs, return "yes", "no", or "don't
    know" and must be correct when returning one of the first two. So that any machine that returns the
    same value for all inputs, is a decider in a domain where the onto range contains one and only one
    value, e.g., a halt decider that decides halting status for all possible non-halting (TM, data)
    input - not very interesting.

    Or a domain with just one element {M_desc}

    Neither is the example of a decider that returns "don't know" for all inputs. (Just to state the obvious: when something is said to return a value halting of that
    something is entailed.)

    No, Sorry but that's Just Wrong too. Returning "don't know" for all input is valid, but not very
    interesting I admit. PO has also said what you just said! (Not that that means it's automatically
    wrong, but it's not a good sign! :) Anyhow, my H1'/H0' above do not return "don't know" for all
    inputs.)


    Mike.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 18:08:23 2025
    From Newsgroup: comp.ai.philosophy

    On 11/19/25 11:42 AM, Mike Terry wrote:
    On 19/11/2025 18:26, dart200 wrote:
    On 11/18/25 3:47 PM, Mike Terry wrote:
    On 18/11/2025 03:10, dart200 wrote:
    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string >>>>>>>> describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to
    compute
    whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one
    unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than >>>>> the string.

    yes i meant generally

    you also can't compute generally whether you can or cannot compute
    whether a an machine description halts or not

    What does that mean though?

    It sounds like you're asking for a /single/ TM that given /any/
    machine description D, must compute "whether or not D's halting is
    computable". [And saying no such single TM exists?]

    yes, it takes /single/ machine input a outputs whether /any/ other
    machine could compute the input machine's halting semantics.

    Have you read Kaz's response to my post?  That explains why for any
    given machine, there is always some other machine that computes the
    halting status of that machine.  Basically there are only two possible behaviours: halts or neverhalts.  We just need two machines H1 and H0
    that straight away return halts/neverhalts respectively.  For any
    machine M, either H1 or H0 correctly computes M's halting status, so assuming normal terminology use, any single M is decideable.  (And by extension, halting for any finite set of machines is decidable.)

    Sometimes people attempt to come up with reasons why H1 and H0 don't count.  That was certainly PO's response, and his explanation was that
    H1 and H0 are disqualified as halt deciders because they "aren't even trying".  He has never explained what it means for a TM to "not really
    try" to do something; of course, TMs are just what they are, without "trying" to do anything.  We're not talking about an olympic sport where there are points awarded for effort/artistic interpretation etc., it's
    all just "whether they work".

    [Also, people like PO often confuse what the halting problem says,
    believing that it is implying that there is some machine M which "cannot
    be decided".  That's a misunderstanding...]

    Anyhow, all of that is completely missing the point of the halting
    problem - that is to find /one/ machine H that can decide /any/ input M_desc.  Finding a machine that can decide one specific input is trivial.


    Mike.

    mike, there's two responses to this

    a) you can construct halting paradoxes that contradicts multiple and
    possibly even infinite deciders. certainly any finite set, after which
    it becomes cat and mouse: if you define a new decider, i can add to my
    growing multi-paradox that includes it.

    homework assignment for the group: write a multi-decider paradox that confounds both H1 and H0

    b) turing's original semantic paradox ("satisfactory" circle-free vs
    circular computable number) cannot be solved by a secondary decider on
    the matter. the halting problem is fundamentally a simple form of
    semantic paradox than the "satisfactory" problem.

    afaik, other than me, no one i've read is trying to address semantic
    paradoxes is targeting turing's original form.

    i should probably write a post on that too
    --
    a burnt out swe investigating into why our tooling doesn't involve
    basic semantic proofs like halting analysis

    please excuse my pseudo-pyscript,

    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 20 02:29:53 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
    a) you can construct halting paradoxes that contradicts multiple and possibly even infinite deciders. certainly any finite set, after which

    This is not possible in general. The diagonal test case must make
    exactly one decision and then behave in a contradictory way: halt or
    not. If it interrogates as few as two deciders, it becomes intractable
    if their decisions differ: to contradict one is to agree with the other.

    If the deciders are H0(P) { return 0; } and H1(P) { return 1; } you can
    see that between the two of them, they cover the entire space: there
    cannot be a signal case whch both of these don't get right. One
    correctly decides all nonterminating cases; the other correctly decies
    all terminating cases, and every case is one or the other.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 18:49:50 2025
    From Newsgroup: comp.ai.philosophy

    On 11/19/25 6:29 PM, Kaz Kylheku wrote:
    On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
    a) you can construct halting paradoxes that contradicts multiple and
    possibly even infinite deciders. certainly any finite set, after which

    This is not possible in general. The diagonal test case must make
    exactly one decision and then behave in a contradictory way: halt or
    not. If it interrogates as few as two deciders, it becomes intractable
    if their decisions differ: to contradict one is to agree with the other.

    If the deciders are H0(P) { return 0; } and H1(P) { return 1; } you can
    see that between the two of them, they cover the entire space: there
    cannot be a signal case whch both of these don't get right. One
    correctly decides all nonterminating cases; the other correctly decies
    all terminating cases, and every case is one or the other.

    common man those deciders do not provide an /effectively computable/
    interface and you know it

    try again, it's quite simple to produce a paradox that confounds two legitimate deciders that genuinely never give a wrong answer
    --
    a burnt out swe investigating into why our tooling doesn't involve
    basic semantic proofs like halting analysis

    please excuse my pseudo-pyscript,

    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 20 02:58:35 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/19/25 6:29 PM, Kaz Kylheku wrote:
    On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
    a) you can construct halting paradoxes that contradicts multiple and
    possibly even infinite deciders. certainly any finite set, after which

    This is not possible in general. The diagonal test case must make
    exactly one decision and then behave in a contradictory way: halt or
    not. If it interrogates as few as two deciders, it becomes intractable
    if their decisions differ: to contradict one is to agree with the other.

    If the deciders are H0(P) { return 0; } and H1(P) { return 1; } you can
    see that between the two of them, they cover the entire space: there
    cannot be a signal case whch both of these don't get right. One
    correctly decides all nonterminating cases; the other correctly decies
    all terminating cases, and every case is one or the other.

    common man those deciders do not provide an /effectively computable/ interface and you know it

    try again, it's quite simple to produce a paradox that confounds two legitimate deciders that genuinely never give a wrong answer

    But we have a proof that deciders which never give a wrong answer do not
    exist.

    If halting algorithms existed

    - they would all agree with each other and thus look the same from the
    ouside and so wouldn't constitute a multi-decider aggregate.

    - it would not be /possible/ to contradict them: they never give
    a wrong answer!

    So if we want to develop diagonal cases whch contradict deciders,
    we have to accept that we are targeting imperfect, partial deciders
    (by doing so, showing them to be that way).
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 21:12:21 2025
    From Newsgroup: comp.ai.philosophy

    On 11/19/2025 3:41 PM, Kaz Kylheku wrote:
    On 2025-11-19, olcott <polcott333@gmail.com> wrote:
    The sound basis of this reasoning is the
    semantics of the C programming language.

    ... and, note,
    that you dishonestly erased most of the context
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 19:53:42 2025
    From Newsgroup: comp.ai.philosophy

    On 11/19/25 6:58 PM, Kaz Kylheku wrote:
    On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/19/25 6:29 PM, Kaz Kylheku wrote:
    On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
    a) you can construct halting paradoxes that contradicts multiple and
    possibly even infinite deciders. certainly any finite set, after which

    This is not possible in general. The diagonal test case must make
    exactly one decision and then behave in a contradictory way: halt or
    not. If it interrogates as few as two deciders, it becomes intractable
    if their decisions differ: to contradict one is to agree with the other. >>>
    If the deciders are H0(P) { return 0; } and H1(P) { return 1; } you can
    see that between the two of them, they cover the entire space: there
    cannot be a signal case whch both of these don't get right. One
    correctly decides all nonterminating cases; the other correctly decies
    all terminating cases, and every case is one or the other.

    common man those deciders do not provide an /effectively computable/
    interface and you know it

    try again, it's quite simple to produce a paradox that confounds two
    legitimate deciders that genuinely never give a wrong answer

    But we have a proof that deciders which never give a wrong answer do not exist.

    If halting algorithms existed

    - they would all agree with each other and thus look the same from the
    ouside and so wouldn't constitute a multi-decider aggregate.

    - it would not be /possible/ to contradict them: they never give
    a wrong answer!

    So if we want to develop diagonal cases whch contradict deciders,
    we have to accept that we are targeting imperfect, partial deciders
    (by doing so, showing them to be that way).

    for the sake of proof/example assume they are honest until you produce
    the paradox

    this isn't hard, it's just adding half a line of code to the original
    paradox
    --
    a burnt out swe investigating into why our tooling doesn't involve
    basic semantic proofs like halting analysis

    please excuse my pseudo-pyscript,

    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 20 04:42:48 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-20, olcott <polcott333@gmail.com> wrote:
    On 11/19/2025 3:41 PM, Kaz Kylheku wrote:
    On 2025-11-19, olcott <polcott333@gmail.com> wrote:
    The sound basis of this reasoning is the
    semantics of the C programming language.

    ... and, note,
    that you dishonestly erased most of the context

    That's just the same pseudo-code snppet you've posted
    hundreds of times.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 22:57:20 2025
    From Newsgroup: comp.ai.philosophy

    On 11/19/2025 10:42 PM, Kaz Kylheku wrote:
    On 2025-11-20, olcott <polcott333@gmail.com> wrote:
    On 11/19/2025 3:41 PM, Kaz Kylheku wrote:
    On 2025-11-19, olcott <polcott333@gmail.com> wrote:
    The sound basis of this reasoning is the
    semantics of the C programming language.

    ... and, note,
    that you dishonestly erased most of the context

    That's just the same pseudo-code snppet you've posted
    hundreds of times.


    The idea is that I will keep repeating this
    until you pay attention

    int DD()
    {
    int Halt_Status = HHH(DD);
    if (Halt_Status)
    HERE: goto HERE;
    return Halt_Status;
    }

    int main()
    {
    HHH(DD);
    }

    HHH simulates DD that calls HHH(DD)
    that simulates DD that calls HHH(DD)...

    HHH1 simulates DD that calls HHH(DD) that
    returns to DD that returns to HHH1.

    The behavior of DD simulated by HHH1 is the
    same as the behavior of DD() executed from main.

    The sound basis of this reasoning is the
    semantics of the C programming language.
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning" computable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Wed Nov 19 21:01:34 2025
    From Newsgroup: comp.ai.philosophy

    On 11/19/25 11:47 AM, Kaz Kylheku wrote:
    On 2025-11-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/19/25 10:48 AM, Kaz Kylheku wrote:
    On 2025-11-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/19/25 9:17 AM, Tristan Wibberley wrote:
    On 19/11/2025 01:40, dart200 wrote:

    i'm currently a bit stumped on dealing with a possible a halting paradox >>>>>> constructed within RTMs, using an RTM simulating a TM simulating an RTM. >>>>>> this chain similarly mechanically cuts off the required information to >>>>>> avoid a paradox, kinda like a TM alone. not fully confident it's a >>>>>> problem or not

    It sounds equivalent to problems of security wrt. leaky sandboxes.
    Interesting stuff. Maybe valuable too.

    i'm actually pretty distraught over this rn. who's gunna care if all i >>>> did was reframe the halting problem?? i'm stuck on quite literally a
    liar's paradox, with emphasis on a clear lie taking place

    specifically: the simulated TM simulating an RTM is lying about the true >>>> runtime context, bamboozling reflection's ability to prevent paradox
    construction

    Don't you have mechanisms to prevent the procedures from being
    able to manipulate the environment?

    und = () -> {
    simTM {
    if ( simRTM{halts(und)} )
    loop_forever()
    else
    return
    }
    }

    So in ths above construction, simTM creates a contour around a new
    context, which is empty?

    essentially yes. simTM does not support REFLECT, so simulations within
    the simulation have no method of accessing the runtime context, creating
    the illusion (or lie) of an null context

    In a computational system with context, functions do not have a halting status that depends only on their arguments, but on their arguments plus context.

    Therefore, the question "does this function halt when applied to these arguments" isn't right in this domain; it needs to be "does this function,
    in a context with such and such content, and these arguments, halt".

    Then, to have a diagonal case whch opposes the decider, that diagonal
    case has to be sure to be using that same context, otherwise it
    is not diagonal; i.e.

    in_context C { // <-- but but construct is banned!

    // D, in context C "behaves opposite" to the decision
    // produced by H regarding D in context C:

    D() {
    if (H(D, C))
    loop();
    }
    }

    if we can find a way to surely prevent that erasure from being
    expressible, then we can eliminate the halting paradox

    idk if that's possible anymore,

    but we may be able to isolate that paradox into a set of machines that contains nothing uniquely computable (remember for any particular
    computable number, there are an infinite machines that compute said
    number), and therefore can be safely ignored as uninteresting

    or maybe there's some mechanism i haven't thought of yet...


    Or:

    D() {
    let C = getParentContext(); // likewise banned?

    if (H(D, C))
    loop();
    }




    nothing wrong here, i think...

    passing in the context C you'd like to compute Ds halting semantics in
    regards to is fine. since H still has access to the full context, it can correctly discern where it is in the computation and respond with false
    (does not halt OR undecidable) on line "if (H(D,C))", and true anywhere
    else to that particular input

    the problem arises when you erase the context via a liar's simulation.
    it must be done via a simulation since reflection is baked into the fundamental mechanisms available to every computation via REFLECT, and
    cannot be erased other than a lying simulation.
    --
    a burnt out swe investigating into why our tooling doesn't involve
    basic semantic proofs like halting analysis

    please excuse my pseudo-pyscript,

    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 20 07:22:33 2025
    From Newsgroup: comp.ai.philosophy

    On 20/11/2025 02:29, Kaz Kylheku wrote:
    On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
    a) you can construct halting paradoxes that contradicts multiple and
    possibly even infinite deciders. certainly any finite set, after which

    This is not possible in general. The diagonal test case must make
    exactly one decision and then behave in a contradictory way: halt or
    not. If it interrogates as few as two deciders, it becomes intractable
    if their decisions differ: to contradict one is to agree with the other.

    Their decisions (evaluations) don't differ when they both exist.


    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 20 19:55:55 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/19/25 6:58 PM, Kaz Kylheku wrote:
    On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/19/25 6:29 PM, Kaz Kylheku wrote:
    On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
    a) you can construct halting paradoxes that contradicts multiple and >>>>> possibly even infinite deciders. certainly any finite set, after which >>>>
    This is not possible in general. The diagonal test case must make
    exactly one decision and then behave in a contradictory way: halt or
    not. If it interrogates as few as two deciders, it becomes intractable >>>> if their decisions differ: to contradict one is to agree with the other. >>>>
    If the deciders are H0(P) { return 0; } and H1(P) { return 1; } you can >>>> see that between the two of them, they cover the entire space: there
    cannot be a signal case whch both of these don't get right. One
    correctly decides all nonterminating cases; the other correctly decies >>>> all terminating cases, and every case is one or the other.

    common man those deciders do not provide an /effectively computable/
    interface and you know it

    try again, it's quite simple to produce a paradox that confounds two
    legitimate deciders that genuinely never give a wrong answer

    But we have a proof that deciders which never give a wrong answer do not
    exist.

    If halting algorithms existed

    - they would all agree with each other and thus look the same from the
    ouside and so wouldn't constitute a multi-decider aggregate.

    - it would not be /possible/ to contradict them: they never give
    a wrong answer!

    So if we want to develop diagonal cases whch contradict deciders,
    we have to accept that we are targeting imperfect, partial deciders
    (by doing so, showing them to be that way).

    for the sake of proof/example assume they are honest until you produce
    the paradox

    By "until", are you referring to some temporal concept? There is a time variable in the system such that a decider can be introduced, and then
    for at time, there exists no diagonal (or other) case until someone
    writes it?
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 20 12:03:05 2025
    From Newsgroup: comp.ai.philosophy

    On 11/20/25 11:55 AM, Kaz Kylheku wrote:
    On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/19/25 6:58 PM, Kaz Kylheku wrote:
    On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/19/25 6:29 PM, Kaz Kylheku wrote:
    On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
    a) you can construct halting paradoxes that contradicts multiple and >>>>>> possibly even infinite deciders. certainly any finite set, after which >>>>>
    This is not possible in general. The diagonal test case must make
    exactly one decision and then behave in a contradictory way: halt or >>>>> not. If it interrogates as few as two deciders, it becomes intractable >>>>> if their decisions differ: to contradict one is to agree with the other. >>>>>
    If the deciders are H0(P) { return 0; } and H1(P) { return 1; } you can >>>>> see that between the two of them, they cover the entire space: there >>>>> cannot be a signal case whch both of these don't get right. One
    correctly decides all nonterminating cases; the other correctly decies >>>>> all terminating cases, and every case is one or the other.

    common man those deciders do not provide an /effectively computable/
    interface and you know it

    try again, it's quite simple to produce a paradox that confounds two
    legitimate deciders that genuinely never give a wrong answer

    But we have a proof that deciders which never give a wrong answer do not >>> exist.

    If halting algorithms existed

    - they would all agree with each other and thus look the same from the
    ouside and so wouldn't constitute a multi-decider aggregate.

    - it would not be /possible/ to contradict them: they never give
    a wrong answer!

    So if we want to develop diagonal cases whch contradict deciders,
    we have to accept that we are targeting imperfect, partial deciders
    (by doing so, showing them to be that way).

    for the sake of proof/example assume they are honest until you produce
    the paradox

    By "until", are you referring to some temporal concept? There is a time variable in the system such that a decider can be introduced, and then
    for at time, there exists no diagonal (or other) case until someone
    writes it?


    assume the premise exist and show a contradiction for both deciders
    within one machine
    --
    a burnt out swe investigating into why our tooling doesn't involve
    basic semantic proofs like halting analysis

    please excuse my pseudo-pyscript,

    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 20 20:14:52 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/20/25 11:55 AM, Kaz Kylheku wrote:
    On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/19/25 6:58 PM, Kaz Kylheku wrote:
    On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/19/25 6:29 PM, Kaz Kylheku wrote:
    On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
    a) you can construct halting paradoxes that contradicts multiple and >>>>>>> possibly even infinite deciders. certainly any finite set, after which >>>>>>
    This is not possible in general. The diagonal test case must make
    exactly one decision and then behave in a contradictory way: halt or >>>>>> not. If it interrogates as few as two deciders, it becomes intractable >>>>>> if their decisions differ: to contradict one is to agree with the other. >>>>>>
    If the deciders are H0(P) { return 0; } and H1(P) { return 1; } you can >>>>>> see that between the two of them, they cover the entire space: there >>>>>> cannot be a signal case whch both of these don't get right. One
    correctly decides all nonterminating cases; the other correctly decies >>>>>> all terminating cases, and every case is one or the other.

    common man those deciders do not provide an /effectively computable/ >>>>> interface and you know it

    try again, it's quite simple to produce a paradox that confounds two >>>>> legitimate deciders that genuinely never give a wrong answer

    But we have a proof that deciders which never give a wrong answer do not >>>> exist.

    If halting algorithms existed

    - they would all agree with each other and thus look the same from the >>>> ouside and so wouldn't constitute a multi-decider aggregate.

    - it would not be /possible/ to contradict them: they never give
    a wrong answer!

    So if we want to develop diagonal cases whch contradict deciders,
    we have to accept that we are targeting imperfect, partial deciders
    (by doing so, showing them to be that way).

    for the sake of proof/example assume they are honest until you produce
    the paradox

    By "until", are you referring to some temporal concept? There is a time
    variable in the system such that a decider can be introduced, and then
    for at time, there exists no diagonal (or other) case until someone
    writes it?


    assume the premise exist and show a contradiction for both deciders
    within one machine

    If one decider says true, and the other false, for the contradicting
    case, then no can do. Infinitely looping, or terminating, contradicts
    only one of the decider, agreeing with the other. There isn't a third
    choice of behavior.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 20 20:24:10 2025
    From Newsgroup: comp.ai.philosophy

    On 20/11/2025 20:14, Kaz Kylheku wrote:
    On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/20/25 11:55 AM, Kaz Kylheku wrote:
    On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/19/25 6:58 PM, Kaz Kylheku wrote:
    On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/19/25 6:29 PM, Kaz Kylheku wrote:
    On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote: >>>>>>>> a) you can construct halting paradoxes that contradicts multiple and >>>>>>>> possibly even infinite deciders. certainly any finite set, after which >>>>>>>
    This is not possible in general. The diagonal test case must make >>>>>>> exactly one decision and then behave in a contradictory way: halt or >>>>>>> not. If it interrogates as few as two deciders, it becomes intractable >>>>>>> if their decisions differ: to contradict one is to agree with the other.

    If the deciders are H0(P) { return 0; } and H1(P) { return 1; } you can >>>>>>> see that between the two of them, they cover the entire space: there >>>>>>> cannot be a signal case whch both of these don't get right. One
    correctly decides all nonterminating cases; the other correctly decies >>>>>>> all terminating cases, and every case is one or the other.

    common man those deciders do not provide an /effectively computable/ >>>>>> interface and you know it

    try again, it's quite simple to produce a paradox that confounds two >>>>>> legitimate deciders that genuinely never give a wrong answer

    But we have a proof that deciders which never give a wrong answer do not >>>>> exist.

    If halting algorithms existed

    - they would all agree with each other and thus look the same from the >>>>> ouside and so wouldn't constitute a multi-decider aggregate.

    - it would not be /possible/ to contradict them: they never give
    a wrong answer!

    So if we want to develop diagonal cases whch contradict deciders,
    we have to accept that we are targeting imperfect, partial deciders
    (by doing so, showing them to be that way).

    for the sake of proof/example assume they are honest until you produce >>>> the paradox

    By "until", are you referring to some temporal concept? There is a time
    variable in the system such that a decider can be introduced, and then
    for at time, there exists no diagonal (or other) case until someone
    writes it?


    assume the premise exist and show a contradiction for both deciders
    within one machine

    If one decider says true, and the other false, for the contradicting
    case, then no can do. Infinitely looping, or terminating, contradicts
    only one of the decider, agreeing with the other. There isn't a third
    choice of behavior.


    oh I see, not that the thwarted deciders go different ways, but the raw deciders go different ways before thwarting!

    well, the results of the deciders are combined to form a single decider,
    if that is (const . first) it's just equal to the first decider, if it's
    (const . second) it's just equal to the second decider, if it's 'or' or
    and then it's a bit complicated. There's a whole thing about computing
    the conjunction of all deciders that approaches the universal decision
    as you approach the universal inclusion but how long does the approach
    take... well, we can make an intuitive leap based on what we know about halting.
    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 20 13:22:18 2025
    From Newsgroup: comp.ai.philosophy

    On 11/19/2025 8:57 PM, olcott wrote:
    On 11/19/2025 10:42 PM, Kaz Kylheku wrote:
    On 2025-11-20, olcott <polcott333@gmail.com> wrote:
    On 11/19/2025 3:41 PM, Kaz Kylheku wrote:
    On 2025-11-19, olcott <polcott333@gmail.com> wrote:
    The sound basis of this reasoning is the
    semantics of the C programming language.

    ... and, note,
    that you dishonestly erased most of the context

    That's just the same pseudo-code snppet you've posted
    hundreds of times.


    The idea is that I will keep repeating this
    until you pay attention

    [...]

    I don't even know if you know when you will halt?

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 20 22:10:07 2025
    From Newsgroup: comp.ai.philosophy

    On 2025-11-20, olcott <polcott333@gmail.com> wrote:
    On 11/19/2025 10:42 PM, Kaz Kylheku wrote:
    On 2025-11-20, olcott <polcott333@gmail.com> wrote:
    On 11/19/2025 3:41 PM, Kaz Kylheku wrote:
    On 2025-11-19, olcott <polcott333@gmail.com> wrote:
    The sound basis of this reasoning is the
    semantics of the C programming language.

    ... and, note,
    that you dishonestly erased most of the context

    That's just the same pseudo-code snppet you've posted
    hundreds of times.


    The idea is that I will keep repeating this
    until you pay attention

    int DD()
    {
    int Halt_Status = HHH(DD);
    if (Halt_Status)
    HERE: goto HERE;
    return Halt_Status;
    }

    int main()
    {
    HHH(DD);
    }


    I've given ths an incredible amount of attention.

    HHH simulates DD that calls HHH(DD)
    that simulates DD that calls HHH(DD)...

    If HHH(DD) returns 0, it's this;

    HHH simulates DD that calls HHH(DD)
    - that simulates DD that calls HHH(DD)...
    - that simulates DD that calls HHH(DD)...
    - but only partially, returning 0.
    - such that DD terminates.
    - but only partially, returning 0.
    - such that DD terminates.

    Adding another level:

    HHH simulates DD that calls HHH(DD)
    - that simulates DD that calls HHH(DD)...
    - that simulates DD that calls HHH(DD)...
    - that simulates DD that calls HHH(DD)...
    - that ...
    - that ...
    - that ...
    - but only partially, returning 0.
    - such that DD terminates.
    - but only partially, returning 0.
    - such that DD terminates.
    - but only partially, returning 0.
    - such that DD terminates.

    Infinite simulation tower: finite DD's.

    Since you don't grok this but I do, obviously the one who has
    paid more attention is me.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 20 14:56:01 2025
    From Newsgroup: comp.ai.philosophy

    On 11/20/2025 2:10 PM, Kaz Kylheku wrote:
    On 2025-11-20, olcott <polcott333@gmail.com> wrote:
    On 11/19/2025 10:42 PM, Kaz Kylheku wrote:
    On 2025-11-20, olcott <polcott333@gmail.com> wrote:
    On 11/19/2025 3:41 PM, Kaz Kylheku wrote:
    On 2025-11-19, olcott <polcott333@gmail.com> wrote:
    The sound basis of this reasoning is the
    semantics of the C programming language.

    ... and, note,
    that you dishonestly erased most of the context

    That's just the same pseudo-code snppet you've posted
    hundreds of times.


    The idea is that I will keep repeating this
    until you pay attention

    int DD()
    {
    int Halt_Status = HHH(DD);
    if (Halt_Status)
    HERE: goto HERE;
    return Halt_Status;
    }

    int main()
    {
    HHH(DD);
    }


    I've given ths an incredible amount of attention.

    HHH simulates DD that calls HHH(DD)
    that simulates DD that calls HHH(DD)...

    If HHH(DD) returns 0, it's this;

    HHH simulates DD that calls HHH(DD)
    - that simulates DD that calls HHH(DD)...
    - that simulates DD that calls HHH(DD)...
    - but only partially, returning 0.
    - such that DD terminates.
    - but only partially, returning 0.
    - such that DD terminates.

    Adding another level:

    HHH simulates DD that calls HHH(DD)
    - that simulates DD that calls HHH(DD)...
    - that simulates DD that calls HHH(DD)...
    - that simulates DD that calls HHH(DD)...
    - that ...
    - that ...
    - that ...
    - but only partially, returning 0.
    - such that DD terminates.
    - but only partially, returning 0.
    - such that DD terminates.
    - but only partially, returning 0.
    - such that DD terminates.

    Infinite simulation tower: finite DD's.

    Since you don't grok this but I do, obviously the one who has
    paid more attention is me.


    Agreed! :^)
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,sci.logic,comp.ai.philosophy,sci.math on Thu Nov 20 18:10:23 2025
    From Newsgroup: comp.ai.philosophy

    On 11/18/2025 3:47 PM, Mike Terry wrote:
    On 18/11/2025 03:10, dart200 wrote:
    On 11/17/25 7:07 PM, Kaz Kylheku wrote:
    On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
    On 11/17/25 4:31 PM, olcott wrote:
    On 11/17/2025 6:06 PM, dart200 wrote:
    On 11/17/25 3:35 PM, olcott wrote:
    The halting problem is requiring deciders to
    compute information that is not contained in
    their input.

    ur agreeing with turing and the halting problem:

    one cannot compute whether a machine halts or not from the string
    describing the machine


    That the halting problem limits computation
    is like this very extreme example:

    Predict who the next president of the United States
    will be entirely on the basis of √2 (square root of 2).
    That cannot be derived from the input.

    bruh, ur agreeing with the halting problem:

    one cannot take the string describing the machine, and use it to
    compute
    whether the machine described halts

    But that isn't true; you certainly can do that. Just not using one
    unified algorithm that works for absolutely all such strings.

    When it /does/ work, it's certainly not based on any input other than
    the string.

    yes i meant generally

    you also can't compute generally whether you can or cannot compute
    whether a an machine description halts or not

    What does that mean though?

    It sounds like you're asking for a /single/ TM that given /any/ machine description D, must compute "whether or not D's halting is computable". [And saying no such single TM exists?]

    The problem is in the phrase within quotes.  Surely that phrase means "whether or not there exists a TM that computes whether the given D
    halts or not"?  If not, what does it mean?



    The All is the All, take the fact that any machine can have a
    specialized decider, to infinity and beyond.... Its all? How many did it
    mis, none. Ahh, a specialized decider is just a finite instance.
    --- Synchronet 3.21a-Linux NewsLink 1.2