On 11/17/25 3:35 PM, olcott wrote:
The halting problem is requiring deciders to
compute information that is not contained in
their input.
ur agreeing with turing and the halting problem:
one cannot compute whether a machine halts or not from the string
describing the machine
That the halting problem limits computation
is like this very extreme example:
Predict who the next president of the United States
will be entirely on the basis of √2 (square root of 2).
That cannot be derived from the input.
bruh, ur agreeing with the halting problem:
one cannot take the string describing the machine, and use it to compute whether the machine described halts
But that isn't true; you certainly can do that. Just not using one
unified algorithm that works for absolutely all such strings.
When it /does/ work, it's certainly not based on any input other than
the string.
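The per-string point above can be illustrated with a toy sketch (my illustration, not any poster's code; `decider` and the two descriptions are hypothetical): for each *specific* description, some decider answers correctly using nothing but the string, even though no one decider covers all strings.

```python
# Sketch (illustrative): per-string halting decisions from the string alone.
# The "machine descriptions" here are just toy strings whose behavior is known.

def decider(desc: str) -> bool:
    """A hypothetical per-case decider for two known descriptions.
    It consults nothing but `desc` itself."""
    table = {
        "while True: pass": False,  # loops forever -> does not halt
        "x = 1 + 1":        True,   # straight-line code -> halts
    }
    return table[desc]

assert decider("x = 1 + 1") is True
assert decider("while True: pass") is False
```

Each lookup is trivially correct for its own string; the halting theorem only rules out one algorithm that is correct for *every* string.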
yes i meant generally
you also can't compute generally whether you can or cannot compute
whether a machine description halts or not
the only difference between ur claim here and the proofs is the why
But polocott means something else. He keeps insisting (without any
rational justification) that the conventional halting problem,
when "H" is presented with the diagonal "D" case, is asking
"H" to decide something which is not the finite string input.
He believes that D literally calls the same instance of H in
the same program image, which is the only way it can be,
and thus D is H's caller. And thus H is being asked to decide
about its caller. But the caller is not the parameter D, but
an activated procedure. Therefore H is being asked to decide something
about an activated procedure and not its finite string parameter.
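The conventional diagonal case being discussed can be sketched in a few lines (my illustration with hypothetical names; `h(func, arg)` stands in for any claimed total halting decider):

```python
# Sketch of the standard diagonal construction (illustrative only).

def make_d(h):
    """Build the diagonal case D against a claimed decider h(func, arg)."""
    def d(x):
        if h(d, x):         # if h claims "d(x) halts"...
            while True:     # ...then d(x) loops forever instead,
                pass
        return None         # otherwise d(x) halts immediately.
    return d

# Whatever h answers about (d, d), d does the opposite, so h is wrong on
# that one input. Example with a toy h that always answers "loops":
h = lambda func, arg: False
d = make_d(h)
d(d)   # h said "loops", but this call returns, refuting h on (d, d)
```

Note that `d` here receives only the parameter `x`; the refutation never requires `h` to decide anything beyond its finite inputs.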
Didn't you suggest you have a solution to the halting problem using reflection?
yes, i was speaking to the consensus understanding in what you've quoted
What does that mean though?
It sounds like you're asking for a /single/ TM that given /any/ machine description D, must compute "whether or not D's halting is computable". [And saying no such single TM exists?]
The problem is in the phrase within quotes. Surely that phrase means "whether or not there exists a TM that computes whether the given D
halts or not"? If not, what does it mean?
Mike.
Since the halting of any machine /is/ individually computable, that
appears false. We can compute whether it is computable whether a single,
given machine halts: we can compute that with the word "True".
for_all (M) : is_halting_computable(M) = T
For every machine, halting is computable --- just not by
an algorithm that also works for all other machines since there
is no such thing.
Another way to look at an existential rephrasing:
not ( some (M) : is_halting_computable(M) = F )
It is false that there exist machines whose halting is
individually incomputable.
That has been a specific misconception that Olcott labored under for
many years. He showed clear signs of believing that the D template
program is /one/ function which is not decidable by any H.
It's not clear if he has been fully disabused of this notion,
years of unbridled abuse notwithstanding.
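The `for_all (M) : is_halting_computable(M) = T` claim above rests on a classic nonconstructive argument, which can be sketched as follows (my illustration; the two candidate programs are hypothetical):

```python
# Sketch of the argument: for any one fixed machine M, one of the two
# constant programs below is a correct decider for M's halting, so a
# correct decider for that single machine always exists -- even though
# we may not be able to *choose* the right constant effectively.

def says_halts(_desc):    # candidate 1: always answers "halts"
    return True

def says_loops(_desc):    # candidate 2: always answers "does not halt"
    return False

# If M in fact halts, candidate 1 decides it correctly; if M in fact
# loops, candidate 2 does. Either way, is_halting_computable(M) = T.
assert says_halts("some halting machine") is True
assert says_loops("some looping machine") is False
```

The undecidability result is only about one algorithm covering all machines, not about any single machine being out of reach.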
Okay. Well, 100% per-path coverage is one way we can say that DD halts
_and_ does not halt. I made the fuzzer for Olcott's DD for fun. It's NOT
a solution to the halting problem. Actually, he raised some red flags in
my mind when he tried to tell me that BASIC cannot handle recursion...
Programming BASIC brings back memories of when I was a little kid.
Actually, you should be able to mock up your reflection system. Have you made any headway?
depends on which BASIC tho, eh?
not at all
i'm working on the logical consistency of the theory, which is going to
be far simpler than actual implementation
i'm currently a bit stumped on dealing with a possible halting paradox
constructed within RTMs, using an RTM simulating a TM simulating an RTM.
this chain similarly mechanically cuts off the required information to
avoid a paradox, kinda like a TM alone. not fully confident it's a
problem or not
i may write a post on it
Some things to consider in evaluating Olcott's inability to analyse his doubts:
(1) The halting problem *as described to him*
(2)
(i) If H(P) is the recursion, then the nonobviousness of the constructibility of a copy of the original program text P from a
contractum of the program text
(ii) The nonobviousness or impermissibility (presumed or otherwise) of
the equality H(P') = H(P) where P' is some contractum of P.
--
Tristan Wibberley
The message body is Copyright (C) 2025 Tristan Wibberley except
citations and quotations noted. All Rights Reserved except that you may,
of course, cite it academically giving credit to me, distribute it
verbatim as part of a usenet system or its archives, and use it to
promote my greatness and general superiority without misrepresentation
of my opinions other than my opinion of my greatness and general
superiority which you _may_ misrepresent. You definitely MAY NOT train
any production AI system with it but you may train experimental AI that
will only be used for evaluation of the AI methods it implements.
It sounds equivalent to problems of security wrt. leaky sandboxes. Interesting stuff. Maybe valuable too.
i'm actually pretty distraught over this rn. who's gunna care if all i
did was reframe the halting problem?? i'm stuck on quite literally a
liar's paradox, with emphasis on a clear lie taking place
specifically: the simulated TM simulating an RTM is lying about the true runtime context, bamboozling reflection's ability to prevent paradox construction
und = () -> {
    simTM {
        if ( simRTM{halts(und)} )
            loop_forever()
        else
            return
    }
}
Don't you have mechanisms to prevent the procedures from being
able to manipulate the environment?
So in this above construction, simTM creates a contour around a new
context, which is empty?
If so, am I wrong in remembering that I might have mentioned something
like this, and didn't you say you would just ban such constructs
from the sandbox?
yes, it takes a /single/ machine input and outputs whether /any/ other
machine could compute the input machine's halting semantics.
essentially yes. simTM does not support REFLECT, so simulations within
the simulation have no method of accessing the runtime context, creating
the illusion (or lie) of a null context
On 11/19/25 9:17 AM, Tristan Wibberley wrote:
On 19/11/2025 01:40, dart200 wrote:
i'm currently a bit stumped on dealing with a possible halting paradox
constructed within RTMs, using an RTM simulating a TM simulating an RTM.
this chain similarly mechanically cuts off the required information to
avoid a paradox, kinda like a TM alone. not fully confident it's a
problem or not
It sounds equivalent to problems of security wrt. leaky sandboxes.
Interesting stuff. Maybe valuable too.
i'm actually pretty distraught over this rn. who's gunna care if all i
did was reframe the halting problem?? i'm stuck on quite literally a
liar's paradox, with emphasis on a clear lie taking place
specifically: the simulated TM simulating an RTM is lying about the true runtime context, bamboozling reflection's ability to prevent paradox construction
und = () -> {
simTM {
if ( simRTM{halts(und)} )
loop_forever()
else
return
}
}
i don't actually know if this is valid tho. within RTMs, when a simRTM
simulates a REFLECT operation, it also must call REFLECT to get the
runtime context from whatever is running it. since TMs don't support
this, the simRTM run within simTM cannot do this, and therefore it's not technically a per-specification RTM simulation. it's actually a hackjob lying about the true runtime context
but i'm still not sure what's supposed to happen. maybe there's a way to
reason about this, maybe i just blew that damned incompleteness hole in
my reflective turing machine theory cause of fucking liars
also, who tf would publish any of this? you can't get "maybe
interesting" ideas into a journal, that's not good enough for the 100% always-right rat race used to justify the meritocratic oppression
mainstream economic ideology runs off of
syntax note: curly braces are used to specify an unnamed lambda function
as a function parameter (kotlin inspired)
simRTM{halts(und)} is equivalent to simRTM(() -> halts(und))
On 19/11/2025 18:26, dart200 wrote:
On 11/18/25 3:47 PM, Mike Terry wrote:
On 18/11/2025 03:10, dart200 wrote:
On 11/17/25 7:07 PM, Kaz Kylheku wrote:
On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
On 11/17/25 4:31 PM, olcott wrote:
On 11/17/2025 6:06 PM, dart200 wrote:
On 11/17/25 3:35 PM, olcott wrote:
The halting problem is requiring deciders to
compute information that is not contained in
their input.
ur agreeing with turing and the halting problem:
one cannot compute whether a machine halts or not from the string
describing the machine
That the halting problem limits computation
is like this very extreme example:
Predict who the next president of the United States
will be entirely on the basis of √2 (square root of 2).
That cannot be derived from the input.
bruh, ur agreeing with the halting problem:
one cannot take the string describing the machine, and use it to
compute
whether the machine described halts
But that isn't true; you certainly can do that. Just not using one
unified algorithm that works for absolutely all such strings.
When it /does/ work, it's certainly not based on any input other than
the string.
yes i meant generally
you also can't compute generally whether you can or cannot compute
whether a machine description halts or not
What does that mean though?
It sounds like you're asking for a /single/ TM that given /any/
machine description D, must compute "whether or not D's halting is
computable". [And saying no such single TM exists?]
yes, it takes a /single/ machine as input and outputs whether /any/ other
machine could compute the input machine's halting semantics.
Have you read Kaz's response to my post? That explains why for any
given machine, there is always some other machine that computes the
halting status of that machine. Basically there are only two possible behaviours: halts or neverhalts.
We just need two machines H1 and H0
that straight away return halts/neverhalts respectively. For any
machine M, either H1 or H0 correctly computes M's halting status, so assuming normal terminology use, any single M is decidable. (And by extension, halting for any finite set of machines is decidable.)
Sometimes people attempt to come up with reasons why H1 and H0 don't count. That was certainly PO's response, and his explanation was that
H1 and H0 are disqualified as halt deciders because they "aren't even trying". He has never explained what it means for a TM to "not really
try" to do something; of course, TMs are just what they are, without "trying" to do anything. We're not talking about an olympic sport where there are points awarded for effort/artistic interpretation etc., it's
all just "whether they work".
[Also, people like PO often confuse what the halting problem says,
believing that it is implying that there is some machine M which "cannot
be decided". That's a misunderstanding...]
Anyhow, all of that is completely missing the point of the halting
problem - that is to find /one/ machine H that can decide /any/ input M_desc. Finding a machine that can decide one specific input is trivial.
Mike.
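The H1/H0 argument above can be sketched concretely. A minimal illustration (the names `h1`, `h0`, and `correct_decider_exists_for` are mine, not from the thread); a "decider" here is just a function from a machine description to a verdict:

```python
# Minimal model: a "decider" is a function from a machine description to
# a verdict (True = "halts", False = "never halts").

def h1(machine_desc):
    # Trivial decider that always answers "halts".
    return True

def h0(machine_desc):
    # Trivial decider that always answers "never halts".
    return False

def correct_decider_exists_for(machine_desc, really_halts):
    # For any single machine there are only two possible behaviours, so
    # one of the two trivial deciders is guaranteed to be correct on it.
    return h1(machine_desc) == really_halts or h0(machine_desc) == really_halts

# Whichever behaviour a machine actually has, some decider gets it right:
assert correct_decider_exists_for("some machine", really_halts=True)
assert correct_decider_exists_for("some machine", really_halts=False)
```

This is exactly why "halting for any single machine is decidable" is trivial: the hard part, as the post says, is one machine H covering all inputs at once.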
On 2025-11-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
On 11/19/25 10:48 AM, Kaz Kylheku wrote:
On 2025-11-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
[...]
So in the above construction, simTM creates a contour around a new
context, which is empty?
essentially yes. simTM does not support REFLECT, so simulations within
the simulation have no method of accessing the runtime context, creating
the illusion (or lie) of a null context
In a computational system with context, functions do not have a halting status that depends only on their arguments, but on their arguments plus context.
Therefore, the question "does this function halt when applied to these arguments" isn't right in this domain; it needs to be "does this function,
in a context with such and such content, and these arguments, halt".
Then, to have a diagonal case which opposes the decider, that diagonal
case has to be sure to be using that same context, otherwise it
is not diagonal; i.e.
in_context C { // <-- but this construct is banned!
// D, in context C "behaves opposite" to the decision
// produced by H regarding D in context C:
D() {
if (H(D, C))
loop();
}
}
Or:
D() {
let C = getParentContext(); // likewise banned?
if (H(D, C))
loop();
}
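For comparison, the classic single-decider diagonal case (without any context machinery) can be made runnable. A hedged sketch, not from the thread: nontermination is modeled by an exception so the example actually terminates, and `make_diagonal` / `is_wrong_on_diagonal` are illustrative names:

```python
def make_diagonal(h):
    # Build the diagonal program D for a candidate decider h: D asks h
    # about itself, then does the opposite of h's verdict.
    def d():
        if h(d):
            # h claims "D halts", so D loops forever. (Modeled by raising
            # instead of actually spinning, so the sketch stays runnable;
            # a real D would just loop.)
            while True:
                raise RuntimeError("simulated infinite loop")
        else:
            # h claims "D never halts", so D halts immediately.
            return
    return d

def is_wrong_on_diagonal(h):
    # True iff h misjudges its own diagonal program.
    d = make_diagonal(h)
    if h(d):
        try:
            d()
        except RuntimeError:
            return True   # h said "halts", but D did not halt
        return False
    d()                   # h said "never halts", but this call returns
    return True

# Both trivial deciders are refuted by their own diagonal case:
assert is_wrong_on_diagonal(lambda p: True)
assert is_wrong_on_diagonal(lambda p: False)
```

Note the diagonal program only works because it can reach the very decider being tested; that is the access Kaz's banned `in_context` / `getParentContext()` constructs would provide in the contexted setting.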
On 19/11/2025 18:26, dart200 wrote:
On 11/18/25 3:47 PM, Mike Terry wrote:
On 18/11/2025 03:10, dart200 wrote:
On 11/17/25 7:07 PM, Kaz Kylheku wrote:
On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
On 11/17/25 4:31 PM, olcott wrote:
On 11/17/2025 6:06 PM, dart200 wrote:
On 11/17/25 3:35 PM, olcott wrote:
The halting problem is requiring deciders to
compute information that is not contained in
their input.
ur agreeing with turing and the halting problem:
one cannot compute whether a machine halts or not from the string >>>>>>>> describing the machine
That the halting problem limits computation
is like this very extreme example:
Predict who the next president of the United States
will be entirely on the basis of √2 (square root of 2).
That cannot be derived from the input.
bruh, ur agreeing with the halting problem:
one cannot take the string describing the machine, and use it to
compute
whether the machine described halts
But that isn't true; you certainly can do that. Just not using one
unified algorithm that works for absolutely all such strings.
When it /does/ work, it's certainly not based on any input other than >>>>> the string.
yes i meant generally
you also can't compute generally whether you can or cannot compute
whether a an machine description halts or not
What does that mean though?
It sounds like you're asking for a /single/ TM that given /any/
machine description D, must compute "whether or not D's halting is
computable". [And saying no such single TM exists?]
yes, it takes /single/ machine input a outputs whether /any/ other
machine could compute the input machine's halting semantics.
Have you read Kaz's response to my post? That explains why for any
given machine, there is always some other machine that computes the
halting status of that machine. Basically there are only two possible behaviours: halts or neverhalts. We just need two machines H1 and H0
that straight away return halts/neverhalts respectively. For any
machine M, either H1 or H0 correctly computes M's halting status, so assuming normal terminology use, any single M is decideable. (And by extension, halting for any finite set of machines is decidable.)
Sometimes people attempt to come up with reasons why H1 and H0 don't count. That was certainly PO's response, and his explanation was that
H1 and H0 are disqualified as halt deciders because they "aren't even trying". He has never explained what it means for a TM to "not really
try" to do something; of course, TMs are just what they are, without "trying" to do anything. We're not talking about an olympic sport where there are points awarded for effort/artistic interpretation etc., it's
all just "whether they work".
[Also, people like PO often confuse what the halting problem says,
believing that it is implying that there is some machine M which "cannot
be decided". That's a misunderstanding...]
Anyhow, all of that is completely missing the point of the halting
problem - that is to find /one/ machine H that can decide /any/ input M_desc. Finding a machine that can decide one specific input is trivial.
The halting problem requires HHH to report
on behavior other than the behavior encoded
in HHH/DD.
On 11/18/25 3:10 PM, Chris M. Thomasson wrote:
On 11/17/2025 9:18 PM, dart200 wrote:
On 11/17/25 7:36 PM, Chris M. Thomasson wrote:
On 11/17/2025 7:10 PM, dart200 wrote:
On 11/17/25 7:07 PM, Kaz Kylheku wrote:
On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
[...]
yes i meant generally
you also can't compute generally whether you can or cannot compute
whether a machine description halts or not
Didn't you suggest you have a solution to the halting problem using
reflection?
yes, i was speaking to the consensus understanding in what you've quoted
Okay. Well, 100% per-path coverage is one way we can say that DD halts
_and_ does not halt. I made the fuzzer for Olcott's DD for fun. It's NOT
a solution to the halting problem. Actually, he raised some red flags
in my mind when he tried to tell me that BASIC cannot handle recursion...
depends on which BASIC tho, eh?
Programming BASIC brings back memories of when I was a little kid.
Actually, you should be able to mock up your reflection system. Have
you made any headway?
not at all
i'm working on the logical consistency of the theory, which is going to
be far simpler than actual implementation
i'm currently a bit stumped on dealing with a possible halting paradox constructed within RTMs, using an RTM simulating a TM simulating an RTM. this chain similarly mechanically cuts off the required information to
avoid a paradox, kinda like a TM alone. not fully confident it's a
problem or not
i may write a post on it
On 19/11/2025 18:37, olcott wrote:
The halting problem requires HHH to report
on behavior other than the behavior encoded
in HHH/DD.
Is the Halts property the same regardless?
--
Tristan Wibberley
The message body is Copyright (C) 2025 Tristan Wibberley except
citations and quotations noted. All Rights Reserved except that you may,
of course, cite it academically giving credit to me, distribute it
verbatim as part of a usenet system or its archives, and use it to
promote my greatness and general superiority without misrepresentation
of my opinions other than my opinion of my greatness and general
superiority which you _may_ misrepresent. You definitely MAY NOT train
any production AI system with it but you may train experimental AI that
will only be used for evaluation of the AI methods it implements.
The sound basis of this reasoning is the
semantics of the C programming language.
On 11/19/2025 12:42 PM, Mike Terry wrote:
On 19/11/2025 18:26, dart200 wrote:
On 11/18/25 3:47 PM, Mike Terry wrote:
On 18/11/2025 03:10, dart200 wrote:
[...]
Have you read Kaz's response to my post? That explains why for any
given machine, there is always some other machine that computes the
halting status of that machine. Basically there are only two possible
behaviours: halts or neverhalts. We just need two machines H1 and H0
that straight away return halts/neverhalts respectively. For any
machine M, either H1 or H0 correctly computes M's halting status, so
assuming normal terminology use, any single M is decidable. (And by
extension, halting for any finite set of machines is decidable.)
Sometimes people attempt to come up with reasons why H1 and H0 don't
count. That was certainly PO's response, and his explanation was that
H1 and H0 are disqualified as halt deciders because they "aren't even
trying". He has never explained what it means for a TM to "not really
try" to do something; of course, TMs are just what they are, without
"trying" to do anything. We're not talking about an olympic sport
where there are points awarded for effort/artistic interpretation
etc., it's all just "whether they work".
They don't count as *deciders* plain and simple because a *decider* must decide correctly on all
possible inputs. Even a partial decider must, for all possible inputs, return "yes", "no", or "don't
know", and must be correct when returning one of the first two. So any machine that returns the
same value for all inputs is a decider in a domain whose onto range contains one and only one
value, e.g., a halt decider that decides halting status for all possible non-halting (TM, data)
inputs - not very interesting. Neither is the example of a decider that returns "don't know" for all
inputs. (Just to state the obvious: when something is said to return a value, halting of that
something is entailed.)
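The partial-decider notion described above admits a simple concrete instance: run the machine for a bounded number of steps and answer "yes" only on observed halting, "don't know" otherwise; it is never wrong when it commits. A sketch under assumed machinery (`partial_halts` and the toy step-function machines are hypothetical, not from the post):

```python
def partial_halts(step_fn, state, max_steps=1000):
    # A correct *partial* halt decider: step the machine at most
    # max_steps times. step_fn(state) -> (done, next_state).
    # Answers "yes" only when halting was actually observed; it never
    # commits to a wrong "yes"/"no", it merely stays incomplete.
    for _ in range(max_steps):
        done, state = step_fn(state)
        if done:
            return "yes"
    return "don't know"

# A toy machine that counts down and halts at zero:
countdown = lambda n: (n <= 0, n - 1)
# A toy machine that never halts:
forever = lambda s: (False, s)

assert partial_halts(countdown, 5) == "yes"
assert partial_halts(forever, 0) == "don't know"
```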
[Also, people like PO often confuse what the halting problem says,
believing that it is implying that there is some machine M which
"cannot be decided". That's a misunderstanding...]
Anyhow, all of that is completely missing the point of the halting
problem - that is to find /one/ machine H that can decide /any/ input
M_desc. Finding a machine that can decide one specific input is
trivial.
--
Jeff Barnett
On 19/11/2025 18:26, dart200 wrote:
On 11/18/25 3:47 PM, Mike Terry wrote:
On 18/11/2025 03:10, dart200 wrote:
[...]
Have you read Kaz's response to my post? That explains why for any
given machine, there is always some other machine that computes the
halting status of that machine. Basically there are only two possible behaviours: halts or neverhalts. We just need two machines H1 and H0
that straight away return halts/neverhalts respectively. For any
machine M, either H1 or H0 correctly computes M's halting status, so assuming normal terminology use, any single M is decidable. (And by extension, halting for any finite set of machines is decidable.)
Sometimes people attempt to come up with reasons why H1 and H0 don't count. That was certainly PO's response, and his explanation was that
H1 and H0 are disqualified as halt deciders because they "aren't even trying". He has never explained what it means for a TM to "not really
try" to do something; of course, TMs are just what they are, without "trying" to do anything. We're not talking about an olympic sport where there are points awarded for effort/artistic interpretation etc., it's
all just "whether they work".
[Also, people like PO often confuse what the halting problem says,
believing that it is implying that there is some machine M which "cannot
be decided". That's a misunderstanding...]
Anyhow, all of that is completely missing the point of the halting
problem - that is to find /one/ machine H that can decide /any/ input M_desc. Finding a machine that can decide one specific input is trivial.
Mike.
a) you can construct halting paradoxes that contradict multiple and possibly even infinite deciders. certainly any finite set, after which
On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
a) you can construct halting paradoxes that contradict multiple and
possibly even infinite deciders. certainly any finite set, after which
This is not possible in general. The diagonal test case must make
exactly one decision and then behave in a contradictory way: halt or
not. If it interrogates as few as two deciders, it becomes intractable
if their decisions differ: to contradict one is to agree with the other.
If the deciders are H0(P) { return 0; } and H1(P) { return 1; } you can
see that between the two of them, they cover the entire space: there
cannot be a single case which both of these don't get right. One
correctly decides all nonterminating cases; the other correctly decides
all terminating cases, and every case is one or the other.
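The point that a diagonal case cannot contradict both H0 and H1 at once can be checked mechanically. A small sketch (function names are mine): whatever single behavior the diagonal program commits to, it matches one of the two verdicts, so it agrees with one of the deciders rather than contradicting it.

```python
def h0(p):
    return 0    # answers "never halts" for every input

def h1(p):
    return 1    # answers "halts" for every input

def diagonal_is_stuck(p):
    # The diagonal case must commit to exactly one behavior: loop (0) or
    # halt (1). Between them, h0 and h1 cover both possibilities, so
    # every choice of behavior agrees with one of the two verdicts.
    return all(behavior in (h0(p), h1(p)) for behavior in (0, 1))

assert diagonal_is_stuck("any program description")
```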
On 11/19/25 6:29 PM, Kaz Kylheku wrote:
On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
a) you can construct halting paradoxes that contradict multiple and
possibly even infinite deciders. certainly any finite set, after which
This is not possible in general. The diagonal test case must make
exactly one decision and then behave in a contradictory way: halt or
not. If it interrogates as few as two deciders, it becomes intractable
if their decisions differ: to contradict one is to agree with the other.
If the deciders are H0(P) { return 0; } and H1(P) { return 1; } you can
see that between the two of them, they cover the entire space: there
cannot be a single case which both of these don't get right. One
correctly decides all nonterminating cases; the other correctly decides
all terminating cases, and every case is one or the other.
c'mon man, those deciders do not provide an /effectively computable/ interface and you know it
try again, it's quite simple to produce a paradox that confounds two legitimate deciders that genuinely never give a wrong answer
On 2025-11-19, olcott <polcott333@gmail.com> wrote:
The sound basis of this reasoning is the
semantics of the C programming language.
... and, note, that you dishonestly erased most of the context
On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
On 11/19/25 6:29 PM, Kaz Kylheku wrote:
On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
a) you can construct halting paradoxes that contradict multiple and
possibly even infinite deciders. certainly any finite set, after which
This is not possible in general. The diagonal test case must make
exactly one decision and then behave in a contradictory way: halt or
not. If it interrogates as few as two deciders, it becomes intractable
if their decisions differ: to contradict one is to agree with the other.
If the deciders are H0(P) { return 0; } and H1(P) { return 1; } you can
see that between the two of them, they cover the entire space: there
cannot be a single case which both of these don't get right. One
correctly decides all nonterminating cases; the other correctly decides
all terminating cases, and every case is one or the other.
c'mon man, those deciders do not provide an /effectively computable/
interface and you know it
try again, it's quite simple to produce a paradox that confounds two
legitimate deciders that genuinely never give a wrong answer
But we have a proof that deciders which never give a wrong answer do not exist.
If halting algorithms existed
- they would all agree with each other and thus look the same from the
outside and so wouldn't constitute a multi-decider aggregate.
- it would not be /possible/ to contradict them: they never give
a wrong answer!
So if we want to develop diagonal cases which contradict deciders,
we have to accept that we are targeting imperfect, partial deciders
(by doing so, showing them to be that way).
On 11/19/2025 3:41 PM, Kaz Kylheku wrote:
On 2025-11-19, olcott <polcott333@gmail.com> wrote:
The sound basis of this reasoning is the
semantics of the C programming language.
... and, note, that you dishonestly erased most of the context
On 2025-11-20, olcott <polcott333@gmail.com> wrote:
On 11/19/2025 3:41 PM, Kaz Kylheku wrote:
On 2025-11-19, olcott <polcott333@gmail.com> wrote:
The sound basis of this reasoning is the
semantics of the C programming language.
... and, note, that you dishonestly erased most of the context
That's just the same pseudo-code snippet you've posted
hundreds of times.
On 2025-11-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
On 11/19/25 10:48 AM, Kaz Kylheku wrote:
On 2025-11-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
On 11/19/25 9:17 AM, Tristan Wibberley wrote:
On 19/11/2025 01:40, dart200 wrote:
i'm currently a bit stumped on dealing with a possible halting paradox
constructed within RTMs, using an RTM simulating a TM simulating an RTM.
this chain similarly mechanically cuts off the required information to
avoid a paradox, kinda like a TM alone. not fully confident it's a
problem or not
It sounds equivalent to problems of security wrt. leaky sandboxes.
Interesting stuff. Maybe valuable too.
i'm actually pretty distraught over this rn. who's gunna care if all i
did was reframe the halting problem?? i'm stuck on quite literally a
liar's paradox, with emphasis on a clear lie taking place
specifically: the simulated TM simulating an RTM is lying about the true
runtime context, bamboozling reflection's ability to prevent paradox
construction
Don't you have mechanisms to prevent the procedures from being
able to manipulate the environment?
und = () -> {
    simTM {
        if ( simRTM{halts(und)} )
            loop_forever()
        else
            return
    }
}
So in the above construction, simTM creates a contour around a new
context, which is empty?
essentially yes. simTM does not support REFLECT, so simulations within
the simulation have no method of accessing the runtime context, creating
the illusion (or lie) of a null context
In a computational system with context, functions do not have a halting status that depends only on their arguments, but on their arguments plus context.
Therefore, the question "does this function halt when applied to these arguments" isn't right in this domain; it needs to be "does this function,
in a context with such and such content, and these arguments, halt".
Then, to have a diagonal case which opposes the decider, that diagonal
case has to be sure to be using that same context, otherwise it
is not diagonal; i.e.
in_context C { // <-- but this construct is banned!
    // D, in context C "behaves opposite" to the decision
    // produced by H regarding D in context C:
    D() {
        if (H(D, C))
            loop();
    }
}
Or:
D() {
    let C = getParentContext(); // likewise banned?
    if (H(D, C))
        loop();
}
On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
a) you can construct halting paradoxes that contradict multiple and
possibly even infinite deciders. certainly any finite set, after which
This is not possible in general. The diagonal test case must make
exactly one decision and then behave in a contradictory way: halt or
not. If it interrogates as few as two deciders, it becomes intractable
if their decisions differ: to contradict one is to agree with the other.
On 11/19/25 6:58 PM, Kaz Kylheku wrote:
On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
On 11/19/25 6:29 PM, Kaz Kylheku wrote:
On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
a) you can construct halting paradoxes that contradict multiple and
possibly even infinite deciders. certainly any finite set, after which
This is not possible in general. The diagonal test case must make
exactly one decision and then behave in a contradictory way: halt or
not. If it interrogates as few as two deciders, it becomes intractable
if their decisions differ: to contradict one is to agree with the other.
If the deciders are H0(P) { return 0; } and H1(P) { return 1; } you can
see that between the two of them, they cover the entire space: there
cannot be a single case which both of these don't get right. One
correctly decides all nonterminating cases; the other correctly decides
all terminating cases, and every case is one or the other.
c'mon man, those deciders do not provide an /effectively computable/
interface and you know it
try again, it's quite simple to produce a paradox that confounds two
legitimate deciders that genuinely never give a wrong answer
But we have a proof that deciders which never give a wrong answer do not
exist.
If halting algorithms existed
- they would all agree with each other and thus look the same from the
outside and so wouldn't constitute a multi-decider aggregate.
- it would not be /possible/ to contradict them: they never give
a wrong answer!
So if we want to develop diagonal cases which contradict deciders,
we have to accept that we are targeting imperfect, partial deciders
(by doing so, showing them to be that way).
for the sake of proof/example assume they are honest until you produce
the paradox
On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
On 11/19/25 6:58 PM, Kaz Kylheku wrote:
On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
On 11/19/25 6:29 PM, Kaz Kylheku wrote:
On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
a) you can construct halting paradoxes that contradict multiple and
possibly even infinite deciders. certainly any finite set, after which
This is not possible in general. The diagonal test case must make
exactly one decision and then behave in a contradictory way: halt or
not. If it interrogates as few as two deciders, it becomes intractable
if their decisions differ: to contradict one is to agree with the other.
If the deciders are H0(P) { return 0; } and H1(P) { return 1; } you can
see that between the two of them, they cover the entire space: there
cannot be a single case which both of these don't get right. One
correctly decides all nonterminating cases; the other correctly decides
all terminating cases, and every case is one or the other.
c'mon man, those deciders do not provide an /effectively computable/
interface and you know it
try again, it's quite simple to produce a paradox that confounds two
legitimate deciders that genuinely never give a wrong answer
But we have a proof that deciders which never give a wrong answer do not
exist.
If halting algorithms existed
- they would all agree with each other and thus look the same from the
outside and so wouldn't constitute a multi-decider aggregate.
- it would not be /possible/ to contradict them: they never give
a wrong answer!
So if we want to develop diagonal cases which contradict deciders,
we have to accept that we are targeting imperfect, partial deciders
(by doing so, showing them to be that way).
for the sake of proof/example assume they are honest until you produce
the paradox
By "until", are you referring to some temporal concept? There is a time variable in the system such that a decider can be introduced, and then
for a time, there exists no diagonal (or other) case until someone
writes it?
On 11/20/25 11:55 AM, Kaz Kylheku wrote:
On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
On 11/19/25 6:58 PM, Kaz Kylheku wrote:
On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
On 11/19/25 6:29 PM, Kaz Kylheku wrote:
On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
a) you can construct halting paradoxes that contradict multiple and
possibly even infinite deciders. certainly any finite set, after which
This is not possible in general. The diagonal test case must make
exactly one decision and then behave in a contradictory way: halt or
not. If it interrogates as few as two deciders, it becomes intractable
if their decisions differ: to contradict one is to agree with the other.
If the deciders are H0(P) { return 0; } and H1(P) { return 1; } you can
see that between the two of them, they cover the entire space: there
cannot be a single case which both of these don't get right. One
correctly decides all nonterminating cases; the other correctly decides
all terminating cases, and every case is one or the other.
c'mon man, those deciders do not provide an /effectively computable/
interface and you know it
try again, it's quite simple to produce a paradox that confounds two
legitimate deciders that genuinely never give a wrong answer
But we have a proof that deciders which never give a wrong answer do not
exist.
If halting algorithms existed
- they would all agree with each other and thus look the same from the
outside and so wouldn't constitute a multi-decider aggregate.
- it would not be /possible/ to contradict them: they never give
a wrong answer!
So if we want to develop diagonal cases which contradict deciders,
we have to accept that we are targeting imperfect, partial deciders
(by doing so, showing them to be that way).
for the sake of proof/example assume they are honest until you produce
the paradox
By "until", are you referring to some temporal concept? There is a time
variable in the system such that a decider can be introduced, and then
for a time, there exists no diagonal (or other) case until someone
writes it?
assume the premise exists and show a contradiction for both deciders
within one machine
On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
On 11/20/25 11:55 AM, Kaz Kylheku wrote:
On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
On 11/19/25 6:58 PM, Kaz Kylheku wrote:
On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
On 11/19/25 6:29 PM, Kaz Kylheku wrote:
On 2025-11-20, dart200 <user7160@newsgrouper.org.invalid> wrote:
a) you can construct halting paradoxes that contradict multiple and
possibly even infinite deciders. certainly any finite set, after which
This is not possible in general. The diagonal test case must make
exactly one decision and then behave in a contradictory way: halt or
not. If it interrogates as few as two deciders, it becomes intractable
if their decisions differ: to contradict one is to agree with the other.
If the deciders are H0(P) { return 0; } and H1(P) { return 1; } you can
see that between the two of them, they cover the entire space: there
cannot be a single case which both of these don't get right. One
correctly decides all nonterminating cases; the other correctly decides
all terminating cases, and every case is one or the other.
c'mon man, those deciders do not provide an /effectively computable/
interface and you know it
try again, it's quite simple to produce a paradox that confounds two
legitimate deciders that genuinely never give a wrong answer
But we have a proof that deciders which never give a wrong answer do not
exist.
If halting algorithms existed
- they would all agree with each other and thus look the same from the
outside and so wouldn't constitute a multi-decider aggregate.
- it would not be /possible/ to contradict them: they never give
a wrong answer!
So if we want to develop diagonal cases which contradict deciders,
we have to accept that we are targeting imperfect, partial deciders
(by doing so, showing them to be that way).
for the sake of proof/example assume they are honest until you produce
the paradox
By "until", are you referring to some temporal concept? There is a time
variable in the system such that a decider can be introduced, and then
for a time, there exists no diagonal (or other) case until someone
writes it?
assume the premise exists and show a contradiction for both deciders
within one machine
If one decider says true, and the other false, for the contradicting
case, then no can do. Infinitely looping, or terminating, contradicts
only one of the deciders, agreeing with the other. There isn't a third
choice of behavior.
On 11/19/2025 10:42 PM, Kaz Kylheku wrote:
On 2025-11-20, olcott <polcott333@gmail.com> wrote:
On 11/19/2025 3:41 PM, Kaz Kylheku wrote:
On 2025-11-19, olcott <polcott333@gmail.com> wrote:
The sound basis of this reasoning is the
semantics of the C programming language.
... and, note, that you dishonestly erased most of the context.
That's just the same pseudo-code snippet you've posted
hundreds of times.
The idea is that I will keep repeating this
until you pay attention
On 11/19/2025 10:42 PM, Kaz Kylheku wrote:
On 2025-11-20, olcott <polcott333@gmail.com> wrote:
On 11/19/2025 3:41 PM, Kaz Kylheku wrote:
On 2025-11-19, olcott <polcott333@gmail.com> wrote:
The sound basis of this reasoning is the
semantics of the C programming language.
... and, note, that you dishonestly erased most of the context.
That's just the same pseudo-code snippet you've posted
hundreds of times.
The idea is that I will keep repeating this
until you pay attention
int DD()
{
    int Halt_Status = HHH(DD);
    if (Halt_Status)
        HERE: goto HERE;
    return Halt_Status;
}

int main()
{
    HHH(DD);
}
HHH simulates DD that calls HHH(DD)
that simulates DD that calls HHH(DD)...
On 2025-11-20, olcott <polcott333@gmail.com> wrote:
On 11/19/2025 10:42 PM, Kaz Kylheku wrote:
On 2025-11-20, olcott <polcott333@gmail.com> wrote:
On 11/19/2025 3:41 PM, Kaz Kylheku wrote:
On 2025-11-19, olcott <polcott333@gmail.com> wrote:
The sound basis of this reasoning is the
semantics of the C programming language.
... and, note, that you dishonestly erased most of the context.
That's just the same pseudo-code snippet you've posted
hundreds of times.
The idea is that I will keep repeating this
until you pay attention
int DD()
{
    int Halt_Status = HHH(DD);
    if (Halt_Status)
        HERE: goto HERE;
    return Halt_Status;
}

int main()
{
    HHH(DD);
}
I've given this an incredible amount of attention.
HHH simulates DD that calls HHH(DD)
that simulates DD that calls HHH(DD)...
If HHH(DD) returns 0, it's this:
HHH simulates DD that calls HHH(DD)
- that simulates DD that calls HHH(DD)...
- that simulates DD that calls HHH(DD)...
- but only partially, returning 0.
- such that DD terminates.
- but only partially, returning 0.
- such that DD terminates.
Adding another level:
HHH simulates DD that calls HHH(DD)
- that simulates DD that calls HHH(DD)...
- that simulates DD that calls HHH(DD)...
- that simulates DD that calls HHH(DD)...
- that ...
- that ...
- that ...
- but only partially, returning 0.
- such that DD terminates.
- but only partially, returning 0.
- such that DD terminates.
- but only partially, returning 0.
- such that DD terminates.
Infinite simulation tower: finite DD's.
Since you don't grok this but I do, obviously the one who has
paid more attention is me.
On 18/11/2025 03:10, dart200 wrote:
On 11/17/25 7:07 PM, Kaz Kylheku wrote:
On 2025-11-18, dart200 <user7160@newsgrouper.org.invalid> wrote:
On 11/17/25 4:31 PM, olcott wrote:
On 11/17/2025 6:06 PM, dart200 wrote:
On 11/17/25 3:35 PM, olcott wrote:
The halting problem is requiring deciders to
compute information that is not contained in
their input.
ur agreeing with turing and the halting problem:
one cannot compute whether a machine halts or not from the string
describing the machine
That the halting problem limits computation
is like this very extreme example:
Predict who the next president of the United States
will be entirely on the basis of √2 (square root of 2).
That cannot be derived from the input.
bruh, ur agreeing with the halting problem:
one cannot take the string describing the machine, and use it to compute
whether the machine described halts
But that isn't true; you certainly can do that. Just not using one
unified algorithm that works for absolutely all such strings.
When it /does/ work, it's certainly not based on any input other than
the string.
yes i meant generally
you also can't compute generally whether you can or cannot compute
whether a machine description halts or not
What does that mean though?
It sounds like you're asking for a /single/ TM that given /any/ machine description D, must compute "whether or not D's halting is computable". [And saying no such single TM exists?]
The problem is in the phrase within quotes. Surely that phrase means "whether or not there exists a TM that computes whether the given D
halts or not"? If not, what does it mean?