On 13/08/2025 07:05, olcott wrote:
On 8/13/2025 12:56 AM, Lawrence D'Oliveiro wrote:
Feel free to explain how it *is* self-evident.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
I can directly see that when HHH(DD) is executed
that this begins simulating DD that calls HHH(DD) that
begins simulating DD that calls HHH(DD) again.
...and when HHH eventually reports?
But of course that's self-evident.
What exactly is “self-evident” supposed to mean? Please explain.
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern: abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement: return 1.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
What value should HHH(DD) correctly return?
What exactly is “self-evident” supposed to mean? Please explain.
On 2025-08-14 02:40, Lawrence D'Oliveiro wrote:
What exactly is “self-evident” supposed to mean? Please explain.
It means that the message providing the answer contains, in itself, all
the evidence anyone needs to easily determine that the answer is true.
In practice, it is often used when the answer is not actually true.
Compare with a "no-brainer", where, in practice, the decision is almost always not the one that would be chosen if the brain were fully engaged.
On 8/14/2025 8:06 AM, James Kuyper wrote:
On 2025-08-14 02:40, Lawrence D'Oliveiro wrote:
What exactly is “self-evident” supposed to mean? Please explain.
It means that the message providing the answer contains, in itself, all
the evidence anyone needs to easily determine that the answer is true.
In practice, it is often used when the answer is not actually true.
Compare with a "no-brainer", where, in practice, the decision is almost
always not the one that would be chosen if the brain were fully engaged.
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern: abort simulation and
return 0.
(b) Simulated input reaches its simulated "return" statement: return 1.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
What value should HHH(DD) correctly return?
<Input to LLM systems>
*The above was self-evident to three LLM systems*
I still find that it is absurd that no one here can
figure out something as simple as recursive simulation.
Three different LLM systems figured out that the execution
trace of DD correctly simulated by HHH does match the
*recursive simulation non-halting behavior pattern*
on their own without prompting that such a pattern even exists.
The simulation shows DD calling HHH(DD) repeatedly
This creates an infinite recursive loop in the simulation
No simulated execution path leads to DD's return statement
being reached
https://claude.ai/share/da9e56ba-f4e9-45ee-9f2c-dc5ffe10f00c
In other words, simulating DD() requires simulating
HHH(DD), which requires simulating DD() again… recursive
simulation.
This creates an infinite regress — HHH cannot finish
simulating DD() because to simulate it, it must simulate
itself again, and again, and again…
https://chatgpt.com/share/68939ee5-e2f8-8011-837d-438fe8e98b9c
When HHH simulates DD, it must model DD's execution,
including the call to HHH(DD) within DD.
This introduces a recursive simulation: HHH simulating
DD involves simulating DD's call to HHH(DD), which
requires HHH to simulate DD again, and so on.
https://grok.com/share/c2hhcmQtMg%3D%3D_810120bb-5ab5-4bf8-af21-eedd0f09e141
What exactly is “self-evident” supposed to mean? Please explain.
On 14.08.2025 at 16:17, olcott wrote:
On 8/14/2025 8:06 AM, James Kuyper wrote:
On 2025-08-14 02:40, Lawrence D'Oliveiro wrote:
What exactly is “self-evident” supposed to mean? Please explain.
It means that the message providing the answer contains, in itself, all
the evidence anyone needs to easily determine that the answer is true.
In practice, it is often used when the answer is not actually true.
Compare with a "no-brainer", where, in practice, the decision is almost
always not the one that would be chosen if the brain were fully engaged.
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern: abort simulation and
return 0.
(b) Simulated input reaches its simulated "return" statement: return 1.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
What value should HHH(DD) correctly return?
<Input to LLM systems>
*The above was self-evident to three LLM systems*
I still find that it is absurd that no one here can
figure out something as simple as recursive simulation.
Three different LLM systems figured out that the execution
trace of DD correctly simulated by HHH does match the
*recursive simulation non-halting behavior pattern*
on their own without prompting that such a pattern even exists.
The simulation shows DD calling HHH(DD) repeatedly
This creates an infinite recursive loop in the simulation
No simulated execution path leads to DD's return statement
being reached
https://claude.ai/share/da9e56ba-f4e9-45ee-9f2c-dc5ffe10f00c
In other words, simulating DD() requires simulating
HHH(DD), which requires simulating DD() again… recursive
simulation.
This creates an infinite regress — HHH cannot finish
simulating DD() because to simulate it, it must simulate
itself again, and again, and again…
https://chatgpt.com/share/68939ee5-e2f8-8011-837d-438fe8e98b9c
When HHH simulates DD, it must model DD's execution,
including the call to HHH(DD) within DD.
This introduces a recursive simulation: HHH simulating
DD involves simulating DD's call to HHH(DD), which
requires HHH to simulate DD again, and so on.
https://grok.com/share/c2hhcmQtMg%3D%3D_810120bb-5ab5-4bf8-af21-eedd0f09e141
This whole thread is sheer nonsense. Your summer too hot??
On 14.08.2025 at 18:54, minforth wrote:
On 14.08.2025 at 16:17, olcott wrote:
On 8/14/2025 8:06 AM, James Kuyper wrote:
On 2025-08-14 02:40, Lawrence D'Oliveiro wrote:
What exactly is “self-evident” supposed to mean? Please explain.
It means that the message providing the answer contains, in itself, all
the evidence anyone needs to easily determine that the answer is true.
In practice, it is often used when the answer is not actually true.
Compare with a "no-brainer", where, in practice, the decision is almost
always not the one that would be chosen if the brain were fully engaged.
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern: abort simulation and
return 0.
(b) Simulated input reaches its simulated "return" statement: return 1.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
What value should HHH(DD) correctly return?
<Input to LLM systems>
*The above was self-evident to three LLM systems*
I still find that it is absurd that no one here can
figure out something as simple as recursive simulation.
Three different LLM systems figured out that the execution
trace of DD correctly simulated by HHH does match the
*recursive simulation non-halting behavior pattern*
on their own without prompting that such a pattern even exists.
The simulation shows DD calling HHH(DD) repeatedly
This creates an infinite recursive loop in the simulation
No simulated execution path leads to DD's return statement
being reached
https://claude.ai/share/da9e56ba-f4e9-45ee-9f2c-dc5ffe10f00c
In other words, simulating DD() requires simulating
HHH(DD), which requires simulating DD() again… recursive
simulation.
This creates an infinite regress — HHH cannot finish
simulating DD() because to simulate it, it must simulate
itself again, and again, and again…
https://chatgpt.com/share/68939ee5-e2f8-8011-837d-438fe8e98b9c
When HHH simulates DD, it must model DD's execution,
including the call to HHH(DD) within DD.
This introduces a recursive simulation: HHH simulating
DD involves simulating DD's call to HHH(DD), which
requires HHH to simulate DD again, and so on.
https://grok.com/share/c2hhcmQtMg%3D%3D_810120bb-5ab5-4bf8-af21-eedd0f09e141
This whole thread is sheer nonsense. Your summer too hot??
No, Pete has a psychosis or is manic.
The actual case is that you are too f-cking
stupid to find any mistake in my work.
On 23.08.2025 at 16:16, olcott wrote:
The actual case is that you are too f-cking
stupid to find any mistake in my work.
I wonder why one has to pursue such a question so paranoidly
for years. I mean, it's nothing that has practical relevance.
And I can't understand why others here are arguing so vehemently
against it. It's obvious that Pete is crazy. It's best not to
respond at all, then maybe it will calm down.
On 23/08/2025 15:26, Bonita Montero wrote:
On 23.08.2025 at 16:16, olcott wrote:
The actual case is that you are too f-cking
stupid to find any mistake in my work.
I wonder why one has to pursue such a question so paranoidly
for years. I mean, it's nothing that has practical relevance.
I beg your pardon?
If he's right, the conventional proof of the undecidability of the
Halting Problem is flawed. You don't think that matters?
*Of course* it matters... IF he's right.
He /isn't/ right, but /he/ doesn't know that.
And I can't understand why others here are arguing so vehemently
against it. It's obvious that Pete is crazy. It's best not to
respond at all, then maybe it will calm down.
You start.
On 14.08.2025 at 18:54, minforth wrote:
On 14.08.2025 at 16:17, olcott wrote:
On 8/14/2025 8:06 AM, James Kuyper wrote:
On 2025-08-14 02:40, Lawrence D'Oliveiro wrote:
What exactly is “self-evident” supposed to mean? Please explain.
It means that the message providing the answer contains, in itself, all
the evidence anyone needs to easily determine that the answer is true.
In practice, it is often used when the answer is not actually true.
Compare with a "no-brainer", where, in practice, the decision is almost
always not the one that would be chosen if the brain were fully engaged.
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern: abort simulation and
return 0.
(b) Simulated input reaches its simulated "return" statement: return 1.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
What value should HHH(DD) correctly return?
<Input to LLM systems>
*The above was self-evident to three LLM systems*
I still find that it is absurd that no one here can
figure out something as simple as recursive simulation.
Three different LLM systems figured out that the execution
trace of DD correctly simulated by HHH does match the
*recursive simulation non-halting behavior pattern*
on their own without prompting that such a pattern even exists.
The simulation shows DD calling HHH(DD) repeatedly
This creates an infinite recursive loop in the simulation
No simulated execution path leads to DD's return statement
being reached
https://claude.ai/share/da9e56ba-f4e9-45ee-9f2c-dc5ffe10f00c
In other words, simulating DD() requires simulating
HHH(DD), which requires simulating DD() again… recursive
simulation.
This creates an infinite regress — HHH cannot finish
simulating DD() because to simulate it, it must simulate
itself again, and again, and again…
https://chatgpt.com/share/68939ee5-e2f8-8011-837d-438fe8e98b9c
When HHH simulates DD, it must model DD's execution,
including the call to HHH(DD) within DD.
This introduces a recursive simulation: HHH simulating
DD involves simulating DD's call to HHH(DD), which
requires HHH to simulate DD again, and so on.
https://grok.com/share/c2hhcmQtMg%3D%3D_810120bb-5ab5-4bf8-af21-eedd0f09e141
This whole thread is sheer nonsense. Your summer too hot??
No, Pete has a psychosis or is manic.
On 23.08.2025 at 16:16, olcott wrote:
The actual case is that you are too f-cking
stupid to find any mistake in my work.
I wonder why one has to pursue such a question so paranoidly
for years. I mean, it's nothing that has practical relevance.
And I can't understand why others here are arguing so vehemently
against it. It's obvious that Pete is crazy. It's best not to
respond at all, then maybe it will calm down.
On 23.08.2025 at 16:41, Richard Heathfield wrote:
On 23/08/2025 15:26, Bonita Montero wrote:
On 23.08.2025 at 16:16, olcott wrote:
The actual case is that you are too f-cking
stupid to find any mistake in my work.
I wonder why one has to pursue such a question so paranoidly
for years. I mean, it's nothing that has practical relevance.
I beg your pardon?
If he's right, the conventional proof of the undecidability of the
Halting Problem is flawed. You don't think that matters?
That has no practical relevance.
*Of course* it matters... IF he's right.
He /isn't/ right, but /he/ doesn't know that.
And I can't understand why others here are arguing so vehemently
against it. It's obvious that Pete is crazy. It's best not to
respond at all, then maybe it will calm down.
You start.
I don't argue against him on his own terms.
On 8/23/2025 9:26 AM, Bonita Montero wrote:
On 23.08.2025 at 16:16, olcott wrote:
The actual case is that you are too f-cking stupid to find any mistake
in my work.
I wonder why one has to pursue such a question so paranoidly for years.
I mean, it's nothing that has practical relevance. And I can't
understand why others here are arguing so vehemently against it. It's
obvious that Pete is crazy. It's best not to respond at all, then maybe
it will calm down.
The analog of the technology applied to the HP proofs equally applies
to the Tarski Undecidability theorem, thus enabling a Boolean
True(Language L, Expression E) to return TRUE for all expressions E of
language L that are proven true on the basis of their meaning.
True("English", "Donald Trump lied about election fraud")==TRUE
True("English", "Severe climate change is caused by humans")==TRUE
On 23/08/2025 15:26, Bonita Montero wrote:
On 23.08.2025 at 16:16, olcott wrote:
The actual case is that you are too f-cking
stupid to find any mistake in my work.
I wonder why one has to pursue such a question so paranoidly
for years. I mean, it's nothing that has practical relevance.
I beg your pardon?
If he's right, the conventional proof of the undecidability of the
Halting Problem is flawed. You don't think that matters?
*Of course* it matters... IF he's right.
He /isn't/ right, but /he/ doesn't know that.
And I can't understand why others here are arguing so vehemently
against it. It's obvious that Pete is crazy. It's best not to
respond at all, then maybe it will calm down.
You start.
On Sat, 23 Aug 2025 10:25:29 -0500, olcott wrote:
On 8/23/2025 9:26 AM, Bonita Montero wrote:
On 23.08.2025 at 16:16, olcott wrote:
The actual case is that you are too f-cking stupid to find any mistake
in my work.
I wonder why one has to pursue such a question so paranoidly for years.
I mean, it's nothing that has practical relevance. And I can't
understand why others here are arguing so vehemently against it. It's
obvious that Pete is crazy. It's best not to respond at all, then maybe
it will calm down.
The analog of the technology applied to the HP proofs equally applies to
the Tarski Undecidability theorem thus enabling a Boolean True(Language
L, Expression E)
to return TRUE for all expressions E of language L that are proven true
on the basis of their meaning.
True("English", "Donald Trump lied about election fraud")==TRUE
True("English", "Severe climate change is caused by humans")==TRUE
True("English", "Halting Problem proofs have not been refuted by Olcott")==TRUE
/Flibble
On 8/23/2025 10:33 AM, Mr Flibble wrote:
On Sat, 23 Aug 2025 10:25:29 -0500, olcott wrote:
On 8/23/2025 9:26 AM, Bonita Montero wrote:
On 23.08.2025 at 16:16, olcott wrote:
The actual case is that you are too f-cking stupid to find any
mistake in my work.
I wonder why one has to pursue such a question so paranoidly for
years.
I mean, it's nothing that has practical relevance. And I can't
understand why others here are arguing so vehemently against it. It's
obvious that Pete is crazy. It's best not to respond at all, then
maybe it will calm down.
The analog of the technology applied to the HP proofs equally applies
to the Tarski Undecidability theorem thus enabling a Boolean
True(Language L, Expression E)
to return TRUE for all expressions E of language L that are proven
true on the basis of their meaning.
True("English", "Donald Trump lied about election fraud")==TRUE
True("English", "Severe climate change is caused by humans")==TRUE
True("English", "Halting Problem proofs have not been refuted by
Olcott")==TRUE
/Flibble
True("English",
"Halting Problem proofs have not "
"yet been completely refuted by Olcott")==TRUE
*This is proven entirely true on the sole basis of its meaning*
When 0 to ∞ instructions of DD are correctly simulated by HHH this
simulated DD never reaches its own simulated "return" statement final
halt state.
On Sat, 23 Aug 2025 10:25:29 -0500, olcott wrote:
On 8/23/2025 9:26 AM, Bonita Montero wrote:
On 23.08.2025 at 16:16, olcott wrote:
The actual case is that you are too f-cking stupid to find any mistake
in my work.
I wonder why one has to pursue such a question so paranoidly for years.
I mean, it's nothing that has practical relevance. And I can't
understand why others here are arguing so vehemently against it. It's
obvious that Pete is crazy. It's best not to respond at all, then maybe
it will calm down.
The analog of the technology applied to the HP proofs equally applies to
the Tarski Undecidability theorem thus enabling a Boolean True(Language
L, Expression E)
to return TRUE for all expressions E of language L that are proven true
on the basis of their meaning.
True("English", "Donald Trump lied about election fraud")==TRUE
True("English", "Severe climate change is caused by humans")==TRUE
True("English", "Halting Problem proofs have not been refuted by Olcott")==TRUE
On 23.08.2025 at 17:33, Mr Flibble wrote:
On Sat, 23 Aug 2025 10:25:29 -0500, olcott wrote:
On 8/23/2025 9:26 AM, Bonita Montero wrote:
On 23.08.2025 at 16:16, olcott wrote:
The actual case is that you are too f-cking stupid to find any
mistake in my work.
I wonder why one has to pursue such a question so paranoidly for
years.
I mean, it's nothing that has practical relevance. And I can't
understand why others here are arguing so vehemently against it. It's
obvious that Pete is crazy. It's best not to respond at all, then
maybe it will calm down.
The analog of the technology applied to the HP proofs equally applies
to the Tarski Undecidability theorem thus enabling a Boolean
True(Language L, Expression E)
to return TRUE for all expressions E of language L that are proven
true on the basis of their meaning.
True("English", "Donald Trump lied about election fraud")==TRUE
True("English", "Severe climate change is caused by humans")==TRUE
True("English", "Halting Problem proofs have not been refuted by
Olcott")==TRUE
You're not much better than Pete.
On Sat, 23 Aug 2025 11:23:51 -0500, olcott wrote:
On 8/23/2025 10:33 AM, Mr Flibble wrote:
On Sat, 23 Aug 2025 10:25:29 -0500, olcott wrote:
On 8/23/2025 9:26 AM, Bonita Montero wrote:
On 23.08.2025 at 16:16, olcott wrote:
The actual case is that you are too f-cking stupid to find any
mistake in my work.
I wonder why one has to pursue such a question so paranoidly for
years.
I mean, it's nothing that has practical relevance. And I can't
understand why others here are arguing so vehemently against it. It's
obvious that Pete is crazy. It's best not to respond at all, then
maybe it will calm down.
The analog of the technology applied to the HP proofs equally applies
to the Tarski Undecidability theorem thus enabling a Boolean
True(Language L, Expression E)
to return TRUE for all expressions E of language L that are proven
true on the basis of their meaning.
True("English", "Donald Trump lied about election fraud")==TRUE
True("English", "Severe climate change is caused by humans")==TRUE
True("English", "Halting Problem proofs have not been refuted by
Olcott")==TRUE
/Flibble
True("English",
"Halting Problem proofs have not"
"yet been completely refuted by Olcott")==TRUE
*This is proven entirely true on the sole basis of its meaning*
When 0 to ∞ instructions of DD are correctly simulated by HHH this
simulated DD never reaches its own simulated "return" statement final
halt state.
The simulated DD may never reach its final state, but nobody cares about
what your simulation does; what actually matters is what HHH reports (and
all deciders must report) to the REAL DD (the ultimate caller of HHH),
and then what the REAL DD does.
/Flibble
On Sat, 23 Aug 2025 18:46:05 +0200, Bonita Montero wrote:
On 23.08.2025 at 17:33, Mr Flibble wrote:
On Sat, 23 Aug 2025 10:25:29 -0500, olcott wrote:
On 8/23/2025 9:26 AM, Bonita Montero wrote:
On 23.08.2025 at 16:16, olcott wrote:
The actual case is that you are too f-cking stupid to find any
mistake in my work.
I wonder why one has to pursue such a question so paranoidly for
years.
I mean, it's nothing that has practical relevance. And I can't
understand why others here are arguing so vehemently against it. It's
obvious that Pete is crazy. It's best not to respond at all, then
maybe it will calm down.
The analog of the technology applied to the HP proofs equally applies
to the Tarski Undecidability theorem thus enabling a Boolean
True(Language L, Expression E)
to return TRUE for all expressions E of language L that are proven
true on the basis of their meaning.
True("English", "Donald Trump lied about election fraud")==TRUE
True("English", "Severe climate change is caused by humans")==TRUE
True("English", "Halting Problem proofs have not been refuted by
Olcott")==TRUE
You're not much better than Pete.
It is a well known fact that you are just a troll.
/Flibble
On 23.08.2025 at 16:41, Richard Heathfield wrote:
On 23/08/2025 15:26, Bonita Montero wrote:
On 23.08.2025 at 16:16, olcott wrote:
The actual case is that you are too f-cking
stupid to find any mistake in my work.
I wonder why one has to pursue such a question so paranoidly
for years. I mean, it's nothing that has practical relevance.
I beg your pardon?
If he's right, the conventional proof of the undecidability of the
Halting Problem is flawed. You don't think that matters?
That has no practical relevance.
In comp.lang.c Bonita Montero <Bonita.Montero@gmail.com> wrote:
On 23.08.2025 at 16:41, Richard Heathfield wrote:
On 23/08/2025 15:26, Bonita Montero wrote:
On 23.08.2025 at 16:16, olcott wrote:
The actual case is that you are too f-cking
stupid to find any mistake in my work.
I wonder why one has to pursue such a question so paranoidly
for years. I mean, it's nothing that has practical relevance.
I beg your pardon?
If he's right, the conventional proof of the undecidability of the
Halting Problem is flawed. You don't think that matters?
That has no practical relevance.
It has more relevance than you want to admit. If Pete had a working
halting decider he would not waste time on newsgroups. He would
be making money, possibly as a service to businesses. Or he would
generate a lot of bitcoins. Or turn to illegal activity, for
example break encryption and use the break to hijack money flowing
through electronic exchanges.
Note that basically one call to a halting decider gives you one
bit towards the solution of an arbitrary problem, with cost depending
only on the length of the problem description. A few thousand calls
and you have broken RSA as it is currently used.
But Pete does not have a working halting decider, so instead he tries
to get some attention in newsgroups. And that works quite well;
for many years he has gotten many replies. It is not clear whether he
has a psychological problem. Maybe he just has fun wasting others' time
(which you could argue is a psychological problem too, but a relatively
light one compared to the first alternative).