Professor Sipser has agreed that this is the correct criterion:
If simulating halt decider H correctly simulates its input
D until H correctly determines that its simulated D would
never stop running unless aborted then H can abort its
simulation of D and correctly report that D specifies a
non-halting sequence of configurations.
I don't think that is the shell game. PO really /has/ an H (it's
trivial to do for this one case) that correctly determines that P(P)
*would* never stop running *unless* aborted. He knows and accepts that
P(P) actually does stop. The wrong answer is justified by what would
happen if H (and hence a different P) were not what they actually are.
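For concreteness, here is a minimal, hypothetical C sketch of the construction being discussed. The names follow the thread (the Sipser paragraph calls the input D, Ben's reply calls it P); this is not olcott's actual x86utm code, and the real simulating H is far more involved, so its verdict is simply hard-coded here to show the shape of the disagreement.

    #include <stdio.h>

    typedef int (*ptr)();            /* function pointer type used in this sketch */

    /* Stand-in for the simulating halt decider.  The H under discussion
       simulates its input, determines that the simulated P(P) would never
       stop running unless aborted, aborts the simulation, and reports 0
       ("does not halt").  Here that verdict is hard-coded so the sketch
       compiles and runs. */
    static int H(ptr p, ptr i)
    {
        (void)p;
        (void)i;
        return 0;                    /* verdict: "P(P) does not halt" */
    }

    /* P is built diagonally from H: it does the opposite of whatever H
       predicts about P(P). */
    static int P(ptr p)
    {
        if (H(p, p))                 /* if H says P(P) halts ...           */
            for (;;) { }             /* ... then P loops forever           */
        return 0;                    /* if H says P(P) does not halt, halt */
    }

    int main(void)
    {
        int verdict = H((ptr)P, (ptr)P);   /* H's report about P(P)       */
        int ran     = P((ptr)P);           /* the directly executed P(P)  */
        printf("H(P,P) = %d, yet P(P) returned %d and halted\n", verdict, ran);
        return 0;
    }

The mismatch the program prints is exactly the point of contention: H reports 0 (non-halting) for P(P), while the directly executed P(P) stops.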
My 28-year goal has been to make
"true on the basis of meaning expressed in language"
reliably computable.
This required establishing a new foundation
for correct reasoning.
On 12/9/25 10:15 PM, polcott wrote:
> My 28-year goal has been to make
> "true on the basis of meaning expressed in language"
> reliably computable.
> This required establishing a new foundation
> for correct reasoning.
So DO it.
But first you need to know what those mean.
And, you need to accept that words have actual meaning, and thus you
can't change that meaning in the system they were defined in.
On 12/9/2025 10:04 PM, Richard Damon wrote:
> On 12/9/25 10:15 PM, polcott wrote:
>> My 28-year goal has been to make
>> "true on the basis of meaning expressed in language"
>> reliably computable.
>> This required establishing a new foundation
>> for correct reasoning.
>
> So DO it.
> But first you need to know what those mean.
> And, you need to accept that words have actual meaning, and thus you
> can't change that meaning in the system they were defined in.
Two different LLMs agreed that I have defined
a new architecture that solves all of the issues
for making true computable.
On 12/10/25 12:07 AM, olcott wrote:
> On 12/9/2025 10:04 PM, Richard Damon wrote:
>> On 12/9/25 10:15 PM, polcott wrote:
>>> My 28-year goal has been to make
>>> "true on the basis of meaning expressed in language"
>>> reliably computable.
>>> This required establishing a new foundation
>>> for correct reasoning.
>>
>> So DO it.
>> But first you need to know what those mean.
>> And, you need to accept that words have actual meaning, and thus you
>> can't change that meaning in the system they were defined in.
>
> Two different LLMs agreed that I have defined
> a new architecture that solves all of the issues
> for making true computable.
LLMs are proven liars, and are just yes-men.
On 12/10/2025 5:59 AM, Richard Damon wrote:
> On 12/10/25 12:07 AM, olcott wrote:
>> On 12/9/2025 10:04 PM, Richard Damon wrote:
>>> On 12/9/25 10:15 PM, polcott wrote:
>>>> My 28-year goal has been to make
>>>> "true on the basis of meaning expressed in language"
>>>> reliably computable.
>>>> This required establishing a new foundation
>>>> for correct reasoning.
>>>
>>> So DO it.
>>> But first you need to know what those mean.
>>> And, you need to accept that words have actual meaning, and thus you
>>> can't change that meaning in the system they were defined in.
>>
>> Two different LLMs agreed that I have defined
>> a new architecture that solves all of the issues
>> for making true computable.
>
> LLMs are proven liars, and are just yes-men.
If that were true, then you could find at least
one mistake in the final conclusions of their
assessments of my work.
There is no need to actually trust LLMs when
you can see that they are using sound semantic
entailment from verified facts and standard
definitions.
LLMs have proven to be at least 1000-fold better
reviewers for at least one key reason.
They never ever endlessly pretend to not understand
one simple little thing for the sole purpose of
remaining consistently disagreeable.