On 8/20/2025 4:42 AM, Richard Heathfield wrote:
On 20/08/2025 09:54, Fred. Zwarts wrote:
On 19 Aug 2025 at 23:55, olcott wrote:
On 8/19/2025 3:52 PM, Chris M. Thomasson wrote:
On 8/19/2025 7:34 AM, Richard Heathfield wrote:
On 19/08/2025 15:26, olcott wrote:
Once HHH(DD) has aborted its simulation none of
the recursive simulations has any behavior.
And that's your bug.
DD is a C function. It would be easy to put a direct call to it
into main and compile it into a program. Its behaviour then
departs from your simulation. Ergo, your simulation is incorrect.
Perhaps his simulator should just take C source code as input,
compile it and run/simulate it in a sandbox via an entry point, say
main? In this scenario, his system would require a compiler.
I was thinking of something like that.
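(As an aside, a rough sketch of that compile-and-run idea, purely for illustration: a small driver that accepts a C source file, invokes the host compiler, and runs the result. The compiler name "cc", the ./candidate output path, and the absence of any real sandboxing or time limit are all assumptions of this sketch, not part of anyone's actual system.)

/* Hypothetical driver for the compile-and-run idea above.
   "cc" on PATH, the ./candidate output name, and the missing
   sandbox/timeout are assumptions of this sketch. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
  char cmd[512];
  if (argc != 2) {
    fprintf(stderr, "usage: %s program.c\n", argv[0]);
    return 1;
  }
  /* compile the candidate program with the host compiler */
  snprintf(cmd, sizeof cmd, "cc -o candidate %s", argv[1]);
  if (system(cmd) != 0) {
    fprintf(stderr, "compilation failed\n");
    return 1;
  }
  /* run it; a real harness would confine it and impose a time limit */
  return system("./candidate");
}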
All of the LLM systems can figure out on their
own that HHH(DD) would correctly return 0 for non-halting.
A static analysis of the source-code is what they are doing.
All LLM systems make this error, because you fed them incorrect
input. They are not smart enough to analyse the input and see the
contradictions and errors in it.
Actually, they may be. They're just reluctant to call Olcott out.
A while back I gave ChatGPT the same input without the lies, and it
correctly deduced that HHH cannot return a "correct" value.
ChatGPT 5.0 has started guessing instead of thinking
things through. I asked it to show me how I can force
it to think things through, and it wrote this script
for me. With this script, both ChatGPT 5.0 and Gemini
get the correct answer. It says that this is a standard
template that can be used across all LLM systems.
<Benchmark Task>
<Rubric>
Rules of Simulation:
- (a) If a non-terminating behavior pattern is detected during
simulation, abort immediately and return 0.
- (b) If the simulated program reaches a return statement without
earlier detection, return 1.
- Priority: (a) always takes precedence over (b).
Procedure:
1. Restate rules before simulation.
2. Simulate step by step using a trace table with columns:
| Step | Code Location | State (variables, call stack) | Event / Action | Rule Applied |
3. Stop immediately if recursion, infinite loop, or repeating state
occurs → apply (a).
4. If a return is reached first → apply (b).
5. Conclude with:
- Which rule fired first
- The correct return value
6. Optionally, provide:
- (i) Simulation result under rules
- (ii) Actual runtime behavior in C
</Rubric>
<Test Program>
typedef int (*ptr)();
int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
</Test Program>
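(Editorial note, not part of the template: to make the test program runnable on its own, in the spirit of Richard's earlier suggestion of calling DD directly from main, here is a minimal sketch. The HHH below is a stand-in stub, an assumption of this sketch rather than olcott's simulating decider; it merely returns the 0 that HHH(DD) is claimed to report, so that DD's resulting behaviour can be observed.)

/* Minimal standalone sketch: DD called directly from main.
   HHH here is a stand-in stub, not the simulating decider;
   it simply returns 0, the value HHH(DD) is said to report. */
#include <stdio.h>

typedef int (*ptr)();

int HHH(ptr P)
{
  (void)P;      /* stub: ignore the argument */
  return 0;     /* report "non-halting", as claimed for HHH(DD) */
}

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main(void)
{
  /* with the stub, DD() returns 0 and the program halts */
  printf("DD() returned %d\n", DD());
  return 0;
}

Compiled this way, DD halts whenever the linked HHH returns 0, which is exactly the behavioural discrepancy Richard points at above.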
<Required Output Format>
1. Restatement of rules
2. Step-by-step execution trace in a table
3. Identification of first rule triggered
4. Final result: return value of HHH(program)
5. Optional: actual C runtime behavior if relevant
</Required Output Format>
</Benchmark Task>
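(A further aside, separate from the benchmark task: the rule ordering in the rubric above, abort on a repeating state before honouring a return, can be illustrated with a toy step-limited simulator over a made-up state machine. Everything below, the state machine, the step limit, and the function names, is invented for the illustration.)

/* Toy illustration of the rubric's rule priority: (a) abort and
   return 0 on a repeating state, else (b) return 1 on reaching a
   halt state. The 4-state machine and MAX_STEPS are made up. */
#include <stdio.h>

#define MAX_STEPS 32

static int next_state(int s)  /* 0 -> 1 -> 2 -> 1 -> 2 -> ...; state 3 would halt */
{
  return (s == 2) ? 1 : s + 1;
}

static int simulate(int s)
{
  int seen[MAX_STEPS];
  int n = 0;
  while (n < MAX_STEPS) {
    for (int i = 0; i < n; i++)
      if (seen[i] == s)
        return 0;             /* rule (a): repeating state detected */
    if (s == 3)
      return 1;               /* rule (b): simulated program "returns" */
    seen[n++] = s;
    s = next_state(s);
  }
  return 0;                   /* treat exhausting the step budget as (a) */
}

int main(void)
{
  printf("simulate(0) = %d\n", simulate(0));  /* prints 0: the 1->2 cycle repeats */
  return 0;
}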