On 11/25/2025 at 4:47 PM, olcott wrote:
> On 11/25/2025 9:20 AM, Bonita Montero wrote:
>> What you do is like thinking in circles before falling asleep.
>> It never ends. You're going to die with that, sooner or later.
> I now have four different LLM AI models that prove
> that I am correct, on the basis that they derive the
> proof steps that prove that I am correct.
It doesn't matter whether you're correct. There's no benefit in discussing
such a theoretical topic for years. You won't even stop if everyone
tells you you're right.
Even Kimi, which was dead set against me, now fully
understands my new formal foundation for correct
reasoning.
On 11/25/2025 9:50 AM, Bonita Montero wrote:
> On 11/25/2025 at 4:47 PM, olcott wrote:
>> On 11/25/2025 9:20 AM, Bonita Montero wrote:
>>> What you do is like thinking in circles before falling asleep.
>>> It never ends. You're going to die with that, sooner or later.
>> I now have four different LLM AI models that prove
>> that I am correct, on the basis that they derive the
>> proof steps that prove that I am correct.
> It doesn't matter whether you're correct. There's no benefit in discussing
> such a theoretical topic for years. You won't even stop if everyone
> tells you you're right.
My whole purpose in this has been to establish a
new foundation for correct reasoning that gets rid
The timing for such a system is perfect because it
could solve the reliability issues of LLM AI.