Rejecting expressions of formal language
having pathological self-reference
Explains how expressions with pathological self-reference
can simply be rejected as semantically/syntactically
unsound, thus preventing undefinability and undecidability.
This sentence is not true: "This sentence is not true"
is true only because the inner sentence is semantically
unsound. The inner sentence is formalized in Minimal
Type Theory as LP := ~True(LP).
(where A := B means A is defined as B).
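For reference, the case analysis behind calling such a sentence unsound can be sketched as follows (a textbook liar-paradox derivation under Tarski's T-schema; this sketch is mine, not taken from the paper):

```latex
% LP := ~True(LP), with := read as "is defined as".
% Under the T-schema, True(LP) <-> LP; substituting the
% definition of LP yields a direct contradiction:
\[
  \mathrm{True}(LP) \;\leftrightarrow\; LP \;\leftrightarrow\; \lnot\,\mathrm{True}(LP)
\]
% hence True(LP) <-> ~True(LP), so no classical two-valued
% assignment of a truth value to LP is consistent.
```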
https://philpapers.org/rec/OLCREO
Can someone review my actual reasoning
elaborated in the paper?
The huge advantage of LLM systems is that they do not
begin their review on the basis that [Olcott is wrong]
is an axiom. No humans have ever been able to do this
in thousands of reviews across dozens of forums.
----
Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
On 2025-11-12, olcott <polcott333@gmail.com> wrote:
The huge advantages of LLM systems is that they do not
begin their review on the basis that [Olcott is wrong]
is an axiom. No humans have ever been able to do this
in thousands of reviews across dozens of forums.
Your principal modus operandi is that you reject any piece of evidence
which contradicts your set view in any matter.
Here you are also doing it again: completely overlooking all the times
someone has agreed with you on some point, and declaring that
everyone has a /personal/ bias against you.
Anyway, that /is/ the right approach toward anything. Anything anyone
says should be suspected of having a factual or logical flaw, until
proven otherwise.
Here is an idea for you: maybe try being right 95% of the time, for a
while. Say two weeks, or a month. Instead of your usual, 95%+ wrong.
There is a human bias at play here and I will explain it to you:
people are more motivated to respond when you are wrong.
If you're 80% wrong, the 0.8 fraction of your remarks that is
wrong will get more engagement than the 0.2 that are right.
Thus it might be that, among those of your remarks which fetch
engagement, the fraction which are wrong might be amplified to
something much higher, like 0.97.
This is why social networking algorithms are rigged to spread
rage bait: to drum up engagement.
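The amplification claimed above follows from simple arithmetic; a sketch (the engagement multiplier `k` is an illustrative assumption of this sketch, not a measured value):

```python
def engaged_wrong_fraction(p_wrong: float, k: float) -> float:
    """Fraction of *engaged-with* remarks that are wrong, when wrong
    remarks draw k times the engagement of right ones.

    p_wrong : base fraction of remarks that are wrong
    k       : engagement multiplier for wrong remarks (assumed)
    """
    wrong_weight = p_wrong * k            # engagement mass from wrong remarks
    right_weight = (1.0 - p_wrong) * 1.0  # engagement mass from right remarks
    return wrong_weight / (wrong_weight + right_weight)

# With 80% wrong and wrong remarks drawing 8x the replies, the wrong
# share among engaged remarks rises to roughly 0.97, as in the text.
print(round(engaged_wrong_fraction(0.80, 8.0), 2))
```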
It's amazing you don't have the maturity to know all this on your own;
that it has to be explained to a grown up.
Most of us here struggle not to say an incorrect thing in a comp.*
newsgroup or elsewhere, yet you have practically made a sport out of it;
you say some wrong shit more times in a day than an NBA player takes a
shot at the hoop.
https://philpapers.org/rec/OLCREO
On 2025-11-12, olcott <polcott333@gmail.com> wrote:
https://philpapers.org/rec/OLCREO
This garbage doesn't even feign an attempt at hiding that it's a fucking
chat session with Claude AI.
On 11/12/2025 12:39 PM, Kaz Kylheku wrote:
On 2025-11-12, olcott <polcott333@gmail.com> wrote:
The huge advantages of LLM systems is that they do not
begin their review on the basis that [Olcott is wrong]
is an axiom. No humans have ever been able to do this
in thousands of reviews across dozens of forums.
Your principal modus operandi is that you reject any piece of evidence
which contradicts your set view in any matter.
Not at all. Not ever.
Here you are also doing it again: completely overlooking all the times
someone has agreed with you in some point, and declaring that that
everyon has a /personal/ bias against you.
People on these forums have only agreed with
me at most 1% of the time and the only case
besides Ben the agreement was on trivialities.
You are still trying to get away with the utter
nonsense that once correct non-termination
behavior criteria have been correctly met
that you can try again and get a different
result. I would really like to think that
you are not a damned liar, yet no alternative
seems reasonably plausible.
Anyway, that /is/ the right approach toward anything. Anything anyone
says should be suspected of having a factual or logical flaw, until
proven otherwise.
Not when the basis of proof requires them
to actually pay close attention when they
are utterly unwilling to do this because they
are so sure that I must be wrong.
Here is an idea for you: maybe try being right 95% of the time, for a
while. Say two weeks, or a month. Instead of your usual, 95%+ wrong.
I have been completely right on the essence of
what I have been saying for 22 years.
There is a human bias at play here and I will explain it to you:
people are more motivated to respond when you are wrong.
OK, some honesty; that is refreshing.
If you're 80% wrong, the 0.8 fraction of your remarks that is
wrong will get more engagement than the 0.2 that are right.
Thus it might be that, among those of your remarks which fetch
engagement, the fraction which are wrong might be amplified to
something much higher, like 0.97.
This is why social networking algorithms are rigged to spread
rage bait: to drum up engagement.
It's amazing you don't have the maturity to know all this on your own;
that it has to be explained to a grown up.
It is a verified fact that I have been continually
Now I have LLM systems that show the complete details
of exactly how and why I am correct. If they were
simply "yes men" they could not possibly do this.
Most of us here struggle not to say an incorrect thing. in a comp.*
newsgroup or elsewhere, yet here you practically made a sport out of it;
you say some wrong shit more times in a day than an NBA player takes a
shot at the hoop.
I say things that do not conform to conventional
wisdom and people here don't even understand the
reasoning behind conventional wisdom.
When I point out the error in this reasoning people
here are utterly helpless. The most they can do is
say that I must be wrong entirely on the basis
that I contradict conventional wisdom.
On 11/12/2025 8:22 PM, Kaz Kylheku wrote:
On 2025-11-12, olcott <polcott333@gmail.com> wrote:
https://philpapers.org/rec/OLCREO
This garbage doesn't even feign an attempt at hiding that it's a fucking
chat session with Claude AI.
Of course I don't. It proves that I am
correct, try and find an actual error.
I had ChatGPT look at this too.
On 2025-11-12, olcott <polcott333@gmail.com> wrote:
On 11/12/2025 12:39 PM, Kaz Kylheku wrote:
On 2025-11-12, olcott <polcott333@gmail.com> wrote:
The huge advantages of LLM systems is that they do not
begin their review on the basis that [Olcott is wrong]
is an axiom. No humans have ever been able to do this
in thousands of reviews across dozens of forums.
Your principal modus operandi is that you reject any piece of evidence
which contradicts your set view in any matter.
Not at all. Not ever.
Even now, in this post.
Here you are also doing it again: completely overlooking all the times
someone has agreed with you in some point, and declaring that that
everyon has a /personal/ bias against you.
People on these forums have only agreed with
me at most 1% of the time and the only case
1% is much larger than zero.
besides Ben the agreement was on trivialities.
Grasping trivialities is the extent of your skill, so that's
all you get.
On 11/12/2025 8:36 PM, Kaz Kylheku wrote:
On 2025-11-12, olcott <polcott333@gmail.com> wrote:
On 11/12/2025 12:39 PM, Kaz Kylheku wrote:
On 2025-11-12, olcott <polcott333@gmail.com> wrote:
The huge advantages of LLM systems is that they do not
begin their review on the basis that [Olcott is wrong]
is an axiom. No humans have ever been able to do this
in thousands of reviews across dozens of forums.
Your principal modus operandi is that you reject any piece of evidence
which contradicts your set view in any matter.
Not at all. Not ever.
Even now, in this post.
Here you are also doing it again: completely overlooking all the times
someone has agreed with you in some point, and declaring that that
everyon has a /personal/ bias against you.
People on these forums have only agreed with
me at most 1% of the time and the only case
1% is much larger than zero.
besides Ben the agreement was on trivialities.
Grasping trivialities is the extent of your skill, so that's
all you get.
You are not following the actual reasoning of the
paper. You leap to the conclusion that I am wrong.
That is not you pointing out an error.
You don't even know what a cycle in the directed
graph of the evaluation sequence of an expression
is so you lack any basis to critique this.
(mlet ((z (* 2 x)) (y 42)) ...)
(mlet ((z (* 2 x)) (y (/ z 2))) ...)
(set *print-circle* t)   ;; -> t
(mlet ((x (cons 0 y)) (y (cons 1 x))) ...)
(mlet ((x (lcons 0 y)) (y (lcons 1 x))) ...)
(set *print-circle* nil) ;; -> nil
(mlet ((x (lcons 0 y)) (y (lcons 1 x))) ...)
(mlet ((x (lcons 0 y)) (y (lcons 1 x))) ...)
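For what it's worth, "a cycle in the directed graph of the evaluation sequence" has a routine formal reading; here is a minimal sketch of detecting such a cycle (the adjacency-dict encoding of expressions is an assumption of this sketch, not taken from the paper):

```python
def has_cycle(graph: dict) -> bool:
    """Detect a cycle in a directed graph given as {node: [successors]},
    using iterative depth-first search with three-state coloring."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}
    for start in graph:
        if color[start] != WHITE:
            continue
        stack = [(start, iter(graph[start]))]
        color[start] = GRAY
        while stack:
            node, succs = stack[-1]
            for nxt in succs:
                if color.get(nxt, WHITE) == GRAY:
                    return True  # back edge: a node still "in progress"
                if color.get(nxt, WHITE) == WHITE:
                    color[nxt] = GRAY
                    stack.append((nxt, iter(graph.get(nxt, []))))
                    break
            else:
                color[node] = BLACK
                stack.pop()
    return False

# LP := ~True(LP): evaluating LP requires True(LP), which requires LP.
liar = {"LP": ["True(LP)"], "True(LP)": ["LP"]}
# An ordinary definition bottoms out without revisiting a node in progress.
plain = {"z": ["*", "x"], "*": [], "x": []}
print(has_cycle(liar), has_cycle(plain))  # prints: True False
```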
On 2025-11-13, olcott <polcott333@gmail.com> wrote:
On 11/12/2025 8:36 PM, Kaz Kylheku wrote:
On 2025-11-12, olcott <polcott333@gmail.com> wrote:
On 11/12/2025 12:39 PM, Kaz Kylheku wrote:
On 2025-11-12, olcott <polcott333@gmail.com> wrote:
The huge advantages of LLM systems is that they do not
begin their review on the basis that [Olcott is wrong]
is an axiom. No humans have ever been able to do this
in thousands of reviews across dozens of forums.
Your principal modus operandi is that you reject any piece of evidence
which contradicts your set view in any matter.
Not at all. Not ever.
Even now, in this post.
Here you are also doing it again: completely overlooking all the times
someone has agreed with you in some point, and declaring that that
everyon has a /personal/ bias against you.
People on these forums have only agreed with
me at most 1% of the time and the only case
1% is much larger than zero.
besides Ben the agreement was on trivialities.
Grasping trivialities is the extent of your skill, so that's
all you get.
You are not following the actual reasoning of the
paper. You leap to the conclusion that I am wrong.
That is not you pointing out an error.
That's what you did to my code.
I'm categorically rejecting your paper because it
is AI slop that takes /zero/ effort to generate,
but /nonzero/ effort to go through and validate.
On 2025-11-13, olcott <polcott333@gmail.com> wrote:
On 11/12/2025 8:22 PM, Kaz Kylheku wrote:
On 2025-11-12, olcott <polcott333@gmail.com> wrote:
https://philpapers.org/rec/OLCREO
This garbage doesn't even feign an attempt at hiding that it's a fucking
chat session with Claude AI.
Of course I don't. It proves that I am
correct, try and find an actual error.
No CS academic is going to do anything but swipe left.
You will have to do your own thinking and writing if you want to be
taken seriously. Not that that's a guarantee (and we've all seen
what your own writing looks like).
I had ChatGPT look at this too.
Oh well, then, that's an intellectual slam dunk, obviously.
On 13/11/2025 02:38, Kaz Kylheku wrote:
On 2025-11-13, olcott <polcott333@gmail.com> wrote:
<snip>
I had ChatGPT look at this too.
Oh well, then, that's an intellectual slam dunk, obviously.
To be fair to ChatGPT, if you present it with Olcott's gibberish
*without* first carefully prepping it with falsehoods like 'correctly simulates', it unhesitatingly identifies the whole thing as a load of gibberish.
On 11/12/2025 9:22 PM, Kaz Kylheku wrote:
On 2025-11-13, olcott <polcott333@gmail.com> wrote:
On 11/12/2025 8:36 PM, Kaz Kylheku wrote:
On 2025-11-12, olcott <polcott333@gmail.com> wrote:
On 11/12/2025 12:39 PM, Kaz Kylheku wrote:
On 2025-11-12, olcott <polcott333@gmail.com> wrote:
The huge advantages of LLM systems is that they do not
begin their review on the basis that [Olcott is wrong]
is an axiom. No humans have ever been able to do this
in thousands of reviews across dozens of forums.
Your principal modus operandi is that you reject any piece of evidence
which contradicts your set view in any matter.
Not at all. Not ever.
Even now, in this post.
Here you are also doing it again: completely overlooking all the times
someone has agreed with you in some point, and declaring that that
everyon has a /personal/ bias against you.
People on these forums have only agreed with
me at most 1% of the time and the only case
1% is much larger than zero.
besides Ben the agreement was on trivialities.
Grasping trivialities is the extent of your skill, so that's
all you get.
You are not following the actual reasoning of the
paper. You leap to the conclusion that I am wrong.
That is not you pointing out an error.
That's what you did to my code.
Your code essentially claims that infinite recursion
stops when you monkey with it.
On 2025-11-13, olcott <polcott333@gmail.com> wrote:
On 11/12/2025 9:22 PM, Kaz Kylheku wrote:
On 2025-11-13, olcott <polcott333@gmail.com> wrote:
On 11/12/2025 8:36 PM, Kaz Kylheku wrote:
On 2025-11-12, olcott <polcott333@gmail.com> wrote:
On 11/12/2025 12:39 PM, Kaz Kylheku wrote:
On 2025-11-12, olcott <polcott333@gmail.com> wrote:
The huge advantages of LLM systems is that they do not
begin their review on the basis that [Olcott is wrong]
is an axiom. No humans have ever been able to do this
in thousands of reviews across dozens of forums.
Your principal modus operandi is that you reject any piece of evidence
which contradicts your set view in any matter.
Not at all. Not ever.
Even now, in this post.
Here you are also doing it again: completely overlooking all the times
someone has agreed with you in some point, and declaring that that
everyon has a /personal/ bias against you.
People on these forums have only agreed with
me at most 1% of the time and the only case
1% is much larger than zero.
besides Ben the agreement was on trivialities.
Grasping trivialities is the extent of your skill, so that's
all you get.
You are not following the actual reasoning of the
paper. You leap to the conclusion that I am wrong.
That is not you pointing out an error.
That's what you did to my code.
Your code essentially claims that infinite recursion
stops when you monkey with it.
You're welcome to point out what exactly you mean by "monkey" and which
lines of code are doing that.
Which bits am I flipping that constitute monkeying?
Remember, the code takes the state of an abandoned simulation
/exactly/ as it was left by HHH (or whichever decider)
And then it steps that simulation forward in exactly the correct way,
the same way that HHH previously stepped it: it passes precisely the
correct slave_state, and other arguments, to DebugStep.
The code does not manipulate the content of slave_state other than
stepping it with DebugStep (your function, the same one used by HHH).
Between the time HHH abandoned the simulation and the new code
starts stepping it again, nothing has touched slave_state or
slave_stack.
So again, what is monkeying and where is it happening?
You've had several weeks to back up your claim ... and nothing.
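The mechanism being argued over can be illustrated in miniature (a toy model; `State`, `step`, and the countdown machine are inventions of this sketch, standing in for slave_state, DebugStep, and the simulated program, and make no claim about the actual HHH code):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class State:
    """Toy machine: counts n down to zero, then halts."""
    n: int
    halted: bool = False

def step(s: State) -> State:
    # The single step function, used by both the decider and the resumer.
    if s.halted or s.n <= 0:
        return replace(s, halted=True)
    return replace(s, n=s.n - 1)

def abort_after(s: State, budget: int) -> State:
    """A decider that gives up after `budget` steps, abandoning the state."""
    for _ in range(budget):
        if s.halted:
            break
        s = step(s)
    return s  # abandoned mid-run, untouched thereafter

def resume_to_completion(s: State, limit: int = 1000) -> State:
    """Pick up the abandoned state and keep stepping with the same step()."""
    for _ in range(limit):
        if s.halted:
            break
        s = step(s)
    return s

abandoned = abort_after(State(n=10), budget=3)  # decider quits early at n=7
final = resume_to_completion(abandoned)         # same step(), carried on
print(abandoned.n, final.halted)
```

The point of the sketch: the resumer applies only the shared `step` function to the state exactly as abandoned, so nothing about the simulated computation is altered.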
On 11/13/2025 2:44 AM, Kaz Kylheku wrote:
Your code essentially claims that infinite recursion
stops when you monkey with it.
You're welcome to point of what exactly you mean by "monkey" and which
lines of code are doing that.
Once D simulated by H correctly matches its correct
non-halting behavior pattern, doing anything besides
aborting the simulation and rejecting the input is cheating.
On 11/12/2025 8:45 AM, olcott wrote:
This sentence is not true: "This sentence is not true"
is true only because the inner sentence is semantically
unsound.
The inner sentence is formalized in Minimal
Type Theory as LP := ~True(LP).
(where A := B means A is defined as B).
https://philpapers.org/rec/OLCREO
Can someone review my actual reasoning
elaborated in the paper?
The huge advantages of LLM systems is that they do not
begin their review on the basis that [Olcott is wrong]
is an axiom. No humans have ever been able to do this
in thousands of reviews across dozens of forums.
On 12/11/2025 17:57, olcott wrote:
On 11/12/2025 8:45 AM, olcott wrote:
Noisy:
This sentence is not true: "This sentence is not true"
is true only because the inner sentence is semantically
unsound.
Woah! Because your post provides some meaning for the interpretation of
your paper I think the above needs to be addressed.
We have the name of a sentence "This sentence is not true"
We have a sentence about it:
"This sentence is not true" is true only because the inner sentence is semantically unsound.
And we have a sentence that is constructed like a lambda expression but
using something a bit like a de Bruijn reference turned inside out:
This sentence is not true: "This sentence is not true" is true only
because the inner sentence is semantically unsound.
but with a noisy surrounding fluff beta-ish-reducing to:
["This sentence is not true" is true only because the inner sentence is semantically unsound] is not true
in there we still have the foremost interesting
outie-de-bruijn-ish-innie reference "the inner sentence" which I suppose therein refers to the referent of the unique syntactically most
contained nominal phrase that references a sentence, to wit, the
referent of "This sentence" in "This sentence is not true" in the Noisy.
then the whole says it's not true that some purported semantic
unsoundness of what that reference refers to is the sole basis for
inferring some unstated notion of nontruth about the referent of "This sentence" in "This sentence is not true" in the Noisy.
Have I understood what you're saying, minister?
[Why "minister", see https://www.youtube.com/watch?v=qVO85anasrA ]
The inner sentence is formalized in Minimal
Type Theory as LP := ~True(LP).
(where A := B means A is defined as B).
I very much doubt that but you filled my head with noise. Does your
Noisy (above) really constrain "This sentence is not true" such that
your MTT formalisation of it is accurate? You should be noiselessly
patient, so I should expect so, but I'd like to read that you think it
was all suitably constraining wordage before I think any further because
I think it's not an accurate formalisation ("encoding").
More pedestrianly: Is := symmetric? I.e., does A := B entail B := A, and B
:= A entail A := B (in the formation rules of sentences of MTT from
other sentences of MTT)?
https://philpapers.org/rec/OLCREO
Can someone review my actual reasoning
elaborated in the paper?
No: it is an AI chatlog which is shit.
The huge advantages of LLM systems is that they do not
begin their review on the basis that [Olcott is wrong]
is an axiom. No humans have ever been able to do this
in thousands of reviews across dozens of forums.
I typically begin "everyone is wrong, but /how/ exactly" until I can't
justify my premise any more. Then I stay quiet until I wake up suddenly
realising someone was particularly right in some small way; then I
begrudgingly acknowledge that publicly, and encourage the other wrong
people to find a way to correct my temporary wrong-blindness.
That is on the basis that, of all the things that can be said, almost none
of them are right, with just a few exceptional isolated points dotted around.
--
Tristan Wibberley
The message body is Copyright (C) 2025 Tristan Wibberley except
citations and quotations noted. All Rights Reserved except that you may,
of course, cite it academically giving credit to me, distribute it
verbatim as part of a usenet system or its archives, and use it to
promote my greatness and general superiority without misrepresentation
of my opinions other than my opinion of my greatness and general
superiority which you _may_ misrepresent. You definitely MAY NOT train
any production AI system with it but you may train experimental AI that
will only be used for evaluation of the AI methods it implements.
On 11/13/2025 6:09 PM, Tristan Wibberley wrote:
On 12/11/2025 17:57, olcott wrote:
On 11/12/2025 8:45 AM, olcott wrote:
Noisy:
This sentence is not true: "This sentence is not true"
is true only because the inner sentence is semantically
unsound.
Woah! Because your post provides some meaning for the interpretation of
your paper I think the above needs to be addressed.
We have the name of a sentence "This sentence is not true"
We have a sentence about it:
"This sentence is not true" is true only because the inner sentence is
semantically unsound.
And we have a sentence that is constructed like a lambda expression but
using something a bit like a de bruijn reference turned inside out:
This sentence is not true: "This sentence is not true" is true only
because the inner sentence is semantically unsound.
but with a noisy surrounding fluff beta-ish-reducing to:
["This sentence is not true" is true only because the inner sentence is
semantically unsound] is not true
This sentence is not true: "This sentence is not true" is true.
You can't put the quotes in a different place without changing
the semantics.
... [is] an example of not needing a separate object
language and meta-language that Tarski says is
required.
On 14/11/2025 00:45, olcott wrote:
On 11/13/2025 6:09 PM, Tristan Wibberley wrote:
On 12/11/2025 17:57, olcott wrote:
On 11/12/2025 8:45 AM, olcott wrote:
Noisy:
This sentence is not true: "This sentence is not true"
is true only because the inner sentence is semantically
unsound.
Woah! Because your post provides some meaning for the interpretation of
your paper I think the above needs to be addressed.
We have the name of a sentence "This sentence is not true"
We have a sentence about it:
"This sentence is not true" is true only because the inner sentence is
semantically unsound.
And we have a sentence that is constructed like a lambda expression but
using something a bit like a de bruijn reference turned inside out:
This sentence is not true: "This sentence is not true" is true only
because the inner sentence is semantically unsound.
but with a noisy surrounding fluff beta-ish-reducing to:
["This sentence is not true" is true only because the inner sentence is
semantically unsound] is not true
This sentence is not true: "This sentence is not true" is true.
You can't put the quotes in a different place without changing
the semantics.
You haven't tried to understand what I wrote. You have just guessed at a response.
--
Tristan Wibberley
On 14/11/2025 00:45, olcott wrote:
... [is] an example of not needing a separate object
language and meta-language that Tarski says is
required.
You've got that unqualified. Did Tarski say it unqualified?
--
Tristan Wibberley
This sentence is not true: "This sentence is not true" is true
is the essence of the Tarski Undefinability theorem.
On 14/11/2025 02:29, olcott wrote:
This sentence is not true: "This sentence is not true" is true
is the essence of the Tarski Undefinability theorem.
Once again, you've just guessed at a response instead of trying to understand.
--
Tristan Wibberley
On 2025-11-14, Alan Mackenzie <acm@muc.de> wrote:
olcott <polcott333@gmail.com> wrote:
99% of experts will reject something that does not conform
to conventional wisdom without even looking at it.
They've got better things to do with their time than continually refuting
falsehoods which contradict proven basics.
LLM systems will look at something that does not conform
to conventional wisdom and form their own proof that this
idea is correct showing every detail of every step of this proof.
https://www.researchgate.net/publication/396916355_Halting_Problem_Simulation_in_C
Then why are you posting on Usenet, where people aren't writing what you
want them to write? Why not stick to these LLM systems which do reply
what you want them to reply?
Because he knows they are bullshit that is programmed to agree with
the user if the user persists in fighting through pushback.
The early versions of GPT-4 integrated into Microsoft Edge were
better! That was programmed to detect argumentative cranks and
end the conversation.
It was an essential feature that should continue to be implemented
in new LLM chat agents, in spite of more generous token limits.
Even in paid service, for that matter.
If the user is persisting through more than three or four rounds
of factual pushback: "This conversation is not productive; perhaps
I can help you with something else" and that's it.
Cranks like Olcott would get squat all agreement out of that.
Chat AI that talks endlessly and lets itself be overwhelmed
is a public disservice. It's not as egregious as supporting someone
in planning to harm oneself or others, but it's in the same vein.
Agreeing with someone's bullshit after forty rounds is a palpable perpetration of social harm.
People here would much rather assume that they are already
correct than to bother verifying anything.
If you weren't so damned sure that I must be wrong
you could see that.
On 2025-11-15 15:51:45 +0000, olcott said:
On 11/15/2025 3:56 AM, Mikko wrote:
On 2025-11-14 14:33:11 +0000, olcott said:
On 11/14/2025 2:53 AM, Mikko wrote:
On 2025-11-13 16:06:50 +0000, olcott said:
On 11/13/2025 3:18 AM, Mikko wrote:
On 2025-11-12 18:12:44 +0000, Alan Mackenzie said:
[ Followup-To: set ]
In comp.theory olcott <polcott333@gmail.com> wrote:
[ .... ]
The huge advantages of LLM systems is that they do not
begin their review on the basis that [Olcott is wrong]
is an axiom. No humans have ever been able to do this
in thousands of reviews across dozens of forums.
The huge disadvantage of LLM systems is that they begin their review on
the basis that Olcott is right. Intelligent people do not do this.
They evaluate what Olcott has written and pronounce it either right or
(much more usually) wrong.
Honest, intelligent people don't pronounce anything they haven't seen
before right. The nearest they can say is "no obvious errors" or "looks
good" or something that means the same. To actually check something
takes more time and work.
Most people are sheep when they see something that does
not conform to conventional wisdom they reject it.
Syntax error. There are three clauses but it is not clear which words
belong to which.
It is not a good idea to reject conventional or other wisdom without
a good reason. Even with a good reason it is not a good idea to reject
more than what the good reason requires.
99% of experts will reject something that does not conform
to conventional wisdom without even looking at it.
How many experts have you asked? If you only asked 100 experts then 99% is an
inaccurate result.
I asked about 200 experts in dozens of different forums
and all of them rejected my ideas out-of-hand without
even looking at them.
How did you determine "without even looking at them"?
olcott wrote on 14.11.2025 at 16.42:
On 11/14/2025 3:01 AM, Mikko wrote:
On 2025-11-13 16:00:58 +0000, olcott said:
On 11/13/2025 3:05 AM, Mikko wrote:
On 2025-11-12 14:45:34 +0000, olcott said:
Rejecting expressions of formal language
having pathological self-reference
Explained how expressions with pathological self
reference can simply be rejected as semantically/
syntactically unsound thus preventing undefinability,
and undecidability.
This sentence is not true: "This sentence is not true"
is true only because the inner sentence is semantically
unsound. The inner sentence is formalized in Minimal
Type Theory as LP := ~True(LP).
(where A := B means A is defined as B).
https://philpapers.org/rec/OLCREO
Can someone review my actual reasoning
elaborated in the paper?
If you want to use the term "formal language" you must prove that
there is a Turing machine that can determine whether a string is a
valid sentence of your language. If no such Turing machine exists
you have no justification for the use of the word "formal".
Been there done that and provided all the details.
Where? At least not where the above pointer points to.
In the paper that you failed to read.
https://www.researchgate.net/publication/397533139_Rejecting_expressions_of_formal_language_having_pathological_self-reference
In that article there is no proof that there is a Turing machine
that can determine whether a string is a valid sentence
of your language. The article does not even mention Turing machines.
This is formalized in the Prolog programming language
?- LP = not(true(LP)).
LP = not(true(LP)).
?- unify_with_occurs_check(LP, not(true(LP))).
false.
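For readers unfamiliar with occurs checks, the difference between the two queries above can be sketched in Python. This is a hedged analogue for illustration only, not SWI-Prolog's actual implementation; the names `Var`, `occurs`, and `unify_var` are mine. Plain unification (`=/2`) happily binds LP to a term that contains LP, creating a cyclic term, while `unify_with_occurs_check/2` refuses exactly that binding.

```python
# Minimal sketch of unification with and without an occurs check.
# Compound terms are tuples: ("not", ("true", X)) stands for not(true(X)).

class Var:
    """A logic variable; `ref` holds its binding once unified."""
    def __init__(self, name):
        self.name = name
        self.ref = None

def occurs(var, term):
    """Return True if `var` appears anywhere inside `term`."""
    if term is var:
        return True
    if isinstance(term, tuple):            # compound term: (functor, *args)
        return any(occurs(var, arg) for arg in term[1:])
    return False

def unify_var(var, term, occurs_check=False):
    """Bind `var` to `term`; with occurs_check, reject cyclic bindings."""
    if occurs_check and occurs(var, term):
        return False                       # analogous to: false.
    var.ref = term                         # analogous to: LP = not(true(LP)).
    return True

LP = Var("LP")
print(unify_var(LP, ("not", ("true", LP))))                       # True
LP2 = Var("LP")
print(unify_var(LP2, ("not", ("true", LP2)), occurs_check=True))  # False
```

With the occurs check off, the binding succeeds but produces a term that contains itself; with the check on, the attempt fails, mirroring the two Prolog queries.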
On 26/11/2025 15:27, olcott wrote:
This is formalized in the Prolog programming language
?- LP = not(true(LP)).
LP = not(true(LP)).
?- unify_with_occurs_check(LP, not(true(LP))).
false.
Why do you keep posting that?
On 11/26/2025 1:46 PM, Tristan Wibberley wrote:
On 26/11/2025 15:27, olcott wrote:
This is formalized in the Prolog programming language
?- LP = not(true(LP)).
LP = not(true(LP)).
?- unify_with_occurs_check(LP, not(true(LP))).
false.
Why do you keep posting that?
Until it is accepted or someone correctly shows
how it does not once-and-for-all resolve
the Liar Paradox I will keep posting it.
The ONLY way that I can possibly have
any success with these kind of things
is to have 1000-fold more persistence
than the next most persistent person.
On 11/26/2025 4:30 AM, Mikko wrote:
[ .... ]
In that article there is no proof that there is a Turing machine
that can determine whether a string is a valid sentence
of your language. The article does not even mention Turing machines.
It is all the deep meaning of unify_with_occurs_check()
that rejects an expression as semantically unsound
because its evaluation is stuck in an infinite loop.
The Liar Paradox formalized in the Prolog Programming language
This sentence is not true.
It is not true about what?
It is not true about being not true.
It is not true about being not true about what?
It is not true about being not true about being not true.
Oh I see you are stuck in a loop!
This is formalized in the Prolog programming language
?- LP = not(true(LP)).
LP = not(true(LP)).
?- unify_with_occurs_check(LP, not(true(LP))).
false.
Failing an occurs check seems to mean that the resolution of an
expression remains stuck in an infinite loop. Just as the formalized
Prolog determines that there is a cycle in the directed graph of the
evaluation sequence of LP, the simple English shows that the Liar
Paradox never gets to the point. It has merely been semantically unsound
all these years.
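The "cycle in the directed graph of the evaluation sequence" reading above can be sketched as follows. This is a hedged illustration under my own assumptions; the names `evaluation_target` and `has_evaluation_cycle` are mine, not from the paper. Evaluating LP requires the truth value of the expression it is defined as, which leads straight back to LP.

```python
# Each defined name maps to the expression it abbreviates.
# LP := ~True(LP) becomes: "LP" -> ("not", ("true", "LP")).

def evaluation_target(expr):
    """The name whose truth value must be known before `expr` can be evaluated."""
    if isinstance(expr, tuple):        # not(X) or true(X): look inside
        return evaluation_target(expr[1])
    return expr                        # a bare name to look up

def has_evaluation_cycle(name, defs):
    """Walk the evaluation edges; report True if we revisit a name."""
    seen = set()
    while name in defs:
        if name in seen:
            return True                # stuck in a loop
        seen.add(name)
        name = evaluation_target(defs[name])
    return False

definitions = {"LP": ("not", ("true", "LP"))}
print(has_evaluation_cycle("LP", definitions))  # True
```

A definition that bottoms out in something other than itself would terminate the walk and report no cycle.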
Tristan Wibberley wrote on 26.11.2025 at 21.46:
[ .... ]
Why do you keep posting that?
In order to distract from the facts he doesn't want to face.
olcott wrote on 26.11.2025 at 17.27:
[ .... ]
All of that is irrelevant to your lie about where the proof is.
That you chose to lie about it and then to distract is a strong
indication that you have no proof but prefer to lie otherwise.
On 11/27/2025 2:20 AM, Mikko wrote:
[ .... ]
All of that is irrelevant to your lie about where the proof is.
That you chose to lie about it and then to distract is a strong
indication that you have no proof but prefer to lie otherwise.
Formalized in Olcott's Minimal Type Theory
LP := ~True(LP) // LP {is defined as} ~True(LP)
that expands to ~True(~True(~True(~True(~True(~True(...))))))
https://philarchive.org/archive/PETMTT-4v2
The Liar Paradox fails because it specifies
infinite recursion.
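The claimed expansion can be sketched mechanically. This is a hedged illustration assuming plain textual substitution of the definition LP := ~True(LP); the helper `expand` is mine, not from the paper. Each step only adds another ~True(...) layer, and LP is never eliminated.

```python
def expand(term, steps):
    """Substitute the definition LP := ~True(LP) into `term`, `steps` times."""
    for _ in range(steps):
        term = term.replace("LP", "~True(LP)")
    return term

print(expand("LP", 3))           # ~True(~True(~True(LP)))
print("LP" in expand("LP", 50))  # True: no finite number of steps grounds it
```

Because the residual LP reappears after every substitution, the expansion never reaches a sentence that can be assigned a truth value.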
olcott wrote on 27.11.2025 at 17.49:
[ .... ]
Formalized in Olcott's Minimal Type Theory
LP := ~True(LP) // LP {is defined as} ~True(LP)
that expands to ~True(~True(~True(~True(~True(~True(...))))))
https://philarchive.org/archive/PETMTT-4v2
The Liar Paradox fails because it specifies
infinite recursion.
Irrelevant to your lie about where the proof is. That you chose to lie
about it and then to distract is a strong indication that you have no
proof but prefer to lie otherwise.
On 11/28/2025 2:45 AM, Mikko wrote:
[ .... ]
Irrelevant to your lie about where the proof is. That you chose to lie
about it and then to distract is a strong indication that you have no
proof but prefer to lie otherwise.
In other words you are simply ignoring this:
~True(~True(~True(~True(~True(~True(...))))))
On 26/11/2025 20:07, olcott wrote:
On 11/26/2025 1:46 PM, Tristan Wibberley wrote:
[ .... ]
Until it is accepted or someone correctly shows
how it does not once-and-for-all resolve
the Liar Paradox I will keep posting it.
Can you show how it /does/ ?