IMHO, the need for a common name for IEEE binary128 has existed for
quite some time. For IEEE binary256 the real need hasn't emerged yet,
but it will, hopefully in the near future.
On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:
> IMHO, the need for a common name for IEEE binary128 has existed for
> quite some time. For IEEE binary256 the real need hasn't emerged yet,
> but it will, hopefully in the near future.
A thought: the main advantage of binary types over decimal is supposed
to be speed. Once you get up to larger precisions like that, the speed
advantage becomes less clear, particularly since hardware support
doesn’t seem forthcoming any time soon. There are already
variable-precision decimal floating-point libraries available. And
with such calculations, C no longer offers a great performance
advantage over a higher-level language, so you might as well use the
higher-level language.
<https://docs.python.org/3/library/decimal.html>

> And with such calculations, C no longer offers a great performance
> advantage over a higher-level language, so you might as well use the
> higher-level language.

(And at the risk of incurring Richard's wrath, I would suggest
C++ is an even better language choice in such cases.)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
> A thought: the main advantage of binary types over decimal is
> supposed to be speed. Once you get up to larger precisions like that,
> the speed advantage becomes less clear [...] And with such
> calculations, C no longer offers a great performance advantage over a
> higher-level language, so you might as well use the higher-level
> language.
When working with such (low for me) precisions, dynamic allocation of
memory is a major cost item, frequently more important than the
calculation itself. To avoid this cost one needs stack allocation.
That is one reason to make such types built-in: in that case the
compiler presumably knows about them and can better manage allocation
and copies.
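
For illustration, a minimal sketch of that point (fp256 and its
operations are invented names, not from any real library): a
fixed-width value can be a plain struct, copied by value, with no
allocator involved.

    /* Illustrative sketch: a fixed-width 256-bit unsigned value as a
       stack-friendly struct.  A typical bignum library would instead
       heap-allocate a limb array for every temporary. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint64_t limb[4]; } fp256;  /* lives on the stack */

    static fp256 fp256_add(fp256 a, fp256 b) {   /* no malloc/free */
        fp256 r;
        unsigned carry = 0;
        for (int i = 0; i < 4; i++) {
            uint64_t s = a.limb[i] + b.limb[i] + carry;
            carry = (s < a.limb[i]) || (carry && s == a.limb[i]);
            r.limb[i] = s;
        }
        return r;
    }

    int main(void) {
        fp256 x = { { UINT64_MAX, 0, 0, 0 } }, y = { { 1, 0, 0, 0 } };
        fp256 z = fp256_add(x, y);               /* carry into limb 1 */
        printf("%llu %llu\n", (unsigned long long)z.limb[0],
                              (unsigned long long)z.limb[1]);
        return 0;
    }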
Also, when using a binary underlying representation, decimal rounding
is much more expensive than binary rounding, so with such a
representation the cost of decimal computation is significantly
higher. Without hardware support, making the representation itself
decimal makes computations at all sizes much more expensive.
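
A small sketch of why, assuming the significand sits in an ordinary
binary machine word: rounding to a multiple of a power of two is an
add and a mask, while rounding to a multiple of a power of ten needs
integer division.

    /* Sketch: rounding a binary-represented value x to the nearest
       multiple of 2^k (k >= 1) versus the nearest multiple of 10^n
       (passed in as p10).  Halfway cases round up, to keep it short. */
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t round_pow2(uint64_t x, unsigned k) {
        return (x + (UINT64_C(1) << (k - 1))) & ~((UINT64_C(1) << k) - 1);
    }

    static uint64_t round_pow10(uint64_t x, uint64_t p10) {
        return (x + p10 / 2) / p10 * p10;   /* division: far slower */
    }

    int main(void) {
        printf("%llu\n", (unsigned long long)round_pow2(1000003, 8));
        printf("%llu\n", (unsigned long long)round_pow10(1000003, 100));
        return 0;
    }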
Floating-point computations are naturally approximate. In most
cases the exact details of rounding do not matter much. Basically,
the round-to-even rule gives somewhat better error propagation,
and people want a fixed rule so as to get reproducible results.
But insisting on decimal rounding is normally not needed. To put
it differently, decimal floating point is a marketing stunt by
IBM. Bored coders may write decimal software libraries for various
languages, but that does not mean such libraries make much sense.
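
The ties-to-even behaviour mentioned above can be observed directly in
C at integer granularity, with rint() under the default
round-to-nearest mode (a minimal sketch):

    /* rint() rounds to the nearest integer in the current rounding
       mode; in the default to-nearest mode, halfway cases go to the
       even neighbour. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        printf("%.1f %.1f %.1f %.1f\n",
               rint(0.5), rint(1.5), rint(2.5), rint(3.5));
        /* prints: 0.0 2.0 2.0 4.0 -- ties go to the even integer */
        return 0;
    }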
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
> A thought: the main advantage of binary types over decimal is
> supposed to be speed. [...] And with such calculations, C no longer
> offers a great performance advantage over a higher-level language,
> so you might as well use the higher-level language.
I think there's an implicit assumption that, all else being equal,
decimal is better than binary. That's true in some contexts,
but not in all.

If you're performing calculations on physical quantities, decimal
probably has no particular advantages, and binary is likely to be
more efficient in both time and space.

The advantages of decimal show up if you're formatting a *lot*
of numbers in human-readable form (but nobody has time to read a
billion numbers), or if you're working with money. But for financial
calculations, particularly compound interest, there are likely to
be precise regulations about how to round results. A given decimal
floating-point format might or might not satisfy those regulations.
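
For illustration, one common way to satisfy such a regulation without
decimal floating point is scaled integers with one explicit,
documented rounding step (a sketch; the half-up rule and basis-point
rate here are just examples, not any particular regulation):

    /* Sketch: money as integer cents, with interest applied in exact
       integer arithmetic and one explicit rounding step (round half
       up).  int64_t is assumed wide enough for the products. */
    #include <stdint.h>
    #include <stdio.h>

    /* Apply a rate given in basis points (1/100 of a percent). */
    static int64_t add_interest(int64_t cents, int64_t rate_bp) {
        int64_t num = cents * rate_bp;        /* exact product */
        return cents + (num + 5000) / 10000;  /* round half up */
    }

    int main(void) {
        int64_t balance = 1000000;            /* $10,000.00 */
        balance = add_interest(balance, 525); /* 5.25% */
        printf("$%lld.%02lld\n", (long long)(balance / 100),
                                 (long long)(balance % 100));
        return 0;
    }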
On Thu, 26 Jun 2025 12:31:32 -0700
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
> Lawrence D'Oliveiro <ldo@nz.invalid> writes:
> > A thought: the main advantage of binary types over decimal is
> > supposed to be speed. [...] so you might as well use the
> > higher-level language.
>
> I think there's an implicit assumption that, all else being equal,
> decimal is better than binary. That's true in some contexts,
> but not in all.
My implicit assumption is that, other things being equal, binary is
better than anything else, because it has the lowest variation in the
ULP-to-value ratio (the ratio varies by at most a factor of 2 between
consecutive powers of the radix, versus 10 for decimal or 16 for hex).
The fact that, other things being equal, binary FP also tends to be
faster is a nice secondary advantage. For example, it is easy to
imagine hardware that implements S/360-style hex floating point as
fast as or a little faster than binary FP, but its numeric properties
are much worse than those of sane implementations of binary FP.
> To put it differently, decimal floating point is a marketing stunt
> by IBM.
But not all decimal floating point implementations used "hex floating point".
Burroughs medium systems had BCD floating point - one of the advantages
was that it could exactly represent any floating point number that
could be specified with a 100 digit mantissa and a 2 digit exponent.
On 26/06/2025 10:01, Lawrence D'Oliveiro wrote:
> C no longer offers a great performance advantage over a higher-level
> language, so you might as well use the higher-level language.

Nothing is stopping you, but then comp.lang.c no longer offers you the
facility to discuss your chosen language, so you might as well use the
higher-level language's group.
[...]if C is going to become more suitable for such high-
precision calculations, it might need to become more Python-like.
scott@slp53.sl.home (Scott Lurndal) writes:
[...]
> But not all decimal floating point implementations used "hex
> floating point".
> Burroughs medium systems had BCD floating point - one of the
> advantages was that it could exactly represent any floating point
> number that could be specified with a 100 digit mantissa and a 2
> digit exponent.
BCD uses 4 bits to represent values from 0 to 9; since a decimal digit
carries only log2(10), about 3.32, bits of information, that's roughly
83% efficient relative to pure binary. (And it still can't represent
1/3.)
[...]
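
For reference, the packing in question is classic packed BCD, two
digits per byte (a minimal sketch):

    /* Packed BCD: each nibble holds one decimal digit 0-9, so a byte
       holds two digits.  16 possible nibble values, only 10 used --
       hence the ~83% storage efficiency relative to pure binary. */
    #include <stdio.h>

    static unsigned char bcd_pack(unsigned hi, unsigned lo) {
        return (unsigned char)((hi << 4) | lo);
    }

    int main(void) {
        unsigned char b = bcd_pack(4, 2);   /* digits 4 and 2 */
        printf("byte 0x%02X -> digits %u%u\n", b, b >> 4, b & 0x0F);
        return 0;
    }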
On 27/06/2025 01:39, Lawrence D'Oliveiro wrote:
> [...]if C is going to become more suitable for such high-precision
> calculations, it might need to become more Python-like.
C is not in search of a reason to exist. If you want Python, you know
where to find it.
On Thu, 26 Jun 2025 12:51:19 -0000 (UTC), Waldek Hebisch wrote:
> When working with such (low for me) precisions, dynamic allocation
> of memory is a major cost item, frequently more important than the
> calculation itself. To avoid this cost one needs stack allocation.

What you may not realize is that, on current machines, there is about
a 100:1 speed difference between accessing CPU registers and accessing
main memory. Whether that main memory access is doing “stack
allocation” or “heap allocation” is going to make very little
difference to this.

> Also, when using a binary underlying representation, decimal
> rounding is much more expensive than binary rounding, so with such a
> representation the cost of decimal computation is significantly
> higher.

This may take more computation, but if the calculation time is
dominated by memory access time to all those digits, how much
difference is that going to make, really?

> Floating-point computations are naturally approximate. In most
> cases the exact details of rounding do not matter much.

It often surprises you when they do. That’s why a handy rule of thumb
is to test your calculation with all four IEEE 754 rounding modes, to
ensure that the variation in the result remains minor. If it doesn’t
... then watch out.

> To put it differently, decimal floating point is a marketing stunt
> by IBM.

Not sure IBM has any marketing power left to inflict their own ideas
on the computing industry. Decimal calculations just make sense
because the results are less surprising to normal people.
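
That rule of thumb is straightforward to apply in C (a minimal sketch,
assuming a hosted implementation that defines all four optional
rounding-mode macros from <fenv.h>; the loop is just a stand-in for a
real calculation):

    /* Re-run a computation under all four IEEE 754 rounding modes and
       compare the results.  FE_UPWARD etc. are optional macros, so a
       portable program would guard them with #ifdef. */
    #include <fenv.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON

    static double computation(void) {
        double sum = 0.0;
        for (int i = 1; i <= 1000000; i++)
            sum += 1.0 / i;      /* stand-in for the real calculation */
        return sum;
    }

    int main(void) {
        const int   mode[] = { FE_TONEAREST, FE_UPWARD,
                               FE_DOWNWARD, FE_TOWARDZERO };
        const char *name[] = { "to nearest", "upward",
                               "downward", "toward zero" };
        for (int m = 0; m < 4; m++) {
            fesetround(mode[m]);
            printf("%-12s %.17g\n", name[m], computation());
        }
        fesetround(FE_TONEAREST);   /* restore the default */
        return 0;
    }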
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
> On Thu, 26 Jun 2025 12:51:19 -0000 (UTC), Waldek Hebisch wrote:
> > When working with such (low for me) precisions, dynamic allocation
> > of memory is a major cost item [...]
>
> What you may not realize is that, on current machines, there is
> about a 100:1 speed difference between accessing CPU registers and
> accessing main memory. Whether that main memory access is doing
> “stack allocation” or “heap allocation” is going to make very little
> difference to this.

Did you measure things? The CPU has caches, and cache-friendly code
makes a difference. Avoiding dynamic allocation helps, and that is
measurable. The rational explanation is that stack-allocated things
do not move and have close to zero management cost. Moving stuff
leads to cache misses.
Michael S <already5chosen@yahoo.com> writes:
> On Thu, 26 Jun 2025 12:31:32 -0700
> Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
> > I think there's an implicit assumption that, all else being equal,
> > decimal is better than binary. That's true in some contexts,
> > but not in all.
>
> My implicit assumption is that, other things being equal, binary is
> better than anything else, because it has the lowest variation in
> the ULP-to-value ratio. [...] It is easy to imagine hardware that
> implements S/360-style hex floating point as fast as or a little
> faster than binary FP, but its numeric properties are much worse
> than those of sane implementations of binary FP.

But not all decimal floating point implementations used "hex floating
point". Burroughs medium systems had BCD floating point - one of the
advantages was that it could exactly represent any floating point
number that could be specified with a 100 digit mantissa and a 2
digit exponent.
This was a memory-to-memory architecture, so there were no floating
point registers to worry about.

For financial calculations, a fixed-point format (up to 100 digits)
was used. With an implicit decimal point, rounding was a matter of
where that implicit point was located in the up-to-100-digit field;
so you would do your calculations in mills and truncate the result
field to the desired precision.
On Thu, 26 Jun 2025 21:09:37 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
[..]
For fixed point, anything "decimal" is even less useful than in
floating point. I can't find any good explanation for the use of
"decimal" in some early computers, except that their designers were,
maybe, good engineers but second-rate thinkers.
On 27.06.2025 02:10, Keith Thompson wrote:
> scott@slp53.sl.home (Scott Lurndal) writes:
> > Burroughs medium systems had BCD floating point [...]
>
> BCD uses 4 bits to represent values from 0 to 9; since a decimal
> digit carries only log2(10), about 3.32, bits of information, that's
> roughly 83% efficient relative to pure binary. (And it still can't
> represent 1/3.)
That's a problem of where your numbers stem from. "1/3" is a formula!
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
> On 27.06.2025 02:10, Keith Thompson wrote:
> > BCD uses 4 bits to represent values from 0 to 9. [...] (And it
> > still can't represent 1/3.)
>
> That's a problem of where your numbers stem from. "1/3" is a
> formula!
1/3 is also a C expression with the value 0. But what I was
referring to was the real number 1/3, the unique real number that
yields one when multiplied by three.
My point is that any choice of radix in a floating-point format
means that there are going to be some useful real numbers you
can't represent.
That's as true of decimal as it is of binary.
(Trinary can represent 1/3, but can't represent 1/2.)
Decimal can represent any number that can be exactly represented in
binary *if* you have enough digits (because 10 is a multiple of 2),
and many numbers like 0.1 that can't be represented exactly in
binary, but at a cost -- a cost that is worth paying in some contexts.
(Scaled integers might sometimes be a good alternative.)
I doubt that I'm saying anything you don't already know. I just
wanted to clarify what I meant.
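
Both halves of that claim are easy to demonstrate (a minimal sketch;
printing with enough digits exposes the exact value a double actually
stores):

    /* Every binary fraction is exact in decimal, but not vice versa. */
    #include <stdio.h>

    int main(void) {
        printf("0.25 = %.55f\n", 0.25); /* exact: 2^-2 terminates in decimal */
        printf("0.1  = %.55f\n", 0.1);  /* nearest double, not exactly 1/10 */
        return 0;
    }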
[ Some technical troubles - in case this post already appeared 30
minutes ago (I don't see it), please ignore this re-sent post. ]
On 28.06.2025 02:56, Keith Thompson wrote:
> Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
> > That's a problem of where your numbers stem from. "1/3" is a
> > formula!
>
> 1/3 is also a C expression with the value 0. But what I was
> referring to was the real number 1/3, the unique real number that
> yields one when multiplied by three.
Yes, sure. That was also how I interpreted it; that you meant (in
"C" parlance) 1.0/3.0.
On 2025-06-28 23:03, Janis Papanagnou wrote:
> On 28.06.2025 02:56, Keith Thompson wrote:
> > 1/3 is also a C expression with the value 0. But what I was
> > referring to was the real number 1/3, the unique real number that
> > yields one when multiplied by three.
>
> Yes, sure. That was also how I interpreted it; that you meant (in
> "C" parlance) 1.0/3.0.
No, it is very much the point that the C expression 1.0/3.0 cannot have
the value he's talking about (except in the unlikely event that
FLT_RADIX is a multiple of 3).
On 28.06.2025 02:56, Keith Thompson wrote:
> 1/3 is also a C expression with the value 0. But what I was
> referring to was the real number 1/3, the unique real number that
> yields one when multiplied by three.
>
> My point is that any choice of radix in a floating-point format
> means that there are going to be some useful real numbers you
> can't represent.
Yes, sure. Sqrt(2.0), for example, or 'pi', or 'e', or your 1.0/3.0
example. These numbers have in common that there's no finite-length
standard representation; you usually represent them as formulas (as
in your example), or in computers as constants in abbreviated form.

In numerics you have various places where errors appear in principle
and accumulate. One error arises when numbers are transferred from
(and to) an external representation. Another arises when performing
calculations with internally imprecisely represented numbers.

The point of decimal encoding is that it addresses the lossless (and
fast[*]) input/output of given [finite] numbers. Numbers that have
been (and are) used e.g. in financial contexts (billions of Euros and
Cents). And you can also perform exact arithmetic in the typical
operations (sum, multiply, subtract)[**] without errors.[***]
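
That exactness can be observed in C on implementations that provide
the optional decimal floating types (a minimal sketch, assuming a
compiler such as GCC on x86-64 that supports _Decimal64 and the DD
literal suffix):

    /* 0.1 + 0.2 is exact in decimal floating point but not in binary.
       _Decimal64 and the DD suffix are optional C23 features. */
    #include <stdio.h>

    int main(void) {
        _Decimal64 d = 0.1DD + 0.2DD;
        double     b = 0.1   + 0.2;
        printf("decimal: 0.1 + 0.2 == 0.3 is %s\n",
               d == 0.3DD ? "true" : "false");   /* true  */
        printf("binary:  0.1 + 0.2 == 0.3 is %s\n",
               b == 0.3   ? "true" : "false");   /* false */
        return 0;
    }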
On Fri, 27 Jun 2025 14:52:42 +0300, Michael S wrote:
> For fixed point, anything "decimal" is even less useful than in
> floating point. I can't find any good explanation for the use of
> "decimal" in some early computers, except that their designers were,
> maybe, good engineers but second-rate thinkers.

IEEE-754 now includes decimal floating-point formats in addition to
the older binary ones. I think this was originally a separate spec
(IEEE-854), but it got rolled into the 2008 revision of IEEE-754.
On 2025-06-26, Richard Heathfield <rjh@cpax.org.uk> wrote:
On 26/06/2025 10:01, Lawrence D'Oliveiro wrote:
> On 26/06/2025 10:01, Lawrence D'Oliveiro wrote:
> > C no longer offers a great performance advantage over a
> > higher-level language, so you might as well use the higher-level
> > language.
>
> Nothing is stopping you, but then comp.lang.c no longer offers
> you the facility to discuss your chosen language, so you might as
> well use the higher-level language's group.
Even a broken clock is right once or twice in a 24h period.

He did say that this advantage was in the manipulation of
multi-precision integers, like big decimals.

Indeed, most of the time is spent in the math routines themselves,
not in what dispatches them. Calculations written in C, using a
certain bignum library, won't be much faster than the same
calculations in a higher-level language using the same bignum
library.

A higher-level language may also have a compiler which does
optimizations on the bignum code, such as CSE and constant folding,
basically treating it the same as fixnum integers.

C code consisting of calls into a bignum library will not be
aggressively optimized. If you wastefully perform a calculation
with constants that could be done at compile time, it almost
certainly won't be.
Example:
  (compile-toplevel '(expt 2 150))
  #<sys:vm-desc: a103620>

  (disassemble *1)
  data:
      0: 1427247692705959881058285969449495136382746624
  syms:
  code:
      0: 10000400 end d0
  instruction count:
  1
  #<sys:vm-desc: a103620>
The compiled code just retrieves the bignum integer result from
static data register d0. This is just from the compiler finding
"expt" in a list of functions that are reducible at compile time
over constant inputs; no special reasoning about large integers.

But if you were to write the C code to initialize a bignum from 2,
and one from 150, and then call the bignum exponentiation routine, I
doubt you'd get the compiler to optimize all that away.
Maybe with a sufficiently advanced link-time optimization ...
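
For comparison, a sketch of the C side, assuming GNU MP as the bignum
library (link with -lgmp); the exponentiation happens at run time, and
you would not normally expect a C compiler to fold it to the 150-bit
constant the way the Lisp compiler above does:

    /* Computing 2**150 with GNU MP (an assumed choice of bignum
       library).  The compiler sees only opaque library calls, so
       nothing here becomes a compile-time constant. */
    #include <gmp.h>
    #include <stdio.h>

    int main(void) {
        mpz_t r;
        mpz_init(r);
        mpz_ui_pow_ui(r, 2, 150);   /* r = 2^150, computed at run time */
        gmp_printf("%Zd\n", r);
        mpz_clear(r);
        return 0;
    }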
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
> On 28.06.2025 02:56, Keith Thompson wrote:
> > 1/3 is also a C expression with the value 0. But what I was
> > referring to was the real number 1/3, the unique real number that
> > yields one when multiplied by three.
>
> Yes, sure. That was also how I interpreted it; that you meant (in
> "C" parlance) 1.0/3.0.
As mentioned elsethread, I was referring to the real value. 1.0/3.0
as a C expression yields a value of type double, typically
0.333333333333333314829616256247390992939472198486328125 or [...]
> In numerics you have various places where errors appear in principle
> and accumulate. [...]
>
> The point of decimal encoding is that it addresses the lossless (and
> fast[*]) input/output of given [finite] numbers. [...] And you can
> also perform exact arithmetic in the typical operations (sum,
> multiply, subtract)[**] without errors.[***]
Which is convenient only because we happen to use decimal notation
when writing numbers.
[...]
> No, it is very much the point that the C expression 1.0/3.0 cannot
> have the value he's talking about [...]
In my posts, all values of the form "1.3333..." (or similar) should
of course have been "0.3333..." (or the respective forms), as a
representation of '1/3'. - Sorry for the typos!
On 29.06.2025 05:18, James Kuyper wrote:
> On 2025-06-28 23:03, Janis Papanagnou wrote:
> > Yes, sure. That was also how I interpreted it; that you meant (in
> > "C" parlance) 1.0/3.0.
>
> No, it is very much the point that the C expression 1.0/3.0 cannot
> have the value he's talking about [...]
I was talking about the real value, indicated by the formula '1/3'.
When Keith spoke about that being '0', I refined it to '1.0/3.0' to
address this misunderstanding. (That's all to say here about that.)
On Fri, 27 Jun 2025 02:40:58 +0100, Richard Heathfield wrote:
> On 27/06/2025 01:39, Lawrence D'Oliveiro wrote:
> > [...]if C is going to become more suitable for such high-
> > precision calculations, it might need to become more Python-like.
>
> C is not in search of a reason to exist.

Not in traditional fixed-precision arithmetic, anyway -- at least
after it fully embraced IEEE 754.

With higher-precision arithmetic, on the other hand, the traditional
C paradigms may not be so suitable.
On 15/07/2025 20:41, Lawrence D'Oliveiro wrote:
> Not in traditional fixed-precision arithmetic, anyway -- at least
> after it fully embraced IEEE 754.
>
> With higher-precision arithmetic, on the other hand, the
> traditional C paradigms may not be so suitable.
If you want something else, you know where to find it. There is no
value in eroding the differences in all languages until only one
universal language emerges. Vivat differentia.
On Wed, 16 Jul 2025 03:55:14 +0100, Richard Heathfield wrote:
> If you want something else, you know where to find it. There is no
> value in eroding the differences in all languages until only one
> universal language emerges. Vivat differentia.
You sound as though you don’t want languages copying ideas from each
other.
On 20/07/2025 01:16, Lawrence D'Oliveiro wrote:
> You sound as though you don’t want languages copying ideas from
> each other.
There's nothing wrong with new languages pinching ideas from old
languages - that's creativity and progress, especially when those ideas
are combined in new and interesting ways, and you can keep on adding
those ideas right up until your second reference implementation goes
public.
But going the other way turns a programming language into a
constantly moving target that is impossible for more than a handful
of people to master - the handful in question being those who decide
what's in and what's out. This is bad for programmers' expertise and
bad for the industry.
It's somewhat more complicated than that. IEEE-784 is a
radix-independent standard, otherwise equivalent to IEEE-754.
Huge numbers of systems already use the perfectly reasonable POSIX
epoch, 1970-01-01 00:00:00 UTC. I can think of no good reason to
standardize anything else.
On Sun, 29 Jun 2025 09:23:01 -0400, James Kuyper wrote:
> It's somewhat more complicated than that. IEEE-784 is a
> radix-independent standard, otherwise equivalent to IEEE-754.

Did you mean IEEE-854?
Astronomers count Julian Day Numbers from 4713 BC proleptic Julian.
This was chosen to ensure that all astronomical observations or events
in recorded history have positive dates.