• Re: "A diagram of C23 basic types"

    From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.c on Thu Jun 26 09:01:20 2025
    From Newsgroup: comp.lang.c

    On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:

    IMHO, the need for a common name for IEEE binary128 has existed for
    quite some time. For IEEE binary256 the real need hasn't emerged yet,
    but it will, hopefully in the near future.

    A thought: the main advantage of binary types over decimal is supposed to
    be speed. Once you get up to larger precisions like that, the speed
    advantage becomes less clear, particularly since hardware support doesn’t seem forthcoming any time soon. There are already variable-precision
    decimal floating-point libraries available. And with such calculations, C
    no longer offers a great performance advantage over a higher-level
    language, so you might as well use the higher-level language.

    <https://docs.python.org/3/library/decimal.html>
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.lang.c on Thu Jun 26 13:27:29 2025
    From Newsgroup: comp.lang.c

    On 26/06/2025 10:01, Lawrence D'Oliveiro wrote:
    C
    no longer offers a great performance advantage over a higher-level
    language, so you might as well use the higher-level language.

    Nothing is stopping you, but then comp.lang.c no longer offers
    you the facility to discuss your chosen language, so you might as
    well use the higher-level language's group.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.c on Thu Jun 26 12:51:19 2025
    From Newsgroup: comp.lang.c

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:

    IMHO, the need for a common name for IEEE binary128 has existed for
    quite some time. For IEEE binary256 the real need hasn't emerged yet,
    but it will, hopefully in the near future.

    A thought: the main advantage of binary types over decimal is supposed to
    be speed. Once you get up to larger precisions like that, the speed advantage becomes less clear, particularly since hardware support doesn’t seem forthcoming any time soon. There are already variable-precision
    decimal floating-point libraries available. And with such calculations, C
    no longer offers a great performance advantage over a higher-level
    language, so you might as well use the higher-level language.

    <https://docs.python.org/3/library/decimal.html>

    When working with such (low for me) precisions, dynamic allocation
    of memory is a major cost item, frequently more important than the
    calculation itself. To avoid this cost one needs stack allocation.
    That is one reason to make such types built-in: in that case the
    compiler presumably knows about them and can better manage
    allocation and copies.

    Also, when using a binary underlying representation, decimal rounding
    is much more expensive than binary rounding, so with such a
    representation the cost of decimal computation is significantly
    higher. Without hardware support, a decimal representation makes
    computations much more expensive at all sizes.

    Floating point computations are naturally approximate. In most
    cases the exact details of rounding do not matter much. Essentially,
    with the round-to-even rule one gets somewhat better error
    propagation, and people want a fixed rule to get reproducible
    results. But insisting on decimal rounding is normally not needed.
    To put it differently, decimal floating point is a marketing stunt
    by IBM. Bored coders may write decimal software libraries for
    various languages, but that does not mean such libraries make much
    sense.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Thu Jun 26 15:57:04 2025
    From Newsgroup: comp.lang.c

    On 26/06/2025 11:01, Lawrence D'Oliveiro wrote:
    On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:

    IMHO, the need for a common name for IEEE binary128 has existed for
    quite some time. For IEEE binary256 the real need hasn't emerged yet,
    but it will, hopefully in the near future.

    A thought: the main advantage of binary types over decimal is supposed to
    be speed. Once you get up to larger precisions like that, the speed
    advantage becomes less clear, particularly since hardware support doesn’t seem forthcoming any time soon. There are already variable-precision
    decimal floating-point libraries available. And with such calculations, C
    no longer offers a great performance advantage over a higher-level
    language, so you might as well use the higher-level language.

    <https://docs.python.org/3/library/decimal.html>

    That is certainly a valid viewpoint. Much of this depends on what you
    are doing, how big the types are, what you are doing with them, how much
    of your code is calculations, and what other things you are doing.

    If you are doing lots of calculations with big numbers of various sizes,
    then Python code using numpy will often be faster than writing C code
    directly - you can concentrate on writing better algorithms instead of
    all the low-level memory management and bureaucracy you have in a lower
    level language. (Of course the hard work in libraries like numpy is
    done in code written in C, Fortran, C++, or other low-level languages.)

    But if you are using a type that is small enough to fit sensibly on the
    stack, and to have a fixed size (rather than arbitrary sized number
    types), then it is likely to be more efficient to define them as structs
    in C and use them directly. Depending on what you are doing with them,
    you might be better off using decimal-based types rather than
    binary-based types. (And at the risk of incurring Richard's wrath, I
    would suggest C++ is an even better language choice in such cases.)

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.lang.c on Thu Jun 26 16:10:55 2025
    From Newsgroup: comp.lang.c

    On 26/06/2025 14:57, David Brown wrote:
    (And at the risk of incurring Richard's wrath, I would suggest
    C++ is an even better language choice in such cases.)

    As you know, David, I hate to agree with you...
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    ...but operator overloading for the win.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Thu Jun 26 12:31:32 2025
    From Newsgroup: comp.lang.c

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:
    IMHO, the need for a common name for IEEE binary128 has existed for
    quite some time. For IEEE binary256 the real need hasn't emerged yet,
    but it will, hopefully in the near future.

    A thought: the main advantage of binary types over decimal is supposed to
    be speed. Once you get up to larger precisions like that, the speed advantage becomes less clear, particularly since hardware support doesn’t seem forthcoming any time soon. There are already variable-precision
    decimal floating-point libraries available. And with such calculations, C
    no longer offers a great performance advantage over a higher-level
    language, so you might as well use the higher-level language.

    <https://docs.python.org/3/library/decimal.html>

    I think there's an implicit assumption that, all else being equal,
    decimal is better than binary. That's true in some contexts,
    but not in all.

    If you're performing calculations on physical quantities, decimal
    probably has no particular advantages, and binary is likely to be
    more efficient in both time and space.

    The advantages of decimal show up if you're formatting a *lot*
    of numbers in human-readable form (but nobody has time to read a
    billion numbers), or if you're working with money. But for financial
    calculations, particularly compound interest, there are likely to
    be precise regulations about how to round results. A given decimal
    floating-point format might or might not satisfy those regulations.
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.lang.c on Thu Jun 26 13:23:34 2025
    From Newsgroup: comp.lang.c

    On 6/26/2025 5:51 AM, Waldek Hebisch wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:

    IMHO, the need for a common name for IEEE binary128 has existed for
    quite some time. For IEEE binary256 the real need hasn't emerged yet,
    but it will, hopefully in the near future.

    A thought: the main advantage of binary types over decimal is supposed to
    be speed. Once you get up to larger precisions like that, the speed
    advantage becomes less clear, particularly since hardware support doesn’t seem forthcoming any time soon. There are already variable-precision
    decimal floating-point libraries available. And with such calculations, C
    no longer offers a great performance advantage over a higher-level
    language, so you might as well use the higher-level language.

    <https://docs.python.org/3/library/decimal.html>

    When working with such (low for me) precisions, dynamic allocation
    of memory is a major cost item, frequently more important than the
    calculation itself. To avoid this cost one needs stack allocation.
    That is one reason to make such types built-in: in that case the
    compiler presumably knows about them and can better manage
    allocation and copies.

    Speaking of stack allocation... FWIW, here is an older stack-based
    region allocator of mine:

    https://pastebin.com/raw/f37a23918



    Also, when using a binary underlying representation, decimal rounding
    is much more expensive than binary rounding, so with such a
    representation the cost of decimal computation is significantly
    higher. Without hardware support, a decimal representation makes
    computations much more expensive at all sizes.

    Floating point computations are naturally approximate. In most
    cases the exact details of rounding do not matter much. Essentially,
    with the round-to-even rule one gets somewhat better error
    propagation, and people want a fixed rule to get reproducible
    results. But insisting on decimal rounding is normally not needed.
    To put it differently, decimal floating point is a marketing stunt
    by IBM. Bored coders may write decimal software libraries for
    various languages, but that does not mean such libraries make much
    sense.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Thu Jun 26 23:59:16 2025
    From Newsgroup: comp.lang.c

    On Thu, 26 Jun 2025 12:31:32 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:
    IMHO, the need for a common name for IEEE binary128 has existed for
    quite some time. For IEEE binary256 the real need hasn't emerged
    yet, but it will, hopefully in the near future.

    A thought: the main advantage of binary types over decimal is
    supposed to be speed. Once you get up to larger precisions like
    that, the speed advantage becomes less clear, particularly since
    hardware support doesn’t seem forthcoming any time soon. There are already variable-precision decimal floating-point libraries
    available. And with such calculations, C no longer offers a great performance advantage over a higher-level language, so you might as
    well use the higher-level language.

    <https://docs.python.org/3/library/decimal.html>

    I think there's an implicit assumption that, all else being equal,
    decimal is better than binary. That's true in some contexts,
    but not in all.

    My implicit assumption is that, other things being equal, binary is
    better than anything else because it has the lowest variation in the
    ULP-to-value ratio.
    The fact that, other things being equal, binary fp also tends to be
    faster is a nice secondary advantage. For example, it is easy to
    imagine hardware that implements S/360-style hex floating point as fast
    as or a little faster than binary fp, but its numeric properties are
    much worse than those of sane implementations of binary fp.
    Of course, historically there existed bad implementations of binary fp
    as well, most notably on many CDC machines. But they have been dead
    for eons.
    If you're performing calculations on physical quantities, decimal
    probably has no particular advantages, and binary is likely to be
    more efficient in both time and space.

    The advantages of decimal show up if you're formatting a *lot*
    of numbers in human-readable form (but nobody has time to read a
    billion numbers), or if you're working with money. But for financial
    calculations, particularly compound interest, there are likely to
    be precise regulations about how to round results. A given decimal
    floating-point format might or might not satisfy those regulations.

    Exactly.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.lang.c on Thu Jun 26 21:09:37 2025
    From Newsgroup: comp.lang.c

    Michael S <already5chosen@yahoo.com> writes:
    On Thu, 26 Jun 2025 12:31:32 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:
    IMHO, the need for a common name for IEEE binary128 has existed for
    quite some time. For IEEE binary256 the real need hasn't emerged
    yet, but it will, hopefully in the near future.

    A thought: the main advantage of binary types over decimal is
    supposed to be speed. Once you get up to larger precisions like
    that, the speed advantage becomes less clear, particularly since
    hardware support doesn’t seem forthcoming any time soon. There are
    already variable-precision decimal floating-point libraries
    available. And with such calculations, C no longer offers a great
    performance advantage over a higher-level language, so you might as
    well use the higher-level language.

    <https://docs.python.org/3/library/decimal.html>
    I think there's an implicit assumption that, all else being equal,
    decimal is better than binary. That's true in some contexts,
    but not in all.

    My implicit assumption is that, other things being equal, binary is
    better than anything else because it has the lowest variation in the
    ULP-to-value ratio.
    The fact that, other things being equal, binary fp also tends to be
    faster is a nice secondary advantage. For example, it is easy to
    imagine hardware that implements S/360-style hex floating point as fast
    as or a little faster than binary fp, but its numeric properties are
    much worse than those of sane implementations of binary fp.

    But not all decimal floating point implementations used "hex floating point".

    Burroughs medium systems had BCD floating point - one of the advantages
    was that it could exactly represent any floating point number that
    could be specified with a 100 digit mantissa and a 2 digit exponent.

    This was a memory-to-memory architecture, so no floating point registers
    to worry about.

    For financial calculations, a fixed point format (up to 100 digits) was
    used. Using an implicit decimal point, rounding was a matter of where
    the implicit decimal point was located in the up to 100 digit field;
    so do your calculations in mills and truncate the result field to the
    desired precision.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.c on Thu Jun 26 23:58:39 2025
    From Newsgroup: comp.lang.c

    On Thu, 26 Jun 2025 12:51:19 -0000 (UTC), Waldek Hebisch wrote:

    When working with such (low for me) precisions, dynamic allocation of
    memory is a major cost item, frequently more important than the
    calculation itself. To avoid this cost one needs stack allocation.

    What you may not realize is that, on current machines, there is about a
    100:1 speed difference between accessing CPU registers and accessing main memory.

    Whether that main memory access is doing “stack allocation” or “heap allocation” is going to make very little difference to this.

    Also, when using a binary underlying representation, decimal rounding
    is much more expensive than binary rounding, so with such a
    representation the cost of decimal computation is significantly higher.

    This may take more computation, but if the calculation time is dominated
    by memory access time to all those digits, how much difference is that
    going to make, really?

    Floating point computations are naturally approximate. In most cases
    the exact details of rounding do not matter much.

    It often surprises you when they do. That’s why a handy rule of thumb is
    to test your calculation with all four IEEE 754 rounding modes, to ensure
    that the variation in the result remains minor. If it doesn’t ... then
    watch out.

    To put it differently, decimal floating point is a marketing stunt by
    IBM.

    Not sure IBM has any marketing power left to inflict their own ideas on
    the computing industry. Decimal calculations just make sense because the results are less surprising to normal people.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Thu Jun 26 17:10:48 2025
    From Newsgroup: comp.lang.c

    scott@slp53.sl.home (Scott Lurndal) writes:
    [...]
    But not all decimal floating point implementations used "hex floating point".

    Burroughs medium systems had BCD floating point - one of the advantages
    was that it could exactly represent any floating point number that
    could be specified with a 100 digit mantissa and a 2 digit exponent.

    BCD uses 4 bits to represent values from 0 to 9. That's about 83%
    efficient relative to pure binary. (And it still can't represent 1/3.)

    Another option (I think IBM has implemented this) is to use 10 bits
    to represent values from 0 to 999, taking advantage of the nice
    coincidence that 2**10 is barely bigger than 10**3. That's more
    than 99.6% efficient relative to pure binary. Of course it's still
    more complicated to implement.

    [...]
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.c on Fri Jun 27 00:39:29 2025
    From Newsgroup: comp.lang.c

    On Thu, 26 Jun 2025 13:27:29 +0100, Richard Heathfield wrote:

    On 26/06/2025 10:01, Lawrence D'Oliveiro wrote:

    C no longer offers a great performance advantage over a higher-level
    language, so you might as well use the higher-level language.

    Nothing is stopping you, but then comp.lang.c no longer offers you the facility to discuss your chosen language, so you might as well use the higher-level language's group.

    Or conversely, if C is going to become more suitable for such high-
    precision calculations, it might need to become more Python-like.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.lang.c on Fri Jun 27 02:40:58 2025
    From Newsgroup: comp.lang.c

    On 27/06/2025 01:39, Lawrence D'Oliveiro wrote:
    [...]if C is going to become more suitable for such high-
    precision calculations, it might need to become more Python-like.

    C is not in search of a reason to exist. If you want Python, you
    know where to find it.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Fri Jun 27 04:33:00 2025
    From Newsgroup: comp.lang.c

    On 27.06.2025 02:10, Keith Thompson wrote:
    scott@slp53.sl.home (Scott Lurndal) writes:
    [...]
    But not all decimal floating point implementations used "hex floating point".

    Burroughs medium systems had BCD floating point - one of the advantages
    was that it could exactly represent any floating point number that
    could be specified with a 100 digit mantissa and a 2 digit exponent.

    BCD uses 4 bits to represent values from 0 to 9. That's about 83%
    efficient relative to pure binary. (And it still can't represent 1/3.)

    That's a problem of where your numbers stem from. "1/3" is a formula!
    A standard representation for a number may be "0.33" or "0.33333333",
    defined through the human-machine interface as text and representable
    (as depicted) as an "exact number". The result of the formula "1/3"
    isn't representable as a finite decimal string. With a binary
    representation even a _finite_ decimal string might not be exactly
    representable in some cases; I've tried with 0.1 (for example). A
    fixed-point decimal representation computes that exactly, but binary
    does not. I think that is why languages supporting decimal encoding
    have been prevalent especially in the financial sector. (I don't
    know about contemporary financial systems.)

    Janis

    [...]


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.lang.c on Thu Jun 26 19:33:48 2025
    From Newsgroup: comp.lang.c

    On 6/26/2025 6:40 PM, Richard Heathfield wrote:
    On 27/06/2025 01:39, Lawrence D'Oliveiro wrote:
    [...]if C is going to become more suitable for such high-
    precision calculations, it might need to become more Python-like.

    C is not in search of a reason to exist. If you want Python, you know
    where to find it.


    ditto.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.c on Fri Jun 27 03:51:21 2025
    From Newsgroup: comp.lang.c

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Thu, 26 Jun 2025 12:51:19 -0000 (UTC), Waldek Hebisch wrote:

    When working with such (low for me) precisions, dynamic allocation of
    memory is a major cost item, frequently more important than the
    calculation itself. To avoid this cost one needs stack allocation.

    What you may not realize is that, on current machines, there is about a 100:1 speed difference between accessing CPU registers and accessing main memory.

    Whether that main memory access is doing “stack allocation” or “heap allocation” is going to make very little difference to this.

    Did you measure? CPUs have caches, and cache-friendly code
    makes a difference. Avoiding dynamic allocation helps, and that is
    measurable. The rational explanation is that stack-allocated things
    do not move and have close to zero management cost. Moving stuff
    leads to cache misses.

    Also, when using a binary underlying representation, decimal rounding
    is much more expensive than binary rounding, so with such a
    representation the cost of decimal computation is significantly higher.

    This may take more computation, but if the calculation time is dominated
    by memory access time to all those digits, how much difference is that
    going to make, really?

    It makes a lot of difference for cache friendly code.

    Floating point computations are naturally approximate. In most cases
    the exact details of rounding do not matter much.

    It often surprises you when they do. That’s why a handy rule of thumb is to test your calculation with all four IEEE 754 rounding modes, to ensure that the variation in the result remains minor. If it doesn’t ... then watch out.

    To put it differently, decimal floating point is a marketing stunt by
    IBM.

    Not sure IBM has any marketing power left to inflict their own ideas on
    the computing industry. Decimal calculations just make sense because the results are less surprising to normal people.

    Intelligent people quickly realise that floating point arithmetic
    produces approximate results. With binary this realisation comes
    slightly faster, which is a plus for binary. Once you realise that
    you should expect approximate results, the cases where the result
    happens to be exact are the surprising ones.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Fri Jun 27 13:44:25 2025
    From Newsgroup: comp.lang.c

    On 27/06/2025 05:51, Waldek Hebisch wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Thu, 26 Jun 2025 12:51:19 -0000 (UTC), Waldek Hebisch wrote:

    When working with such (low for me) precisions, dynamic allocation of
    memory is a major cost item, frequently more important than the
    calculation itself. To avoid this cost one needs stack allocation.

    What you may not realize is that, on current machines, there is about a
    100:1 speed difference between accessing CPU registers and accessing main
    memory.

    Whether that main memory access is doing “stack allocation” or “heap allocation” is going to make very little difference to this.

    Did you measure? CPUs have caches, and cache-friendly code
    makes a difference. Avoiding dynamic allocation helps, and that is
    measurable. The rational explanation is that stack-allocated things
    do not move and have close to zero management cost. Moving stuff
    leads to cache misses.


    Yes. Main memory accesses are slow - access to memory in caches is a
    lot less slow, but still slower than registers. If you need to use
    dynamic memory, the allocator will have to access a lot of different
    memory locations to figure out where to allocate the memory. Most of
    those will be in cache (assuming you are doing a lot of dynamic
    allocations), but some might not be. And the memory you allocate in the
    end might force more cache allocations and deallocations.

    Stack space (near the top of the stack), on the other hand, is almost
    always in caches. So it is faster to access memory on the stack, as
    well as using far fewer instructions.

    You are of course correct to say that speeds need to be measured, but
    you are also correct that in general, stack data can be significantly
    more efficient than dynamic memory data - especially if that data is short-lived.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Fri Jun 27 14:52:42 2025
    From Newsgroup: comp.lang.c

    On Thu, 26 Jun 2025 21:09:37 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Thu, 26 Jun 2025 12:31:32 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:
    IMHO, the need for a common name for IEEE binary128 has existed
    for quite some time. For IEEE binary256 the real need hasn't
    emerged yet, but it will, hopefully in the near future.

    A thought: the main advantage of binary types over decimal is
    supposed to be speed. Once you get up to larger precisions like
    that, the speed advantage becomes less clear, particularly since
    hardware support doesn’t seem forthcoming any time soon. There are
    already variable-precision decimal floating-point libraries
    available. And with such calculations, C no longer offers a great
    performance advantage over a higher-level language, so you might
    as well use the higher-level language.

    <https://docs.python.org/3/library/decimal.html>
    I think there's an implicit assumption that, all else being equal,
    decimal is better than binary. That's true in some contexts,
    but not in all.

    My implicit assumption is that, other things being equal, binary is
    better than anything else because it has the lowest variation in the
    ULP-to-value ratio.
    The fact that, other things being equal, binary fp also tends to be
    faster is a nice secondary advantage. For example, it is easy to
    imagine hardware that implements S/360-style hex floating point as
    fast as or a little faster than binary fp, but its numeric properties
    are much worse than those of sane implementations of binary fp.

    But not all decimal floating point implementations used "hex floating
    point".


    IBM's Hex floating point is not decimal. It's hex (base 16).

    Burroughs medium systems had BCD floating point - one of the
    advantages was that it could exactly represent any floating point
    number that could be specified with a 100 digit mantissa and a 2
    digit exponent.

    This was a memory-to-memory architecture, so no floating point
    registers to worry about.

    For financial calculations, a fixed point format (up to 100 digits)
    was used. Using an implicit decimal point, rounding was a matter of
    where the implicit decimal point was located in the up to 100 digit
    field; so do your calculations in mills and truncate the result field
    to the desired precision.


    For fixed point, anything "decimal" is even less useful than in
    floating point. I can't find any good explanation for the use of
    "decimal" features in some early computers except that their designers
    were, maybe, good engineers but second-rate thinkers.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.lang.c on Fri Jun 27 14:01:10 2025
    From Newsgroup: comp.lang.c

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Thu, 26 Jun 2025 12:51:19 -0000 (UTC), Waldek Hebisch wrote:

    When working with such (low for me) precisions, dynamic allocation of
    memory is a major cost item, frequently more important than the
    calculation itself. To avoid this cost one needs stack allocation.

    What you may not realize is that, on current machines, there is about a
    100:1 speed difference between accessing CPU registers and accessing
    main memory.

    Depends on whether you're accessing cache (3 or 4 cycle latency for L1),
    and at what cache level. Even a DRAM access can complete in less than
    100 ns.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.lang.c on Fri Jun 27 14:02:54 2025
    From Newsgroup: comp.lang.c

    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
    scott@slp53.sl.home (Scott Lurndal) writes:
    [...]
    But not all decimal floating point implementations used "hex floating point".

    Burroughs medium systems had BCD floating point - one of the advantages
    was that it could exactly represent any floating point number that
    could be specified with a 100 digit mantissa and a 2 digit exponent.

    BCD uses 4 bits to represent values from 0 to 9. That's about 83%
    efficient relative to pure binary. (And it still can't represent 1/3.)

    Ah, but reading a BCD memory dump is a joy compared to a binary system :-)

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Fri Jun 27 20:48:23 2025
    From Newsgroup: comp.lang.c

    On 27.06.2025 13:52, Michael S wrote:
    On Thu, 26 Jun 2025 21:09:37 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:
    [..]

    For fixed point, anything "decimal" is even less useful than in floating
    point. I can't find any good explanation for the use of "decimal" things
    in some early computers [...]

    If not already obvious from the hints given in this thread you can
    search for the respective keywords.

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Fri Jun 27 17:56:33 2025
    From Newsgroup: comp.lang.c

    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 27.06.2025 02:10, Keith Thompson wrote:
    scott@slp53.sl.home (Scott Lurndal) writes:
    [...]
    But not all decimal floating point implementations used "hex floating point".

    Burroughs medium systems had BCD floating point - one of the advantages
    was that it could exactly represent any floating point number that
    could be specified with a 100 digit mantissa and a 2 digit exponent.

    BCD uses 4 bits to represent values from 0 to 9. That's about 83%
    efficient relative to pure binary. (And it still can't represent 1/3.)

    That's a problem of where your numbers stem from. "1/3" is a formula!

    1/3 is also a C expression with the value 0. But what I was
    referring to was the real number 1/3, the unique real number that
    yields one when multiplied by three.

    My point is that any choice of radix in a floating-point format
    means that there are going to be some useful real numbers you
    can't represent. That's as true of decimal as it is of binary.
    (Trinary can represent 1/3, but can't represent 1/2.)

    Decimal can represent any number that can be exactly represented in
    binary *if* you have enough digits (because 10 is multiple of 2),
    and many numbers like 0.1 that can't be represented exactly in
    binary, but at a cost -- that is worth paying in some contexts.
    (Scaled integers might sometimes be a good alternative).

    I doubt that I'm saying anything you don't already know. I just
    wanted to clarify what I meant.

    [...]
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From BGB@cr88192@gmail.com to comp.lang.c on Sat Jun 28 14:16:18 2025
    From Newsgroup: comp.lang.c

    On 6/26/2025 8:40 PM, Richard Heathfield wrote:
    On 27/06/2025 01:39, Lawrence D'Oliveiro wrote:
    [...]if C is going to become more suitable for such high-
    precision calculations, it might need to become more Python-like.

    C is not in search of a reason to exist. If you want Python, you know
    where to find it.


    FWIW, in my compiler I had added support for dynamic/variant types as
    an extension. I don't end up using it all that often, though.

    But, yeah, it could in principle support bigint or bigfloat
    representations through this mechanism. Though, the big downside is
    that they would need to be heap-backed.

    There were two subtypes of the variant type:
    64-bit: More common, uses high bits as type tag.
    Supports 62 bit fixnum and flonum types.
    128-bit: Rare, but can deal directly with larger values.
    Supports 124 bit fixnum and flonum types.


    Though, had also added _BitInt(n) with support for large integer types,
    though divided into categories based on n:
    n= 1.. 64:
    Represented in a normal integer register.
    Memory storage size is one of the power-of-2 sizes.
    Uses native operations;
    May sign or zero extend values as needed.
    n= 65..128:
    Represented in a register pair;
    May use 64 or 128 bit operations as available.
    n=129..256:
    Represented as a pointer to a memory object (32 bytes);
    Uses runtime calls for 256-bit operations;
    ...
    n larger than 256:
    Represented as a pointer to a memory object;
    Size is padded up to the next multiple of 128 bits.
    Uses runtime calls that deal with variable-length values.

    ...

    For large integers, _BitInt(n) would be preferable as it wouldn't need
    to heap-back values (unless they got so large that the compiler
    switches to heap-backed objects). Though, IIRC, the biggest supported
    _BitInt size at present is 16384 bits (2048 bytes), which is below the
    cutoff where the compiler heap-allocates things.


    No support for big floating-point types ("_BitFloat"?), would need to
    come up with some way to deal with them, if they were to be added.

    The exact pattern is unclear, though one likely option is to assume that
    all floating-point types larger than 256 bits have a 31 bit exponent
    (assertion: probably no one is going to need an exponent bigger than this).

    So, say:
    Binary16 : S.E5.M10
    Binary32 : S.E8.M23
    Binary64 : S.E11.M52
    Binary128: S.E15.M112
    Binary256: S.E19.M236
    Binary512: S.E31.M480 (?)
    ...

    Some intermediate sizes, if supported, could be assumed to be a
    truncated form of the next size up. So, for example, if one requests a
    48-bit floating point type, it is assumed to be a Binary64 with
    low-order bits cut off (with the storage size padded up in the same way
    as for _BitInt).


    One downside is, as it exists, there is no way to address types being
    non-power-of-2 in memory. As noted, _BitInt as implemented pads up the
    storage.

    So, say:
    v=*(_BitInt(48) *)ptr1;
    *(_BitInt(48) *)ptr2=v;
    Would be ambiguous, and as-is would just use 64-bit loads and stores.


    It might be desirable in some cases to express a type that is N bits
    in-memory (possibly along with having an explicit endianess).

    Looking it up online, apparently Clang uses ((n+7)/8) bytes of storage
    for _BitInt, rather than padding up the storage. I guess I could
    consider this, though likely only in the case of an explicit pointer
    dereference. Though, this would make a messy corner case for dealing
    with arrays (do arrays and pointer ops use the padded or non-padded
    size?).

    As-is, my compiler would assume the padded-up sizes.


    If it were just up to me, might add some more types:
    _SBitIntLE(n), signed bitint, exact bytes, little endian
    _UBitIntLE(n), unsigned bitint, exact bytes, little endian
    _SBitIntBE(n), signed bitint, exact bytes, big endian
    _UBitIntBE(n), unsigned bitint, exact bytes, big endian

    Where in this case the LE/BE qualifier also implies that it is an
    in-memory format.

    Though, my compiler had also added "__ltlendian" and "__bigendian"
    modifiers, so, it is possible that no new type is added per se, but
    rather that if _BitInt is used with one of:
    __ltlendian, __bigendian, __packed
    It is assumed to use byte-exact storage.

    Well, or just use __packed as the qualifier.

    So, say:
    v=*(_BitInt(48) *)ptr1; //64b load
    *(_BitInt(48) *)ptr2=v; //64b store
    But:
    v=*(__packed _BitInt(48) *)ptr1; //48b (6 byte) load
    *(__packed _BitInt(48) *)ptr2=v; //48b (6 byte) store


    Wouldn't want to default to using byte-exact storage as (at least on my
    ISA and also RISC-V) doing byte-exact loads/stores would have a
    significant performance and code-size penalty. Though, realistically,
    only store may need to be special cased.




    Also, I am left realizing I have a non-zero temptation for an:
    S.E7.F8 16-bit format.

    Ironically, partly because:
    It better fits typical use cases than S.E8.F7;
    The use of an 8-bit mantissa makes it easier to use head math.
    Though... Am I the only person to ever head-math this way?
    Namely, representing some values in-mind as 4 hex digits.
    Though, my mental arithmetic skills still kinda suck, but still.


    Can't find much mention of anyone else using FP or non-decimal systems
    for mental arithmetic, but in this case, having the mantissa as an exact number of hex digits makes it easier (my mental processes for dealing
    with numbers don't like bitfields not being aligned to multiples of 4
    bits). Though, if doing this stuff, often extreme corner cutting was
    needed, as mental math skills suck (easier to turn x*y => x+y-0x3F00 or similar, and just live with the inaccuracy, ...).

    And, for things like divide, like, no, I am not going to try running Newton-Raphson in head-math (also I suck at things like long-division).

    I can also think readily in decimal as well, though digits are at a
    premium (there being a fairly high per-digit cost). I think my mind
    deals with it in a BCD like form (with a mental division partly because
    of the comparably high cost of converting between decimal and
    hexadecimal numbers).


    For "guesstimating" a number (in integer form), can do something
    analogous to figuring out the log-2 of the divisor and then
    right-shifting the left input by this amount. Like, usually if one can
    guess within a power-of-2 of the actual value, often good enough.

    So, not many accurate answers, but yeah.
    Also I recently noticed that if try to mental hexadecimal
    multiplication, my mind has a notably harder time with multiples of 3,
    6, and 7 (to such a point that I had to fall back onto using iterative addition). Then again, I had most often used multiples of a power-of-2
    row (1/2/4/8), so it stands to reason that other rows would be harder.

    Can't seem to find much information about what is normal or standard
    here, beyond just references to the sorts of stuff people typically
    teach in school.

    I was just left thinking about this recently.

    But, then seemingly my own thinking processes seem weird enough that it
    almost seems like I am just making stuff up.

    Though, there is often some amount of "weird analog stuff" in all this
    as well.


    Can note though, that I kinda suck at traditional "symbolic
    manipulation" tasks, things like pretty much everything is routed
    through visual/spatial representations, but even this is non-standard
    (not so much colorful graphics or real-world objects, whole lots more abstracted glyphs and monochrome though; and lots of text).

    It is often more like I am dealing with thoughts like it is all text in
    a text-editor (within in a space of floating windows of text and
    similar). Actually, not too much different than a typical PC desktop,
    just more 3D and with pretty much everything in "dark mode" (and with
    less use of color). Also pretty much all of the text looks like 8x8
    fonts, and imagined objects seem to often have what looks like pixel
    artifacts (like everything is actually at a fairly low resolution as well).

    And, there seems to be one part of my mind that doesn't like dealing
    with more than 8 digits of input (2x 4-digit in, 4 digit out). And, say, keeping a 12 or 16 digit number in this part of working-memory doesn't
    work so well.

    Like, my thinking processes are kinda janky and suck (and have seemingly gotten worse over the years as well).

    ...


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.c on Sat Jun 28 23:59:11 2025
    From Newsgroup: comp.lang.c

    On Fri, 27 Jun 2025 14:52:42 +0300, Michael S wrote:

    For fixed point, anything "decimal" is even less useful than in floating
    point. I can't find any good explanation for the use of "decimal" things
    in some early computers except that their designers were, maybe, good
    engineers, but 2nd-rate thinkers.

    IEEE-754 now includes decimal floating-point formats in addition to the
    older binary ones. I think this was originally a separate spec (IEEE-854),
    but it got rolled into the 2008 revision of IEEE-754.

    Many numeric experts scoffed at IEEE-754 when it first came out,
    particularly the features that reduced the surprise factor for less-expert users. Decimal arithmetic is more of the same.

    Safety-razor syndrome never quite goes away, does it ...
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Sun Jun 29 05:03:30 2025
    From Newsgroup: comp.lang.c

    [ Some technical troubles - in case this post appeared already 30
    minutes ago (I don't see it), please ignore this re-sent post. ]

    On 28.06.2025 02:56, Keith Thompson wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 27.06.2025 02:10, Keith Thompson wrote:
    scott@slp53.sl.home (Scott Lurndal) writes:
    [...]
    But not all decimal floating point implementations used "hex
    floating point".

    Burroughs medium systems had BCD floating point - one of the advantages
    was that it could exactly represent any floating point number that
    could be specified with a 100 digit mantissa and a 2 digit exponent.

    BCD uses 4 bits to represent values from 0 to 9. That's about 83%
    efficient relative to pure binary. (And it still can't represent 1/3.)

    That's a problem of where your numbers stem from. "1/3" is a formula!

    1/3 is also a C expression with the value 0. But what I was
    referring to was the real number 1/3, the unique real number that
    yields one when multiplied by three.

    Yes, sure. That was also how I interpreted it; that you meant (in
    "C" parlance) 1.0/3.0.


    My point is that any choice of radix in a floating-point format
    means that there are going to be some useful real numbers you
    can't represent.

    Yes, sure. Sqrt(2.0) for example, or 'pi', or 'e', or your 1.0/3.0
    example. These numbers have in common that there's no finite length
    standard representation; you usually represent them as formulas (as
    in your example), or in computers as constants in abbreviated form.

    In numerics you have various places where errors appear in principle
    and accumulate. One of the errors is when transferred from (and to)
    external representation. Another one is when performing calculations
    with internally imprecise represented numbers.

    The point with decimal encoding addresses the lossless (and fast[*]) input/output of given [finite] numbers. Numbers that have been (and
    are) used e.g. in financial contexts (Billions of Euros and Cents).
    And you can also perform exact arithmetic in the typical operations
    (sum, multiply, subtract)[**] without errors.[***]

    Nowadays (with 64 bit integer arithmetic)[****] quasi as "standard"
    you could of course also use an integer-based fixed point arithmetic
    to handle large amounts with cent-value precision arithmetics (or
    similar finite numbers of real world entities).

    As an anecdotal add-on: There was once a fraud case where someone
    from the financial sector took all the (positive) sub-cent rounding
    factors from all transactions and accumulated them to transfer them
    to his own account. If you know how much money there's transferred
    you can imagine how fast you could get a multi-millionaire that way.

    [*] But that factor is probably and IMO not that important nowadays.

    [**] When you do statistics with division necessary, or things like
    compounded interest you cannot avoid rounding at some decimal place;
    but that are "local" effects. At this point numerics provides a lot
    more stuff (WRT errors and propagation) that has to be considered.

    [***] Try adding 10 millions of 10 cents values (0.10) in "C" using
    a binary 'float' type; you'll notice a fast drift away from the exact
    sum.
    c=9296503  f=1000000.062500  value reached with fewer terms
    c=10000001 f=1087937.125000  too large value at exact terms

    [****] Processor word sizes not that common, let alone guaranteed,
    in legacy systems.

    That's as true of decimal as it is of binary.
    (Trinary can represent 1/3, but can't represent 1/2.)

    Decimal can represent any number that can be exactly represented in
    binary *if* you have enough digits (because 10 is multiple of 2),
    and many numbers like 0.1 that can't be represented exactly in
    binary, but at a cost -- that is worth paying in some contexts.
    (Scaled integers might sometimes be a good alternative).

    I doubt that I'm saying anything you don't already know. I just
    wanted to clarify what I meant.

    Thanks. Yes.

    Please see my additions above also (mainly) just as clarification,
    especially in the light of some people despising the decimal format
    (and also the folks who invented it back then).

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From James Kuyper@jameskuyper@alumni.caltech.edu to comp.lang.c on Sat Jun 28 23:18:40 2025
    From Newsgroup: comp.lang.c

    On 2025-06-28 23:03, Janis Papanagnou wrote:
    [ Some technical troubles - in case this post appeared already 30
    minutes ago (I don't see it), please ignore this re-sent post. ]

    On 28.06.2025 02:56, Keith Thompson wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 27.06.2025 02:10, Keith Thompson wrote:
    ...
    BCD uses 4 bits to represent values from 0 to 9. That's about 83%
    efficient relative to pure binary. (And it still can't represent 1/3.)

    That's a problem of where your numbers stem from. "1/3" is a formula!

    1/3 is also a C expression with the value 0. But what I was
    referring to was the real number 1/3, the unique real number that
    yields one when multiplied by three.

    Yes, sure. That was also how I interpreted it; that you meant (in
    "C" parlance) 1.0/3.0.

    No, it is very much the point that the C expression 1.0/3.0 cannot have
    the value he's talking about (except in the unlikely event that
    FLT_RADIX is a multiple of 3).
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Sat Jun 28 20:37:57 2025
    From Newsgroup: comp.lang.c

    James Kuyper <jameskuyper@alumni.caltech.edu> writes:
    On 2025-06-28 23:03, Janis Papanagnou wrote:
    [ Some technical troubles - in case this post appeared already 30
    minutes ago (I don't see it), please ignore this re-sent post. ]

    On 28.06.2025 02:56, Keith Thompson wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 27.06.2025 02:10, Keith Thompson wrote:
    ...
    BCD uses 4 bits to represent values from 0 to 9. That's about 83%
    efficient relative to pure binary. (And it still can't represent 1/3.)
    That's a problem of where your numbers stem from. "1/3" is a formula!

    1/3 is also a C expression with the value 0. But what I was
    referring to was the real number 1/3, the unique real number that
    yields one when multiplied by three.

    Yes, sure. That was also how I interpreted it; that you meant (in
    "C" parlance) 1.0/3.0.

    No, it is very much the point that the C expression 1.0/3.0 cannot have
    the value he's talking about (except in the unlikely event that
    FLT_RADIX is a multiple of 3).

    Exactly -- or perhaps I should say precisely.
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Sat Jun 28 20:51:24 2025
    From Newsgroup: comp.lang.c

    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 28.06.2025 02:56, Keith Thompson wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 27.06.2025 02:10, Keith Thompson wrote:
    scott@slp53.sl.home (Scott Lurndal) writes:
    [...]
    But not all decimal floating point implementations used "hex
    floating point".

    Burroughs medium systems had BCD floating point - one of the advantages
    was that it could exactly represent any floating point number that
    could be specified with a 100 digit mantissa and a 2 digit exponent.

    BCD uses 4 bits to represent values from 0 to 9. That's about 83%
    efficient relative to pure binary. (And it still can't represent 1/3.)

    That's a problem of where your numbers stem from. "1/3" is a formula!

    1/3 is also a C expression with the value 0. But what I was
    referring to was the real number 1/3, the unique real number that
    yields one when multiplied by three.

    Yes, sure. That was also how I interpreted it; that you meant (in
    "C" parlance) 1.0/3.0.

    As mentioned elsethread, I was referring to the real value.
    1.0/3.0 as a C expression yields a value of type double, typically
    0.333333333333333314829616256247390992939472198486328125, or
    0x1.5555555555555p-2 in hexadecimal. (Unless FLT_RADIX is a multiple
    of 3, which pretty much never happens.)

    My point is that any choice of radix in a floating-point format
    means that there are going to be some useful real numbers you
    can't represent.

    Yes, sure. Sqrt(2.0) for example, or 'pi', or 'e', or your 1.0/3.0
    example. These numbers have in common that there's no finite length
    standard representation; you usually represent them as formulas (as
    in your example), or in computers as constants in abbreviated form.

    Again, by 1/3 I meant the real number that is the mathematical result of
    that formula, and of 2/6, and of 1-2/3, not the formula itself.

    In numerics you have various places where errors appear in principle
    and accumulate. One of the errors is when transferred from (and to)
    external representation. Another one is when performing calculations
    with internally imprecise represented numbers.

    The point with decimal encoding addresses the lossless (and fast[*]) input/output of given [finite] numbers. Numbers that have been (and
    are) used e.g. in financial contexts (Billions of Euros and Cents).
    And you can also perform exact arithmetic in the typical operations
    (sum, multiply, subtract)[**] without errors.[***]

    Which is convenient only because we happen to use decimal notation
    when writing numbers.

    [...]
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Sun Jun 29 04:44:47 2025
    From Newsgroup: comp.lang.c

    On 2025-06-26, Richard Heathfield <rjh@cpax.org.uk> wrote:
    On 26/06/2025 10:01, Lawrence D'Oliveiro wrote:
    C
    no longer offers a great performance advantage over a higher-level
    language, so you might as well use the higher-level language.

    Nothing is stopping you, but then comp.lang.c no longer offers
    you the facility to discuss your chosen language, so you might as
    well use the higher-level language's group.

    Even a broken clock is right once or twice in a 24h period.

    He did say that this advantage was in the manipulation
    of multi-precision integers, like big decimals.

    Indeed, most of the time is spent in the math routines themselves, not
    in what dispatches them. Calculations written in C, using a certain
    bignum library, won't be much faster than the same calculations in a
    higher-level language using the same bignum library.

    A higher-level language may also have a compiler which does
    optimizations on the bignum code, such as CSE and constant folding,
    basically treating it the same as fixnum integers.

    C code consisting of calls into a bignum library will not be
    aggressively optimized. If you wastefully perform a calculation
    with constants that could be done at compile time, it almost
    certainly won't be.

    Example:

    (compile-toplevel '(expt 2 150))
    #<sys:vm-desc: a103620>
    (disassemble *1)
    data:
    0: 1427247692705959881058285969449495136382746624
    syms:
    code:
    0: 10000400 end d0
    instruction count:
    1
    #<sys:vm-desc: a103620>

    The compiled code just retrieves the bignum integer result from static
    data register d0. This is just from the compiler finding "expt" to be
    in a list of functions that are reducible at compile time over constant
    inputs; no special reasoning about large integers.

    But if you were to write the C code to initialize a bignum from 2, and
    one from 150, and then call the bignum exponentiation routine, I doubt
    you'd get the compiler to optimize all that away.

    Maybe with a sufficiently advanced link-time optimization ...
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From James Kuyper@jameskuyper@alumni.caltech.edu to comp.lang.c on Sun Jun 29 09:23:01 2025
    From Newsgroup: comp.lang.c

    On 2025-06-28 19:59, Lawrence D'Oliveiro wrote:
    On Fri, 27 Jun 2025 14:52:42 +0300, Michael S wrote:

    For fixed point, anything "decimal" is even less useful than in floating
    point. I can't find any good explanation for the use of "decimal" things
    in some early computers except that their designers were, maybe, good
    engineers, but 2nd-rate thinkers.

    IEEE-754 now includes decimal floating-point formats in addition to the
    older binary ones. I think this was originally a separate spec
    (IEEE-854), but it got rolled into the 2008 revision of IEEE-754.

    It's somewhat more complicated than that. IEEE-854 is a
    radix-independent standard, otherwise equivalent to IEEE-754.
    Basically, IEEE-754 is "IEEE-854 with radix==2". A conforming
    implementation of any version of C could also have used "IEEE-854
    with radix==10". However, the decimal floating point formats added
    to IEEE-754 in 2008 were not simply "IEEE-854 with radix==10", and
    therefore could not have been used as standard floating types in
    earlier versions of the C standard. See
    <https://en.wikipedia.org/wiki/Decimal_floating_point> for more details.
    There are real systems that implement these new formats in hardware. A
    lot of wording was added and changed in the C standard to allow these
    new formats to be used as C's new decimal floating types.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Sun Jun 29 17:13:36 2025
    From Newsgroup: comp.lang.c

    On Sun, 29 Jun 2025 04:44:47 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:

    On 2025-06-26, Richard Heathfield <rjh@cpax.org.uk> wrote:
    On 26/06/2025 10:01, Lawrence D'Oliveiro wrote:
    C
    no longer offers a great performance advantage over a higher-level
    language, so you might as well use the higher-level language.

    Nothing is stopping you, but then comp.lang.c no longer offers
    you the facility to discuss your chosen language, so you might as
    well use the higher-level language's group.

    Even a broken clock is right once or twice in a 24h period.

    He did say that this advantage was in the manipulation
    of multi-precision integers, like big decimals.

    Indeed, most of the time is spent in the math routines themselves,
    not in what dispatches them. Calculations written in C, using a
    certain bignum library, won't be much faster than the same
    calculations in a higher-level language, using the same bignum
    library.


    I did a few "native" python vs python+GMP vs C+GMP multiplication
    measurements ~6 months ago.
    See <20241225110505.00001733@yahoo.com> and followup
    The end result is that python always loses to C+GMP by significant
    margin except for case of python+GMP combo with absolutely huge
    numbers, much bigger ones than I would ever expect to use outside
    of benchmarks, where it comes close.
    "Native" python loses especially badly at bigger numbers because of
    less sophisticated algorithms.
    Python+GMP loses especially badly at smaller numbers. I do not know an
    exact reason, but would guess that it's somehow related to differences
    in memory management between python and GMP.

    A higher level language may also have a compiler which does
    optimizations on the bignum code, such as CSE and constant folding,
    basically treating it the same like fixnum integers.

    C code consisting of calls into a bignum library will not be
    aggressively optimized. If you wastefully perform a calculation
    with constants that could be done at compile time, it almost
    certainly won't be.

    Example:

    (compile-toplevel '(expt 2 150))
    #<sys:vm-desc: a103620>
    (disassemble *1)
    data:
    0: 1427247692705959881058285969449495136382746624
    syms:
    code:
    0: 10000400 end d0
    instruction count:
    1
    #<sys:vm-desc: a103620>

    The compiled code just retrieves the bignum integer result from
    static data register d0. This is just from the compiler finding
    "expt" to be in a list of functions that are reducible at compile
    time over constant inputs; no special reasoning about large integers.

    But if you were to write the C code to initialize a bignum from 5,
    and one from 150, and then call the bignum exponentiation routine, I
    doubt you'd get the compiler to optimize all that away.

    Maybe with a sufficiently advanced link-time optimization ...


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Sun Jun 29 20:40:34 2025
    From Newsgroup: comp.lang.c

    On 29.06.2025 05:51, Keith Thompson wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 28.06.2025 02:56, Keith Thompson wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 27.06.2025 02:10, Keith Thompson wrote:
    scott@slp53.sl.home (Scott Lurndal) writes:
    [...]
    But not all decimal floating point implementations used "hex
    floating point".

    Burroughs medium systems had BCD floating point - one of the advantages
    was that it could exactly represent any floating point number that
    could be specified with a 100 digit mantissa and a 2 digit exponent.

    BCD uses 4 bits to represent values from 0 to 9. That's about 83%
    efficient relative to pure binary. (And it still can't represent 1/3.)
    That's a problem of where your numbers stem from. "1/3" is a formula!

    1/3 is also a C expression with the value 0. But what I was
    referring to was the real number 1/3, the unique real number that
    yields one when multiplied by three.

    Yes, sure. That was also how I interpreted it; that you meant (in
    "C" parlance) 1.0/3.0.

    As mentioned elsethread, I was referring to the real value.

    Yes, me too, when I saw your original 1/3. - You *then* spoke about
    that being 0 in "C" (with integer division) I explained that I took
    it as what I still think was what you were saying with "1/3" being
    the real value, but (since to address your 1/3==0) I explained that
    I meant the real value (that you would get in "C" [approximately]
    by 1.0/3.0, which of course differs from the real math number).

    I guess we might have been talking cross-purpose.

    What I was trying to explain were different things on different
    levels.

    a) Errors on input/output conversion.

    the value 1.33 - BCD no errors, two's-complement binary w/ errors;
    the real value 1.333333... - generally an error (infinite string);
    0.10 - in BCD no errors, in binary errors;

    b) Errors in calculations.

    all exact internal representations of external quantities can be
    calculated correctly (with the previously presented conditions)
    in decimal; examples 0.10, 1.33, 1.33333333333333333333333, but
    *not* 1.33333333333333333333333... (the infinite form, whether
    expressed as depicted here with '...' or expressed as the
    formula '1/3').

    1.0/3.0 as a C expression yields a value of type double, typically
    0.333333333333333314829616256247390992939472198486328125 or

    There are numbers that can be expressed accurately in binary; as
    0.5, 1.0, 2.0 (for example). Those can also be expressed accurately
    with decimal encoding.

    Other finite numbers/number-sequences can be expressed accurately
    with decimal encoding, as 0.1, 1.33 (for example), but only specific
    ones can be represented accurately with binary encoding.

    With infinite sequences of digits you will have problems with both
    internal representations (binary, decimal); as you see with specific
    real values as 'sqrt(2)', 'pi', 'e', '1/3' (for example) which are
    cut at some decimal place internally depending on supported "register
    width".

    [...]

    In numerics you have various places where errors appear in principle
    and accumulate. One of the errors is when transferred from (and to)
    external representation. Another one is when performing calculations
    with internally imprecise represented numbers.

    The point with decimal encoding addresses the lossless (and fast[*])
    input/output of given [finite] numbers. Numbers that have been (and
    are) used e.g. in financial contexts (Billions of Euros and Cents).
    And you can also perform exact arithmetic in the typical operations
    (sum, multiply, subtract)[**] without errors.[***]

    Which is convenient only because we happen to use decimal notation
    when writing numbers.

    But that exactly is the point! With decimal encoding you get an exact
    internal picture of the external representations of the numbers, if
    only because the external representations are finite. (The same holds
    for the output.) With binary encoding you have the first degradation
    during that I/O process. Decimal encoding, OTOH, is robust here.

    That's why it's so advantageous specifically for the financial sector.
    It would not be the best choice where a lot of internal calculations
    are done, as (for example) in calculating hydrodynamic processes.

    Later, when it comes to internal calculations, yet more deficiencies
    appear (with both encodings; but decimal is more robust in the basic
    operations, where in binary the previous errors contribute to further
    degradation).

    (I completely left out algorithmic error management here (numerics),
    because it applies in principle to all algorithms [mostly] independent
    of the encoding; this would go too far.)

    BTW, not only mainframes and the major programming languages used for
    financial software supported decimal encoding. Also pocket calculators
    did that. (For example, the BASIC-programmable and interactively
    usable Sharp PC 1401 supported real-number processing using decimal
    encoding: 10 visible BCD digits, plus 2 "hidden" digits for internal
    rounding, a 2-digit exponent, and information for signs etc., all in
    all 8 bytes; implemented with in-memory calculations, not done in
    registers.)

    Decimal encoding: it's fast, has good properties (WRT errors and
    error propagation), but requires more space (in case that matters).

    Janis


    [...]


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Sun Jun 29 20:48:13 2025
    From Newsgroup: comp.lang.c

    On 29.06.2025 05:18, James Kuyper wrote:
    On 2025-06-28 23:03, Janis Papanagnou wrote:
    [ Some technical troubles - in case this post appeared already 30
    minutes ago (I don't see it), please ignore this re-sent post. ]

    On 28.06.2025 02:56, Keith Thompson wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 27.06.2025 02:10, Keith Thompson wrote:
    ...
    BCD uses 4 bits to represent values from 0 to 9. That's about 83%
    efficient relative to pure binary. (And it still can't represent 1/3.)
    That's a problem of where your numbers stem from. "1/3" is a formula!

    1/3 is also a C expression with the value 0. But what I was
    referring to was the real number 1/3, the unique real number that
    yields one when multiplied by three.

    Yes, sure. That was also how I interpreted it; that you meant (in
    "C" parlance) 1.0/3.0.

    No, it is very much the point that the C expression 1.0/3.0 cannot have
    the value he's talking about [...]

    I was talking about the Real Value. Indicated by the formula '1/3'.
    When Keith spoke about that being '0' I refined it to '1.0/3.0' to
    address this misunderstanding. (That's all to say here about that.)

    (For the _main points_ I tried to express I refer you to the longer
    post I just posted in reply to Keith's post.)

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Sun Jun 29 20:52:26 2025
    From Newsgroup: comp.lang.c

    In my posts, all values of the form "1.3333..." (or similar) should
    of course have been "0.3333..." (or the respective forms), as a
    representation of '1/3'. - Sorry for the typos!

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.lang.c on Sun Jun 29 12:14:30 2025
    From Newsgroup: comp.lang.c

    On 6/29/2025 11:52 AM, Janis Papanagnou wrote:
    In my posts, all values of the form "1.3333..." (or similar) should
    of course have been "0.3333..." (or the respective forms), as a
    representation of '1/3'. - Sorry for the typos!

    .(3)? ;^)
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From James Kuyper@jameskuyper@alumni.caltech.edu to comp.lang.c on Mon Jun 30 21:59:32 2025
    From Newsgroup: comp.lang.c

    On 2025-06-29 14:48, Janis Papanagnou wrote:
    On 29.06.2025 05:18, James Kuyper wrote:
    On 2025-06-28 23:03, Janis Papanagnou wrote:
    [ Some technical troubles - in case this post appeared already 30
    minutes ago (I don't see it), please ignore this re-sent post. ]

    On 28.06.2025 02:56, Keith Thompson wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 27.06.2025 02:10, Keith Thompson wrote:
    ...
    BCD uses 4 bits to represent values from 0 to 9. That's about 83%
    efficient relative to pure binary. (And it still can't represent 1/3.)

    That's a problem of where your numbers stem from. "1/3" is a formula!
    1/3 is also a C expression with the value 0. But what I was
    referring to was the real number 1/3, the unique real number that
    yields one when multiplied by three.

    Yes, sure. That was also how I interpreted it; that you meant (in
    "C" parlance) 1.0/3.0.

    No, it is very much the point that the C expression 1.0/3.0 cannot have
    the value he's talking about [...]

    I was talking about the Real Value. Indicated by the formula '1/3'.
    When Keith spoke about that being '0' I refined it to '1.0/3.0' to
    address this misunderstanding. (That's all to say here about that.)

    The real number 1/3 has a different value from the C expression 1/3
    (which is 0), and also from the C expression 1.0/3.0 (unless FLT_RADIX
    is a multiple of 3). It only spreads confusion to refer to 1.0/3.0 as if
    it had the value that Keith was talking about.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.c on Tue Jul 15 19:41:51 2025
    From Newsgroup: comp.lang.c

    On Fri, 27 Jun 2025 02:40:58 +0100, Richard Heathfield wrote:

    On 27/06/2025 01:39, Lawrence D'Oliveiro wrote:

    [...]if C is going to become more suitable for such high-
    precision calculations, it might need to become more Python-like.

    C is not in search of a reason to exist.

    Not in traditional fixed-precision arithmetic, anyway -- at least
    after it fully embraced IEEE 754.

    With higher-precision arithmetic, on the other hand, the traditional
    C paradigms may not be so suitable.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.lang.c on Wed Jul 16 03:55:14 2025
    From Newsgroup: comp.lang.c

    On 15/07/2025 20:41, Lawrence D'Oliveiro wrote:
    On Fri, 27 Jun 2025 02:40:58 +0100, Richard Heathfield wrote:

    On 27/06/2025 01:39, Lawrence D'Oliveiro wrote:

    [...]if C is going to become more suitable for such high-
    precision calculations, it might need to become more Python-like.

    C is not in search of a reason to exist.

    Not in traditional fixed-precision arithmetic, anyway -- at least
    after it fully embraced IEEE 754.

    With higher-precision arithmetic, on the other hand, the traditional
    C paradigms may not be so suitable.

    If you want something else, you know where to find it. There is
    no value in eroding the differences in all languages until only
    one universal language emerges. Vivat differentia.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.c on Sun Jul 20 00:16:56 2025
    From Newsgroup: comp.lang.c

    On Wed, 16 Jul 2025 03:55:14 +0100, Richard Heathfield wrote:

    On 15/07/2025 20:41, Lawrence D'Oliveiro wrote:

    On Fri, 27 Jun 2025 02:40:58 +0100, Richard Heathfield wrote:

    On 27/06/2025 01:39, Lawrence D'Oliveiro wrote:

    [...]if C is going to become more suitable for such high-
    precision calculations, it might need to become more Python-like.

    C is not in search of a reason to exist.

    Not in traditional fixed-precision arithmetic, anyway -- at least
    after it fully embraced IEEE 754.

    With higher-precision arithmetic, on the other hand, the
    traditional C paradigms may not be so suitable.

    If you want something else, you know where to find it. There is no
    value in eroding the differences in all languages until only one
    universal language emerges. Vivat differentia.

    You sound as though you don’t want languages copying ideas from each
    other.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.lang.c on Sun Jul 20 07:58:53 2025
    From Newsgroup: comp.lang.c

    On 20/07/2025 01:16, Lawrence D'Oliveiro wrote:
    On Wed, 16 Jul 2025 03:55:14 +0100, Richard Heathfield wrote:

    On 15/07/2025 20:41, Lawrence D'Oliveiro wrote:

    On Fri, 27 Jun 2025 02:40:58 +0100, Richard Heathfield wrote:

    On 27/06/2025 01:39, Lawrence D'Oliveiro wrote:

    [...]if C is going to become more suitable for such high-
    precision calculations, it might need to become more Python-like.

    C is not in search of a reason to exist.

    Not in traditional fixed-precision arithmetic, anyway -- at least
    after it fully embraced IEEE 754.

    With higher-precision arithmetic, on the other hand, the
    traditional C paradigms may not be so suitable.

    If you want something else, you know where to find it. There is no
    value in eroding the differences in all languages until only one
    universal language emerges. Vivat differentia.

    You sound as though you don’t want languages copying ideas from each
    other.

    Good, because I don't.

    There's nothing wrong with new languages pinching ideas from old
    languages - that's creativity and progress, especially when those
    ideas are combined in new and interesting ways, and you can keep
    on adding those ideas right up until your second reference
    implementation goes public.

    But going the other way turns a programming language into a
    constantly moving target that it's impossible for more than a
    handful of people to master - the handful in question being those
    who decide what's in and what's out. This is bad for programmers'
    expertise and bad for the industry.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Sun Jul 20 11:28:54 2025
    From Newsgroup: comp.lang.c

    On 20.07.2025 08:58, Richard Heathfield wrote:
    On 20/07/2025 01:16, Lawrence D'Oliveiro wrote:
    On Wed, 16 Jul 2025 03:55:14 +0100, Richard Heathfield wrote:
    [...]

    You sound as though you don’t want languages copying ideas from each
    other.

    Hmm... this is an interesting thought. As an instant reflex I'd agree
    with the advantage of picking "good" ideas from other languages. Upon
    reconsideration I have some doubts, though; not only because some
    ideas may fit one language but not really another. To me many
    languages give the impression of having been patched up instead of
    being well designed from scratch; either evolving by featuritis of
    "good ideas" or needing changes to address inherent shortcomings of
    the basic language design.

    [...]
    There's nothing wrong with new languages pinching ideas from old
    languages - that's creativity and progress, especially when those ideas
    are combined in new and interesting ways, and you can keep on adding
    those ideas right up until your second reference implementation goes
    public.

    But going the other way turns a programming language into a constantly
    moving target that it's impossible for more than a handful of people
    to master - the handful in question being those who decide what's in
    and what's out. This is bad for programmers' expertise and bad for the
    industry.

    Incompatibilities or change of semantics between versions would be bad!
    For coherently designed and consistently enhanced languages that might
    be less a problem. Having a well designed "Common Language Base" would
    not impose too much effort to master [coherently matching] extensions.
    Of course here we are speaking (only) about "C", specifically, so the
    basic language preconditions are set (and its decades long evolution
    path clearly visible).

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.c on Tue Jul 29 00:56:14 2025
    From Newsgroup: comp.lang.c

    On Sun, 29 Jun 2025 09:23:01 -0400, James Kuyper wrote:

    It's somewhat more complicated than that. IEEE-784 is a
    radix-independent standard, otherwise equivalent to IEEE-754.

    Did you mean IEEE-854?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dave_thompson_2@dave_thompson_2@comcast.net to comp.lang.c on Tue Jul 29 10:49:19 2025
    From Newsgroup: comp.lang.c

    (Sorry for delay, this got stuck)

    On Mon, 14 Apr 2025 15:56:56 -0700, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Huge numbers of systems already use the perfectly reasonable POSIX
    epoch, 1970-01-01 00:00:00 UTC. I can think of no good reason to
    standardize anything else.

    NTP uses unsigned 32-bit seconds from 1900-01-01 'UTC' (really a
    blend of GMT then TAI, aligned like UTC but not actually representing
    the leap seconds; yes, that's a bodge). It will wrap in 2036, about 2
    years before programs and data (still) using signed 32-bit seconds
    from 1970 more famously will.

    Astronomers count Julian Day Numbers from 4713 BC proleptic Julian.
    This was chosen to ensure that all astronomical observations or events
    in recorded history have positive dates.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From James Kuyper@jameskuyper@alumni.caltech.edu to comp.lang.c on Tue Jul 29 21:13:27 2025
    From Newsgroup: comp.lang.c

    On 2025-07-28 20:56, Lawrence D'Oliveiro wrote:
    On Sun, 29 Jun 2025 09:23:01 -0400, James Kuyper wrote:

    It's somewhat more complicated than that. IEEE-784 is a
    radix-independent standard, otherwise equivalent to IEEE-754.

    Did you mean IEEE-854?

    Yes - Sorry for the confusion.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From James Kuyper@jameskuyper@alumni.caltech.edu to comp.lang.c on Tue Jul 29 21:18:48 2025
    From Newsgroup: comp.lang.c

    On 2025-07-29 10:49, dave_thompson_2@comcast.net wrote:
    ...
    Astronomers count Julian Day Numbers from 4713 BC proleptic Julian.
    This was chosen to ensure that all astronomical observations or events
    in recorded history have positive dates.

    While that is one benefit of using that date, it was in fact chosen
    because several different astronomical cycles associated with common
    ancient calendar systems all align together at that time. That makes
    conversion between Julian Days and any of those calendars simpler.
    --- Synchronet 3.21a-Linux NewsLink 1.2