• Algol 68 / Genie - opinions on local procedures?

    From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Mon Aug 18 04:52:24 2025
    From Newsgroup: comp.lang.misc

    In a library source for rational numbers I'm using a GCD function
    to normalize the rational numbers. This function is used from other
    rational operations regularly since all numbers are always stored
    in their normalized form.

    I defined the 'PROC rat_gcd' in global space but with Algol 68 it
    could also be defined locally (to not pollute the global namespace)
    like

    ELSE # normalize #
    PROC rat_gcd = ... ;

    INT nom = ABS a, den = ABS b;
    INT sign = SIGN a * SIGN b;
    INT q = rat_gcd (nom, den);
    ( sign * nom OVER q, den OVER q )
    FI

    though performance measurements showed some noticeable degradation
    with a local function definition as depicted.
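    (For concreteness - the post elides the actual body of 'rat_gcd', and
    that elision stands; purely as a hypothetical sketch, a plain
    Euclidean GCD kept local to the normalizing clause might read:)

    ```algol68
    # hypothetical sketch only - the real body is not shown in the post; #
    # a plain Euclidean GCD in Algol 68 Genie notation #
    PROC rat_gcd = (INT a, b) INT:
    BEGIN
       INT x := ABS a, y := ABS b;
       WHILE y /= 0
       DO
          INT t = y;      # remember old y #
          y := x MOD y;   # Euclid's step #
          x := t
       OD;
       x                  # yields gcd of the absolute values #
    END;
    ```

    (Whether global or local, the body is identical; only its placement
    in the scope, and apparently Genie's elaboration cost, differ.)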

    I'd prefer it to be local but since it's ubiquitously used in that
    library the performance degradation (about 15% on avg) annoys me.

    Opinions on that?

    Janis
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Mon Aug 18 16:54:53 2025
    From Newsgroup: comp.lang.misc

    On 18/08/2025 03:52, Janis Papanagnou wrote:
    I defined the 'PROC rat_gcd' in global space but with Algol 68 it
    could also be defined locally (to not pollute the global namespace)
    like
    [... snip ...]
    though performance measurements showed some noticeable degradation
    with a local function definition as depicted.

    I can't /quite/ reproduce your problem. If I run just the
    interpreter ["a68g myprog.a68g"] then on my machine the timings are
    identical. If I optimise ["a68g -O3 myprog.a68g"], then /first time/
    through, I get a noticeable degradation [about 10% on my machine],
    but the timings converge if I run them repeatedly. YMMV. I suspect
    it's to do with storage management, and later runs are able to re-use
    heap storage that had to be grabbed first time. But that could be
    completely up the pole. Marcel would probably know.

    If you see the same, then I suggest you don't run programs
    for a first time. [:-)]

    I'd prefer it to be local but since it's ubiquitously used in that
    library the performance degradation (about 15% on avg) annoys me.
    Opinions on that?

    Personally, I'd always go for the version that looks nicer
    [ie, in keeping with your own inclinations, with the spirit of A68,
    with the One True (A68) indentation policy, and so on]. If you're
    worried about 15%, that will be more than compensated for by your
    next computer! If you're Really Worried about 15%, then I fear it's
    back to C [or whatever]; but that will cost you more than 15% in
    development time.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/West
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Mon Aug 18 18:30:54 2025
    From Newsgroup: comp.lang.misc

    On 18.08.2025 17:54, Andy Walker wrote:
    On 18/08/2025 03:52, Janis Papanagnou wrote:
    I defined the 'PROC rat_gcd' in global space but with Algol 68 it
    could also be defined locally (to not pollute the global namespace)
    like
    [... snip ...]
    though performance measurements showed some noticeable degradation
    with a local function definition as depicted.

    I can't /quite/ reproduce your problem. If I run just the
    interpreter ["a68g myprog.a68g"] then on my machine the timings are
    identical. If I optimise ["a68g -O3 myprog.a68g"], then /first time/
    through, I get a noticeable degradation [about 10% on my machine],
    but the timings converge if I run them repeatedly. YMMV.

    Actually, with more tests, the variance got even greater; from 10%
    to 45% degradation. The variances, though, did not converge [in my
    environment].

    I suspect
    it's to do with storage management, and later runs are able to re-use
    heap storage that had to be grabbed first time.

    I also suspected some storage management effect; maybe that the GC
    got active at various stages. (But the code did not use anything
    that would require GC; to be honest, I'm puzzled.)

    But that could be
    completely up the pole. Marcel would probably know.

    If you see the same, then I suggest you don't run programs
    for a first time. [:-)]

    :-)


    I'd prefer it to be local but since it's ubiquitously used in that
    library the performance degradation (about 15% on avg) annoys me.
    Opinions on that?

    Personally, I'd always go for the version that looks nicer
    [ie, in keeping with your own inclinations, with the spirit of A68,
    with the One True (A68) indentation policy, and so on].

    That's what I'm tending towards. I think I'll put the GCD function
    in local scope to keep it away from the interface.

    If you're
    worried about 15%, that will be more than compensated for by your
    next computer!

    Actually I'm very conservative concerning computers; mine is 15+
    years old, and although I "recently" thought about getting an update
    here it's not my priority. ;-)

    If you're Really Worried about 15%, then I fear it's

    Not really. It's not the 10-45%, it's more the feeling that a library
    function should not only conform to the spirit of good software design
    but also be efficiently implemented (also in Algol 68).

    The "problem" (my "problem") here is that the effect should not appear
    in the first place since static scoping should not cost performance; I
    suppose it's an effect of Genie being effectively an interpreter here.

    But my Algol 68 programming is anyway just recreational, for fun, so
    I'll go with the cleaner (slower) implementation.

    back to C [or whatever]; but that will cost you more than 15% in
    development time.

    Uh-oh! - But no, that's not my intention here. ;-)

    Thanks!

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Tue Aug 19 00:45:00 2025
    From Newsgroup: comp.lang.misc

    On 18/08/2025 17:30, Janis Papanagnou wrote:
    Actually, with more tests, the variance got even greater; from 10%
    to 45% degradation. The variances, though, did not converge [in my
    environment].

    Ah. Then I backtrack from my previous explanation to an
    alternative, that your 15yo computer has insufficient cache, so
    every new run chews up more and more real storage. Or something.
    You may get some improvement by running "sweep heap" or similar
    from time to time, or using pragmats to allocate more storage.

    I also suspected some storage management effect; maybe that the GC
    got active at various stages. (But the code did not use anything
    that would require GC; to be honest, I'm puzzled.)

    ISTR that A68G uses heap storage rather more than you might
    expect. I think Marcel's documentation has more info.

    [...]
    If you're
    worried about 15%, that will be more than compensated for by your
    next computer!
    Actually I'm very conservative concerning computers; mine is 15+
    years old, and although I "recently" thought about getting an update
    here it's not my priority. ;-)

    Ah. I thought I was bad, keeping computers 10 years or so!
    I got a new one a couple of years back, and the difference in speed
    and storage was just ridiculous.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Soler
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Tue Aug 19 02:44:58 2025
    From Newsgroup: comp.lang.misc

    On 19.08.2025 01:45, Andy Walker wrote:
    [...]
    If you're
    worried about 15%, that will be more than compensated for by your
    next computer!
    Actually I'm very conservative concerning computers; mine is 15+
    years old, and although I "recently" thought about getting an update
    here it's not my priority. ;-)

    Ah. I thought I was bad, keeping computers 10 years or so!
    I got a new one a couple of years back, and the difference in speed
    and storage was just ridiculous.

    Well, used software tools (and their updates) required me to at least
    upgrade memory! (That's actually one point that annoys me in "modern"
    software development; rarely anyone seems to care economizing resource requirements.) But all the rest, especially the things that influence performance (CPU [speed, cores], graphic card, HDs/Cache, whatever) is comparably old stuff in my computer; but it works for me.[*]

    And, by the way, thanks for your suggestions and helpful information
    on my questions in all my recent Algol posts! It's also very pleasant
    being able to substantially exchange ideas on this (IMO) interesting
    legacy topic.

    Janis

    [*] If anything I'd probably only need an ASCII accelerating graphics
    card; see https://www.bbspot.com/News/2003/02/ati_ascii.html ;-)

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Wed Aug 20 00:47:31 2025
    From Newsgroup: comp.lang.misc

    On 19/08/2025 01:44, Janis Papanagnou wrote:
    [...] (That's actually one point that annoys me in "modern"
    software development; rarely anyone seems to care economizing resource
    requirements.) [...]

    Yeah. From time to time I wonder what would happen if we ran
    7th Edition Unix on a modern computer. Sadly, I have to admit that
    I too am rather careless of resources; if you have terabytes of SSD,
    it seems to be a waste of time worrying about a few megabytes.

    And, by the way, thanks for your suggestions and helpful information
    on my questions in all my recent Algol posts! It's also very pleasant
    being able to substantially exchange ideas on this (IMO) interesting
    legacy topic.

    You're very welcome, and I reciprocate your pleasure.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Peerson
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.lang.misc on Wed Aug 20 00:43:22 2025
    From Newsgroup: comp.lang.misc

    On Wed, 20 Aug 2025 00:47:31 +0100, Andy Walker wrote:

    From time to time I wonder what would happen if we ran
    7th Edition Unix on a modern computer.

    The Linux kernel source is currently over 40 million lines, and I
    understand the vast majority of that is device drivers.

    If you were to run an old OS on new hardware, that would need drivers for
    that new hardware, too.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Wed Aug 20 23:58:58 2025
    From Newsgroup: comp.lang.misc

    On 20/08/2025 01:43, Lawrence D’Oliveiro wrote:
    [I wrote:]
    From time to time I wonder what would happen if we ran
    7th Edition Unix on a modern computer.
    The Linux kernel source is currently over 40 million lines, and I
    understand the vast majority of that is device drivers.

    You seem to be making Janis's point, but that doesn't seem to
    be your intention?

    If you were to run an old OS on new hardware, that would need drivers
    for that new hardware, too.

    Yes, but what is so special about a modern disc drive, monitor,
    keyboard, mouse, ... that it needs "the vast majority" of 40M lines
    more than its equivalent for a PDP-11? Does this not again make
    Janis's point?

    Granted that the advent of 32- and 64-bit integers and addresses
    makes some programming much easier, and that we can no longer expect
    browsers and other major tools to fit into 64+64K bytes, is the actual
    bloat in any way justified? It's not just kernels and user software --
    it's also the documentation. In V7, "man cc" generates just under two
    pages of output; on my current computer, it generates over 27000 lines,
    call it 450 pages, and is thereby effectively unprintable and unreadable,
    so it is largely wasted.

    For V7, the entire documentation fits comfortably into two box
    files, and the entire source code is a modest pile of lineprinter output.
    Most of the commands on my current computer are undocumented and unused,
    and I have no idea at all what they do.

    Yes, I know how that "just happens", and I'm observing rather
    than complaining [I'd rather write programs, browse and send/read
    e-mails on my current computer than on the PDP-11]. But it does all
    give food for thought.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Peerson
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.lang.misc on Thu Aug 21 02:59:32 2025
    From Newsgroup: comp.lang.misc

    On Wed, 20 Aug 2025 23:58:58 +0100, Andy Walker wrote:

    On 20/08/2025 01:43, Lawrence D’Oliveiro wrote:

    If you were to run an old OS on new hardware, that would need
    drivers for that new hardware, too.

    Yes, but what is so special about a modern disc drive, monitor,
    keyboard, mouse, ... that it needs "the vast majority" of 40M lines
    more than its equivalent for a PDP-11?

    Keyboard and mouse -- USB.

    Disk drive -- that might connect via SCSI or SATA. Either one requires
    common SCSI-handling code. Plus you want a filesystem, don’t you?
    Preferably a modern one with better performance and reliability than
    Bell Labs was able to offer, back in the day. That requires caching
    and journalling support. Plus modern drives have monitoring built-in,
    which you will want to access. And you want RAID, which didn’t exist
    back then?

    Monitor -- video in the Linux kernel goes through the DRM (“Direct
    Rendering Manager”) layer. Unix didn’t have GUIs back then, but you
    will likely want them now. The PDP-11 back then accessed its console
    (and other terminals) through serial ports. You might still want
    drivers for those, too.

    Both video and disk handling in turn would be built on the common
    PCI-handling code.

    Remember there is also hot-plugging support for these devices, which
    was unheard of back in the day.

    The CPU+support chipset itself will need some drivers, beyond what
    was conceived back then: for example, control of the various levels
    of caching, power saving, sensor monitoring, and of course memory
    management needs to be much more sophisticated nowadays.

    And what about networking? Would you really want to run a machine in a
    modern environment without networking?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Thu Aug 21 21:02:55 2025
    From Newsgroup: comp.lang.misc

    On 19/08/2025 01:44, Janis Papanagnou wrote:
    [...] (That's actually one point that annoys me in "modern"
    software development; rarely anyone seems to care economizing resource
    requirements.) [...]

    On 21.08.2025 00:58, Andy Walker wrote:
    On 20/08/2025 01:43, Lawrence D’Oliveiro wrote:
    The Linux kernel source is currently over 40 million lines, and I
    understand the vast majority of that is device drivers.

    You seem to be making Janis's point, but that doesn't seem to
    be your intention?

    If you were to run an old OS on new hardware, that would need drivers for
    that new hardware, too.

    Yes, but what is so special about a modern disc drive, monitor,
    keyboard, mouse, ... that it needs "the vast majority" of 40M lines more
    than its equivalent for a PDP-11? [...]

    This was actually what I was also thinking when I read Lawrence's
    statement. (And even given his later, more thorough list of modern
    functionalities, this still doesn't quite explain the need for *so*
    many resources, IMO. I mean, didn't they fly to the moon
    in capsules with computers with kilobytes of memory? Yes, nowadays
    we have more features to support. But in previous days they *had*
    to economize; they had to "squeeze" algorithms to fit into 1 kiB
    of memory. Nowadays no one cares. And the computers running that
    software are an "externality"; there's no incentive, it seems,
    to write the software in a resource-conscious, ergonomic way.)

    But that was (in my initial complaint; see above) anyway just one
    aspect of many.

    You already mentioned documentation. There we not only see these
    extremely huge and often badly structured, unclear texts, but the
    information-to-text-size ratio is also often extremely imbalanced;
    to mention a few keywords: DOC, HTML, XML, JSON - where the
    problem is not (only) that one or the other of the formats is
    absolutely huge, but also that it's relatively huge compared to
    an equally or better fitting use of a more primitive format.

    Related to that: some HTML pages you load contain text payloads
    of just a few kiB, but carry not only the HTML overhead; they
    also load mebi- (or gibi-?) bytes through dozens of JS libraries,
    and they're not even used! And I haven't even mentioned pages that
    add more storage and performance demands due to advertisement
    logic (with more delays, and "of course" no consideration of data
    privacy); but that, of course, is intentional (it's your choice).

    Economy is also related to GUI ergonomics, in configurability and
    usability. You can configure all sorts of GUI properties like
    schemes/appearance, you can adjust buttons left or right,
    but you cannot get a button with a necessary function, or one
    function in an easily accessible way. GUIs are overloaded with
    all sorts of trash, which inevitably leads to uneconomic use,
    while necessary features are unsupported or cannot be configured.
    (And providing such [merely] fancy features also contributes to
    the code size.)

    Then there are the unnecessary dependencies. Just recently there
    was a discussion about (I think) the ffmpeg tool; it was shown
    that it includes hundreds of external libraries! Worse yet, many
    of them don't serve its main task (video processing/converting)
    but things like LDAP, and *tons* of libraries concerning Samba;
    the latter is also a problem of bad software organization, given
    that so many libraries have to be added for SMB "support"
    (whether that should be part of a video converter or not).

    But there's also the performance, or the system/application design.
    If you start, e.g., a picture viewer, you may have to wait a long
    time because the software designer thought it a good idea
    to present the directory tree in a separate part of the window;
    to achieve that, the program needs to recursively parse a
    huge subdirectory structure, and until you finally see that
    single picture you wanted to see - and whose file name
    you already provided as an argument! - half a minute has passed.

    Or the use of bad algorithms. Like a graphics processing program
    that doesn't terminate when trying to rotate a large image by 90°,
    because it tries to do the rotation unsophisticatedly, with
    a copy of the huge memory and with bit-wise operations, instead of
    using fast and lossless in-place algorithms (that have been commonly
    known for half a century already).
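    (For illustration - an untested sketch, not from the thread - the
    commonly known in-place method rotates each ring of a square matrix
    with a four-way element cycle, needing only one temporary element
    rather than a full copy of the image:)

    ```algol68
    # untested sketch: clockwise 90-degree rotation of a square matrix, #
    # in place, via the classic four-way cycle per ring #
    PROC rotate 90 = (REF [,] INT a) VOID:
    BEGIN
       INT n = 1 UPB a;                  # assumes 1-based square bounds #
       FOR ring TO n OVER 2
       DO
          INT lo = ring, hi = n - ring + 1;
          FOR i FROM lo TO hi - 1
          DO
             INT off = i - lo;
             INT tmp = a[lo, i];         # one temporary, no full copy #
             a[lo, i] := a[hi - off, lo];
             a[hi - off, lo] := a[hi, hi - off];
             a[hi, hi - off] := a[i, hi];
             a[i, hi] := tmp
          OD
       OD
    END;
    ```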

    Etc. etc. - Above just off the top of my head; there's surely
    much more to say about economy and software development.

    And an important consequence is that bad design and bloat usually
    also make systems less stable and less reliable. And it's
    often hard (or even impossible) to fix such monstrosities.

    <end of rant>

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Sat Aug 23 00:42:01 2025
    From Newsgroup: comp.lang.misc

    On 21/08/2025 03:59, Lawrence D’Oliveiro wrote:
    On Wed, 20 Aug 2025 23:58:58 +0100, Andy Walker wrote:
    On 20/08/2025 01:43, Lawrence D’Oliveiro wrote:
    If you were to run an old OS on new hardware, that would need
    drivers for that new hardware, too.
    Yes, but what is so special about a modern disc drive, monitor,
    keyboard, mouse, ... that it needs "the vast majority" of 40M lines
    more than its equivalent for a PDP-11?
    Keyboard and mouse -- USB. [...]

    You've given us a list of 20-odd features of modern systems
    that have been developed since 7th Edition Unix, and could no doubt
    think of another 20. What you didn't attempt was to explain why all
    these nice things need to occupy 40M lines of code. That's, give or
    take, 600k pages of code, call it 2000 books. That's, on your figures,
    just the kernel source; specifications [assuming there are such!] and
    documentation no doubt double that, and it's already more than normal
    people can read and understand. There is similar bloat in the commands
    and in the manual entries. It's out of control, witness the updates
    that come in every few days. It's fatally easy to say of "sh" or "cc"
    or "firefox" or ... "Wouldn't it be nice if it did X?", and fatally
    hard to say "It shouldn't really be doing X.", as there's always the
    possibility of someone somewhere who might perhaps be using it.

    See also Janis's nearby article.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Kinross
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Sat Aug 23 02:29:54 2025
    From Newsgroup: comp.lang.misc

    On 23.08.2025 01:42, Andy Walker wrote:
    On 21/08/2025 03:59, Lawrence D’Oliveiro wrote:
    [...]

    You've given us a list of 20-odd features of modern systems
    that have been developed since 7th Edition Unix, and could no doubt
    think of another 20. What you didn't attempt was to explain why all
    these nice things need to occupy 40M lines of code. That's, give or
    take, 600k pages of code, call it 2000 books. That's, on your figures,
    just the kernel source; [...]

    That was a point I also found to be a very disturbing statement;
    I recall the kernel was designed to be small, and the time spent
    in kernel routines should generally also be short! - And now we
    have millions of lines that are either just idle or used against
    Unix's design and operating principles?

    Meanwhile - I think probably since AIX? - we no longer need
    to compile drivers into the kernel (as formerly with SunOS,
    for example). But does that really mean that all the drivers now
    bloat the kernel [as external modules] as well? - Sounds horrible.

    But I'm no expert on this topic, so I'm interested to be enlightened
    if the situation is really as bad as Lawrence sketched it.

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.lang.misc on Sat Aug 23 02:36:45 2025
    From Newsgroup: comp.lang.misc

    On Sat, 23 Aug 2025 00:42:01 +0100, Andy Walker wrote:

    What you didn't attempt was to explain why all these nice things
    need to occupy 40M lines of code.

    Go look at the code itself.
    --- Synchronet 3.21a-Linux NewsLink 1.2