• Mighty COBOL is lazy again...

    From Kellie Fitton@KELLIEFITTON@yahoo.com to comp.lang.cobol on Mon Jun 4 09:47:33 2018
    From Newsgroup: comp.lang.cobol

    Hello everyone,

    Most modern programming languages, such as C sharp or Java, use a
    programming technique called lazy initialization. This technique
    can help reduce CPU consumption, reduce program memory
    requirements, and improve program startup and overall system
    performance.

    This tactic delays the creation, initialization and use of
    variables, data structures and program functions (logic) until
    they are first needed; they then become usable/accessible
    on-the-fly during the runtime session [dynamically].
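
    A minimal sketch of the idea in COBOL terms (the program and
    data names here are invented for illustration): an expensive
    table load is guarded by a first-use flag, so it only happens
    the first time the data is actually referenced.

        IDENTIFICATION DIVISION.
        PROGRAM-ID. LAZYDEMO.
        DATA DIVISION.
        WORKING-STORAGE SECTION.
        01  WS-TABLE-FLAG          PIC X VALUE "N".
            88  RATE-TABLE-LOADED        VALUE "Y".
        01  WS-RATE-TABLE.
            05  WS-RATE            PIC 9(3)V99 OCCURS 1000 TIMES.
        01  WS-IDX                 PIC 9(4).
        PROCEDURE DIVISION.
        MAIN-PARA.
           *> The expensive load is deferred until the first request.
            PERFORM GET-RATE
            PERFORM GET-RATE
            STOP RUN.
        GET-RATE.
           *> Load the table on first use only; later calls skip the load.
            IF NOT RATE-TABLE-LOADED
                PERFORM LOAD-RATE-TABLE
                SET RATE-TABLE-LOADED TO TRUE
            END-IF.
        LOAD-RATE-TABLE.
           *> Stand-in for reading a file or computing the table.
            PERFORM VARYING WS-IDX FROM 1 BY 1 UNTIL WS-IDX > 1000
                MOVE ZERO TO WS-RATE (WS-IDX)
            END-PERFORM.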

    COBOL pioneered lazy initialization. The technique is known as
    modularized subprograms. Case in point: I am currently working on
    a large system that I have divided into two main programs, a GUI
    front-end and a business-logic back-end, plus twelve large
    modules that will function as independent callable subprograms
    and will be called dynamically.
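
    A rough sketch of one such dynamic call, assuming an invented
    subprogram name "INVPROC" and a generic communication area:
    because the program name is held in a data item, the module is
    located and loaded only when the CALL statement is actually
    executed.

        IDENTIFICATION DIVISION.
        PROGRAM-ID. DRIVER.
        DATA DIVISION.
        WORKING-STORAGE SECTION.
        01  WS-SUBPROG-NAME        PIC X(8)   VALUE "INVPROC".
        01  WS-COMM-AREA           PIC X(200) VALUE SPACES.
        PROCEDURE DIVISION.
        MAIN-PARA.
           *> The name is resolved at run time, so INVPROC is only
           *> located and loaded if this statement is reached.
            CALL WS-SUBPROG-NAME USING WS-COMM-AREA
                ON EXCEPTION
                    DISPLAY "Cannot load " WS-SUBPROG-NAME
            END-CALL
           *> CANCEL releases the subprogram's storage and returns it
           *> to its initial state for any later call.
            CANCEL WS-SUBPROG-NAME
            STOP RUN.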

    I would appreciate your insightful feedback on the following
    questions:

    1). What is the best optimized method to reduce program memory
    requirements?

    2). What is the best optimized method to obviate performance
    degradation when calling subprograms Dynamically?


    Your kind feedback is appreciated.






    COBOL - the elephant that can stand on its trunk...

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From docdwarf@docdwarf@panix.com () to comp.lang.cobol on Mon Jun 4 21:30:13 2018
    From Newsgroup: comp.lang.cobol

    In article <ab26e399-c4c6-445f-ac2b-e4738daa650e@googlegroups.com>,
    Kellie Fitton <KELLIEFITTON@yahoo.com> wrote:

    [snip]

    1). What is the best optimized method to reduce program memory
    requirements?

    Keep the program small and limited in function. Small programs use less
    core.


    2). What is the best optimized method to obviate performance
    degradation when calling subprograms Dynamically?

    Keep the programs large and varied in function. Large programs do more
    when they are called into core.

    DD
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Richard@riplin@azonic.co.nz to comp.lang.cobol on Mon Jun 4 16:09:57 2018
    From Newsgroup: comp.lang.cobol

    On Tuesday, June 5, 2018 at 4:47:34 AM UTC+12, Kellie Fitton wrote:
    Hello everyone,

    Most modern programming languages, such as C sharp or Java, use a
    programming technique called lazy initialization. This technique
    can help reduce CPU consumption, reduce program memory
    requirements, and improve program startup and overall system
    performance.

    This tactic delays the creation, initialization and use of
    variables, data structures and program functions (logic) until
    they are first needed; they then become usable/accessible
    on-the-fly during the runtime session [dynamically].

    COBOL pioneered lazy initialization. The technique is known as
    modularized subprograms. Case in point: I am currently working on
    a large system that I have divided into two main programs, a GUI
    front-end and a business-logic back-end, plus twelve large
    modules that will function as independent callable subprograms
    and will be called dynamically.

    I would appreciate your insightful feedback on the following
    questions:

    1). What is the best optimized method to reduce program memory
    requirements?

    2). What is the best optimized method to obviate performance
    degradation when calling subprograms Dynamically?

    Ahhh! That takes me back 40 years to when I developed in CIS
    COBOL for CP/M with 64KBytes of memory and for multiuser systems
    with 256KB. Even that was a step up from the earlier ICL 1900s
    and such with 16Kwords. I didn't think that anyone bothered with
    such things anymore, even I have a machine with 16GigaBytes -
    quarter of a _million_ times more memory. And then it has 'swap'
    as well.
    Before the Interprogram Communication Module was added in the '74
    standard, the usual way of getting large programs into memory was
    to use the overlay system. Optimization was done by careful
    selection of section priorities and adjusting the
    'segment-limit'.
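
    For readers who never met it, a hedged sketch of that
    segmentation syntax (program and section names are invented, and
    the feature is obsolete in current standards): sections carry
    priority numbers, and SEGMENT-LIMIT decides which of them stay
    resident and which may be overlaid.

        IDENTIFICATION DIVISION.
        PROGRAM-ID. SEGDEMO.
        ENVIRONMENT DIVISION.
        CONFIGURATION SECTION.
        OBJECT-COMPUTER. GENERIC-BOX
            SEGMENT-LIMIT IS 20.
        PROCEDURE DIVISION.
        MAIN-CONTROL SECTION 10.
        MAIN-PARA.
           *> Priority 10 is below the SEGMENT-LIMIT of 20, so this
           *> section is a permanent segment and stays resident.
            PERFORM YEAR-END-PARA
            STOP RUN.
        YEAR-END-WORK SECTION 60.
        YEAR-END-PARA.
           *> Priorities 50-99 mark independent segments, which the
           *> runtime may overlay and reload on demand.
            DISPLAY "year-end processing would go here".
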
    With CALL and CANCEL it was possible to build large systems by
    having a small core module that called and cancelled each
    application function as required. Certain modules, such as the
    file handler and ADIS (Accept/DISplay), were called into memory
    and initialized (by opening a file and clearing the screen) so
    that they stayed permanently loaded, and then the menu module was
    called. As items were selected by the user from the menu, the
    menu was cancelled and the relevant program called. When that
    program was exited it, in turn, was cancelled and the menu
    recalled (this could be overridden by a program setting 'next
    program' to some other module, such as 'invoice process' setting
    'invoice print' as the next program before returning to the
    menu).
    With CIS COBOL limiting the number of concurrently loaded modules
    to 8, and there being no memory re-organization, if a called
    module needed to call subsidiary module(s) then it was vital to
    ensure that cancels were done in the reverse order of the calls.
    Otherwise the memory could become fragmented and large programs
    would fail to load.
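
    A rough sketch of that kind of small core driver, assuming an
    invented convention that each module hands back the name of the
    program to run next, or spaces to return to the menu:

        IDENTIFICATION DIVISION.
        PROGRAM-ID. CORE.
        DATA DIVISION.
        WORKING-STORAGE SECTION.
        01  WS-CURRENT-PROG        PIC X(8) VALUE "MENU".
        01  WS-NEXT-PROG           PIC X(8) VALUE SPACES.
        PROCEDURE DIVISION.
        MAIN-PARA.
           *> Call one application module at a time and cancel it on
           *> return, so only the driver and one module are resident.
            PERFORM UNTIL WS-CURRENT-PROG = "EXIT"
                CALL WS-CURRENT-PROG USING WS-NEXT-PROG
                CANCEL WS-CURRENT-PROG
                IF WS-NEXT-PROG = SPACES
                    MOVE "MENU" TO WS-CURRENT-PROG
                ELSE
                    MOVE WS-NEXT-PROG TO WS-CURRENT-PROG
                END-IF
                MOVE SPACES TO WS-NEXT-PROG
            END-PERFORM
            STOP RUN.

    A module that calls subsidiary modules of its own would cancel
    them in the reverse order of the calls, as described above,
    before returning to this driver.
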
    With Level II COBOL on multiuser MP/M, Concurrent-CP/M-86 and
    derivatives, and Xenix and such, we had the sheer luxury of
    shared code. The code segments of the run-time and of the modules
    were marked as 'shared' and required only one copy for the
    several different users. The data segments were, of course,
    separate for each user. This allowed for several users on a
    1-Megabyte system.
    But then I suspect that you will be developing for rather more
    resources than we had in those days, so sod it, let the system
    worry about how big your programs are.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Richard@riplin@azonic.co.nz to comp.lang.cobol on Wed Jun 6 14:42:28 2018
    From Newsgroup: comp.lang.cobol

    On Tuesday, June 5, 2018 at 9:30:14 AM UTC+12, docd...@panix.com wrote:
    In article <ab26e399-c4c6-445f-ac2b-e4738daa650e@googlegroups.com>,
    Kellie Fitton <KELLIEFITTON@yahoo.com> wrote:

    [snip]

    1). What is the best optimized method to reduce program memory
    requirements?

    Keep the program small and limited in function. Small programs use less core.


    2). What is the best optimized method to obviate performance
    degradation when calling subprograms Dynamically?

    Keep the programs large and varied in function. Large programs do more
    when they are called into core.

    DD

    Is "core" still a thing? I haven't seen magnetic core memories since the very early 70s and thought that it had gone the way of 'Williams tubes' and 'mercury delay memory'. ;-)

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From docdwarf@docdwarf@panix.com () to comp.lang.cobol on Thu Jun 7 02:20:14 2018
    From Newsgroup: comp.lang.cobol

    In article <e3e974d0-fd77-4284-a21d-d67e8eba6832@googlegroups.com>,
    Richard <riplin@azonic.co.nz> wrote:
    On Tuesday, June 5, 2018 at 9:30:14 AM UTC+12, docd...@panix.com wrote:
    In article <ab26e399-c4c6-445f-ac2b-e4738daa650e@googlegroups.com>,
    Kellie Fitton <KELLIEFITTON@yahoo.com> wrote:

    [snip]

    1). What is the best optimized method to reduce program memory
    requirements?

    Keep the program small and limited in function. Small programs use less
    core.


    2). What is the best optimized method to obviate performance
    degradation when calling subprograms Dynamically?

    Keep the programs large and varied in function. Large programs do more
    when they are called into core.

    DD

    Is "core" still a thing? I haven't seen magnetic core memories since the
    very early 70s and thought that it had gone the way of 'Williams tubes'
    and 'mercury delay memory'. ;-)

    Personally I'm not sure... but I find that when I treat situations as
    though terms like 'core' are still applicable... then Things Work Out
    Good.

    DD
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From pete dashwood@dashwood@enternet.co.nz to comp.lang.cobol on Thu Jun 7 16:48:47 2018
    From Newsgroup: comp.lang.cobol

    On 7/06/2018 2:20 PM, docdwarf@panix.com wrote:
    In article <e3e974d0-fd77-4284-a21d-d67e8eba6832@googlegroups.com>,
    Richard <riplin@azonic.co.nz> wrote:
    On Tuesday, June 5, 2018 at 9:30:14 AM UTC+12, docd...@panix.com wrote:
    In article <ab26e399-c4c6-445f-ac2b-e4738daa650e@googlegroups.com>,
    Kellie Fitton <KELLIEFITTON@yahoo.com> wrote:

    [snip]

    1). What is the best optimized method to reduce program memory
    requirements?

    Keep the program small and limited in function. Small programs use less
    core.


    2). What is the best optimized method to obviate performance
    degradation when calling subprograms Dynamically?

    Keep the programs large and varied in function. Large programs do more
    when they are called into core.

    DD

    Is "core" still a thing? I haven't seen magnetic core memories since the
    very early 70s and thought that it had gone the way of 'Williams tubes'
    and 'mercury delay memory'. ;-)

    Personally I'm not sure... but I find that when I treat situations as
    though terms like 'core' are still applicable... then Things Work Out
    Good.

    DD

    I remember seeing a film in the 1960s where they went through a factory
    in Korea (South... :-)) where Asian people wearing very large glasses
    were working all kinds of hours to string tiny ferrite donuts onto
    lattices of very fine wire to create "core memory".

    There is a perception that Asian people do not have good eyesight; if
    it's true, it may be because a previous generation ruined their eyes
    making core memory for NASA and IBM... :-)

    I still think of memory as "core" even though I perfectly well
    understand the technology used nowadays.

    It seems justifiable if you consider the larger meaning of the word
    "core": being at the heart of things, the essential nucleus, and so on...

    Pete.
    --
    I used to write COBOL; now I can do anything...
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From docdwarf@docdwarf@panix.com () to comp.lang.cobol on Thu Jun 7 13:56:39 2018
    From Newsgroup: comp.lang.cobol

    In article <fnrrpjFdoudU1@mid.individual.net>,
    pete dashwood <dashwood@enternet.co.nz> wrote:
    On 7/06/2018 2:20 PM, docdwarf@panix.com wrote:
    In article <e3e974d0-fd77-4284-a21d-d67e8eba6832@googlegroups.com>,
    Richard <riplin@azonic.co.nz> wrote:
    On Tuesday, June 5, 2018 at 9:30:14 AM UTC+12, docd...@panix.com wrote:
    In article <ab26e399-c4c6-445f-ac2b-e4738daa650e@googlegroups.com>,
    Kellie Fitton <KELLIEFITTON@yahoo.com> wrote:

    [snip]

    1). What is the best optimized method to reduce program memory
    requirements?

    Keep the program small and limited in function. Small programs use less
    core.


    2). What is the best optimized method to obviate performance
    degradation when calling subprograms Dynamically?

    Keep the programs large and varied in function. Large programs do more
    when they are called into core.

    Is "core" still a thing? I haven't seen magnetic core memories since the >>> very early 70s and thought that it had gone the way of 'Williams tubes'
    and 'mercury delay memory'. ;-)

    Personally I'm not sure... but I find that when I treat situations as
    though terms like 'core' are still applicable... then Things Work Out
    Good.

    [snip]

    I still think of memory as "core" even though I perfectly well
    understand the technology used nowadays.

    It seems justifiable if you consider the larger meaning of the word
    "core": being at the heart of things, the essential nucleus, and so on...

    I use terms like 'core' to denote 'the portion of any given computer
    architecture which is currently treating its contents as executable
    instructions'.

    Sure, a few nanoseconds and a swap-out or three later, the same chunk
    of core will contain data being processed by the instructions... but
    the model, overall, is the same: when you're processing statements and
    the data they have been given, you're in core.

    DD
    --- Synchronet 3.20a-Linux NewsLink 1.114