• about some myth on programming

    From fir@profesor.fir@gmail.com to comp.lang.c on Mon Aug 11 01:49:47 2025
    From Newsgroup: comp.lang.c

    some thought - often programming is shown as dealing with zeros and ones,
    but it strikes me that this is in fact untrue. It would be better to say
    that data/storage is made of 0s and 1s, but programming, at its front, in
    fact deals with opcodes (numerical commands) and addresses - and also all
    these microcycles... so in fact code should be presented more like a stream
    of numbers: 90, 3349, 87, 6787, 378236, 736, 23872387 - not bits
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.lang.c on Mon Aug 11 10:29:54 2025

    On 2025-08-10 23:49:47 +0000, fir said:

    some thought - often programming is shown as dealing with zeros and
    ones, but it strikes me that this is in fact untrue. It would be better
    to say that data/storage is made of 0s and 1s, but programming, at its
    front, in fact deals with opcodes (numerical commands) and addresses -
    and also all these microcycles... so in fact code should be presented
    more like a stream of numbers: 90, 3349, 87, 6787, 378236, 736,
    23872387 - not bits

    An early term for "assembler" was "automatic programmer". However, the
    later consensus is that "coding" means writing a text that can be given
    to a computer for compilation or execution, while "programming" includes
    coding and whatever the person doing the coding does before the coding.
    There may be some variation on the latter point if the programmer
    produces something that could be given to someone else for coding.
    --
    Mikko

  • From James Kuyper@jameskuyper@alumni.caltech.edu to comp.lang.c on Mon Aug 11 09:26:15 2025

    On 2025-08-10 23:49:47 +0000, fir said:

    some thought - often programming is shown as dealing with zeros and
    ones, but it strikes me that this is in fact untrue. It would be better
    to say that data/storage is made of 0s and 1s, but programming, at its
    front, in fact deals with opcodes (numerical commands) and addresses -
    and also all these microcycles... so in fact code should be presented
    more like a stream of numbers: 90, 3349, 87, 6787, 378236, 736,
    23872387 - not bits

    My first computer job was with a company that operated computer systems
    specialized for hospitals. This was in the mid-1980s, and the founders
    of the company had previously worked for NASA, where there was typically
    very little room for big computer systems. The system handled
    prescriptions, doctor-prescribed meal menus, and payroll for the entire
    hospital using only 256 KB of memory, two magnetic tape drives, and a
    dozen or so hard-disk drives about 16" in diameter.

    If the system ever needed to be rebooted, there was a documented
    procedure that had to be followed. There was a set of 8 switches on the
    computer that were used to set the bits of one byte. There was another
    switch that caused the address of the byte being set to increment by
    one. We were supposed to use those switches to enter a program listed
    in the procedures manual that was 256 bytes long, and then press
    another switch to start that program running. I'm very glad that I
    didn't stay with that company long enough to be the person responsible
    for doing that.
  • From candycanearter07@candycanearter07@candycanearter07.nomail.afraid to comp.lang.c on Mon Aug 11 19:30:03 2025

    fir <profesor.fir@gmail.com> wrote at 23:49 this Sunday (GMT):
    some thought - often programming is shown as dealing with zeros and ones,
    but it strikes me that this is in fact untrue. It would be better to say
    that data/storage is made of 0s and 1s, but programming, at its front, in
    fact deals with opcodes (numerical commands) and addresses - and also all
    these microcycles... so in fact code should be presented more like a stream
    of numbers: 90, 3349, 87, 6787, 378236, 736, 23872387 - not bits


    Bits are used to represent larger numbers, through binary. Generally,
    bits are grouped into a byte (8 bits) that can represent 0-255, and you
    can add more bytes to represent exponentially larger numbers; the most
    common native width today is 64 bits/8 bytes per number, with a maximum
    unsigned value of about 1.8e19.

    000 - 0
    001 - 1
    010 - 2
    011 - 3
    100 - 4
    101 - 5
    110 - 6
    111 - 7
    1000 - 8
    ... and so on.
    --
    user <candycane> is generated from /dev/urandom
  • From Dan Purgert@dan@djph.net to comp.lang.c on Mon Aug 11 20:36:20 2025

    On 2025-08-10, fir wrote:
    some thought - often programming is shown as dealing with zeros and ones,
    but it strikes me that this is in fact untrue. It would be better to say
    that data/storage is made of 0s and 1s, but programming, at its front, in
    fact deals with opcodes (numerical commands) and addresses - and also all
    these microcycles... so in fact code should be presented more like a stream
    of numbers: 90, 3349, 87, 6787, 378236, 736, 23872387 - not bits

    Opcodes are literally the pattern of transistors that are switched "on"
    (typically represented by '1') or "off" (typically represented by '0')
    in the CPU's instruction register, so that you get the operation you
    want out of the processor (e.g. addition or subtraction in the ALU,
    jumping the program counter to some other address, or whatever other
    operation the CPU supports).

    Likewise, addresses are the same thing -- the physical manifestation
    of {8,16,32,64} transistors that need to be either "on" or "off" in
    order to read a specific amount of data. Perhaps the "memory address"
    is that of a single bit, or a byte, or some multiple thereof.

    Not really sure what a "microcycle" is. Are you using it as terminology
    from when they counted Hertz as "cycles per second"? If so, yes,
    transistors are fast.

    The result of "coding" (or "programming" or whatever you want to call
    it) is a stream of numbers, like this (Intel-format) hex file that will
    blink an LED on an AVR microcontroller (although I don't know offhand
    what specific chip it was compiled for).

    :020000020000FC <-- start of flash memory, $FC is a checksum
    :100000000FEF04B901E005B926E033E14EEA4A9565 < bytecode($65 cksum)
    :10001000F1F73A95E1F72A95D1F705B1009505B9C1 < bytecode($C1 cksum)
    :02002000F3CF1C <-- end of bytecode $1C is checksum
    :00000001FF <-- EOF


    (not that this makes a lot of sense -- I'd have to read up on whether
    the AVR is little-endian or big-endian; not that it really matters
    when writing C for it, though :) )
    --
    |_|O|_|
    |_|_|O| Github: https://github.com/dpurgert
    |O|O|O| PGP: DDAB 23FB 19FA 7D85 1CC1 E067 6D65 70E5 4CE7 2860
  • From fir@profesor.fir@gmail.com to comp.lang.c on Tue Aug 12 13:24:25 2025

    candycanearter07 pisze:
    fir <profesor.fir@gmail.com> wrote at 23:49 this Sunday (GMT):
    some thought - often programming is shown as dealing with zeros and ones,
    but it strikes me that this is in fact untrue. It would be better to say
    that data/storage is made of 0s and 1s, but programming, at its front, in
    fact deals with opcodes (numerical commands) and addresses - and also all
    these microcycles... so in fact code should be presented more like a stream
    of numbers: 90, 3349, 87, 6787, 378236, 736, 23872387 - not bits


    Bits are used to represent larger numbers, through binary. Generally,
    bits are grouped into a byte (8 bits) that can represent 0-255, and you
    can add more bytes to represent exponentially larger numbers; the most
    common native width today is 64 bits/8 bytes per number, with a maximum
    unsigned value of about 1.8e19.

    000 - 0
    001 - 1
    010 - 2
    011 - 3
    100 - 4
    101 - 5
    110 - 6
    111 - 7
    1000 - 8
    ... and so on.

    but saying that coding deals with "0" and "1" is misleading, as bits
    are used to store data.
    when it gets to code, the coder is mostly not interested in the binary
    representation of opcodes unless he is doing something specific - so i
    wouldn't even say that a coder deals in bytes - at ground level he
    deals with a stream of commands (which might be represented by
    numbers/integer values, as a compromise to show how code looks at the
    low level)... it's more like dealing with natural numbers (finite
    values)


  • From Bonita Montero@Bonita.Montero@gmail.com to comp.lang.c on Tue Aug 12 17:12:59 2025

    Am 11.08.2025 um 01:49 schrieb fir:

    some thought - often programming is shown as dealing with zeros and ones,
    but it strikes me that this is in fact untrue. It would be better to say
    that data/storage is made of 0s and 1s, but programming, at its front, in
    fact deals with opcodes (numerical commands) and addresses - and also all
    these microcycles... so in fact code should be presented more like a stream
    of numbers: 90, 3349, 87, 6787, 378236, 736, 23872387 - not bits

    Step away from this level of detail; you don't need it 90% of the time
    you program.

  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Sun Aug 17 17:15:03 2025

    On 11.08.2025 01:49, fir wrote:
    some thought - often programming is shown as dealing with zeros and ones,
    but it strikes me that this is in fact untrue. It would be better to say
    that data/storage is made of 0s and 1s, but programming, at its front, in
    fact deals with opcodes (numerical commands) and addresses - and also all
    these microcycles... so in fact code should be presented more like a stream
    of numbers: 90, 3349, 87, 6787, 378236, 736, 23872387 - not bits

    Your "numbers" here are actually unsigned integers, so there's
    effectively not much difference from bits apart from the number base
    (2 vs. 10).

    Programming doesn't even need to be defined in integral quantities, or
    to assume a von Neumann architecture; it could be real numbers in
    analogue computers, where the program is defined by connecting analogue
    adders, multipliers, and integrators and configuring them with
    potentiometers.

    Hardly anyone seems to recall analogue or hybrid computers these days.
    Let's wait for the quantum computers for yet another principle of
    computing...

    Janis
