• Re: Numeric fields Fujitsu/MF

    From robin.vowels@robin.vowels@gmail.com to comp.lang.cobol on Thu May 31 22:25:07 2018
    From Newsgroup: comp.lang.cobol

    On Monday, December 11, 2017 at 11:50:21 PM UTC+11, JM wrote:
    01 aa pic s999 value -123.


    In Fujitsu the field "aa" has the hex value "31 32 53"; in MF the same field has the hex value "31 32 4C".
    Is there some switch/command so that Fujitsu has the same behavior as MF?

    What do you get when you print the value (with both compilers)?
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Richard@riplin@azonic.co.nz to comp.lang.cobol on Fri Jun 1 14:19:49 2018

    On Tuesday, December 12, 2017 at 1:50:21 AM UTC+13, JM wrote:
    01 aa pic s999 value -123.


    In Fujitsu the field "aa" has the hex value "31 32 53"; in MF the same field has the hex value "31 32 4C".
    Is there some switch/command so that Fujitsu has the same behavior as MF?
    The decimal representation for negative numbers is different between the two compilers, and may be different again for other compilers. This can be overcome by using SIGN SEPARATE.
    I investigated this some years ago with a client that wanted to switch from MF to Fujitsu to avoid the huge run-time licence costs they had in 30 or 40 branch sites.
    It would have been easy enough to do a one time conversion of the data files, but data was being sent from other systems to the branches each day and this was using MF data format. It was necessary to add a conversion step (written in Python) to convert the signed decimal fields.
    There are tables in the appropriate manuals which specify the hex values for these fields, in particular the way the negative is indicated.
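    The conversion step described above might be sketched in Python roughly as follows. The overpunch tables here are inferred from the hex values quoted in this thread (MF −3 = x4C 'L', so the IBM-style '}JKLMNOPQR' set; Fujitsu −3 = x53 'S', so an assumed x50+digit set); the authoritative mappings are the tables in the vendor manuals.

```python
# Sketch: rewrite the trailing sign-overpunch byte of an ASCII signed
# display decimal field from the Micro Focus convention to the Fujitsu one.
# Both tables below are inferred from this thread's hex dumps and are
# assumptions to verify against the compiler manuals.

MF_NEG = "}JKLMNOPQR"        # MF/IBM-style ASCII overpunch for -0 .. -9
FUJITSU_NEG = "PQRSTUVWXY"   # assumed Fujitsu overpunch for -0 .. -9

def mf_to_fujitsu(field: str) -> str:
    """Remap the last byte of a PIC S9(n) display field, if negative."""
    last = field[-1]
    if last in MF_NEG:                       # negative: swap overpunch byte
        digit = MF_NEG.index(last)
        return field[:-1] + FUJITSU_NEG[digit]
    return field                             # plain digits pass through

print(mf_to_fujitsu("12L"))   # MF -123 becomes Fujitsu-style "12S"
```

A positive-overpunch table (if either compiler uses one) would need the same treatment.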
  • From robin.vowels@robin.vowels@gmail.com to comp.lang.cobol on Fri Jun 1 19:29:22 2018

    On Saturday, June 2, 2018 at 7:19:51 AM UTC+10, Richard wrote:
    On Tuesday, December 12, 2017 at 1:50:21 AM UTC+13, JM wrote:
    01 aa pic s999 value -123.


    In Fujitsu the field "aa" has the hex value "31 32 53"; in MF the same field has the hex value "31 32 4C".
    Is there some switch/command so that Fujitsu has the same behavior as MF?

    The decimal representation

    You mean, binary representation.

    for negative numbers is different between the two compilers,
    and may be different again for other compilers.
    This can be overcome by using SIGN SEPARATE.

    Not if the internal representations are still different.
  • From Richard@riplin@azonic.co.nz to comp.lang.cobol on Sat Jun 2 14:42:35 2018

    On Saturday, June 2, 2018 at 2:29:23 PM UTC+12, robin....@gmail.com wrote:
    On Saturday, June 2, 2018 at 7:19:51 AM UTC+10, Richard wrote:
    On Tuesday, December 12, 2017 at 1:50:21 AM UTC+13, JM wrote:
    01 aa pic s999 value -123.


    In Fujitsu the field "aa" has the hex value "31 32 53"; in MF the same field has the hex value "31 32 4C".
    Is there some switch/command so that Fujitsu has the same behavior as MF?

    The decimal representation

    You mean, binary representation.

    Do I really mean that? I think not.

    PIC S999 is _decimal_ numeric (though it could be overridden at the group level).

    HEX "31 32 53" (or 4C) are the ASCII values for the decimal characters '1', '2' and '3' with an extra bit set for negative.

    123 in binary would be hex 7B, with -123 the two's complement of that: hex FF FF 85
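    The distinction being drawn here can be demonstrated directly in Python: the display field holds ASCII digit codes, while a binary (COMP) field would hold the two's-complement value.

```python
import struct

# The display (PIC S999) field holds ASCII digit codes, not a binary number:
display = bytes([0x31, 0x32, 0x53])
print(display.decode("ascii"))        # 12S  (Fujitsu-style overpunch on the last byte)

# A binary (COMP) field would hold the two's-complement value instead:
print(hex(123))                        # 0x7b
print(struct.pack(">i", -123).hex())   # ffffff85  (the FF FF 85 above, in 32 bits)
```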


    for negative numbers is different between the two compilers,
    and may be different again for other compilers.
    This can be overcome by using SIGN SEPARATE.

    Not if the internal representations are still different.

    Do you know any compiler that doesn't use '-' as the separate sign for negative?

  • From robin.vowels@robin.vowels@gmail.com to comp.lang.cobol on Sat Jun 2 17:05:46 2018

    On Sunday, June 3, 2018 at 7:42:36 AM UTC+10, Richard wrote:
    On Saturday, June 2, 2018 at 2:29:23 PM UTC+12, robin....@gmail.com wrote:
    On Saturday, June 2, 2018 at 7:19:51 AM UTC+10, Richard wrote:
    On Tuesday, December 12, 2017 at 1:50:21 AM UTC+13, JM wrote:
    01 aa pic s999 value -123.


    In Fujitsu the field "aa" has the hex value "31 32 53"; in MF the same field has the hex value "31 32 4C".
    Is there some switch/command so that Fujitsu has the same behavior as MF?

    The decimal representation

    You mean, binary representation.

    Do I really mean that? I think not.

    PIC S999 is _decimal_ numeric (though it could be overridden at the group level).

    HEX "31 32 53" (or 4C) are the ASCII values for the decimal characters '1', '2' and '3'

    Precisely. These are BINARY values.

    with an extra bit set for negative.

    123 in binary would be hex 7B, with -123 the two's complement of that: hex FF FF 85

    These are binary also.
  • From Richard@riplin@azonic.co.nz to comp.lang.cobol on Sat Jun 2 21:44:38 2018

    On Sunday, June 3, 2018 at 12:05:47 PM UTC+12, robin....@gmail.com wrote:
    On Sunday, June 3, 2018 at 7:42:36 AM UTC+10, Richard wrote:
    On Saturday, June 2, 2018 at 2:29:23 PM UTC+12, robin....@gmail.com wrote:
    On Saturday, June 2, 2018 at 7:19:51 AM UTC+10, Richard wrote:
    On Tuesday, December 12, 2017 at 1:50:21 AM UTC+13, JM wrote:
    01 aa pic s999 value -123.


    In Fujitsu the field "aa" has the hex value "31 32 53"; in MF the same field has the hex value "31 32 4C".
    Is there some switch/command so that Fujitsu has the same behavior as MF?

    The decimal representation

    You mean, binary representation.

    Do I really mean that? I think not.

    PIC S999 is _decimal_ numeric (though it could be overridden at the group level).

    HEX "31 32 53" (or 4C) are the ASCII values for the decimal characters '1', '2' and '3'

    Precisely. These are BINARY values.

    No. They are ASCII code values. They are arbitrary bit patterns used to represent various symbols.

    with an extra bit set for negative.

    123 in binary would be hex 7B, with -123 the two's complement of that: hex FF FF 85

    These are binary also.

    These _are_ binary values and are quite different and distinct from ASCII codes in form and function.

    You seem to think that _all_ bit patterns are 'binary values'. The bit pattern of Hex '31 32 53' is _not_ a 'binary value' but is a representation in ASCII codes of the _decimal_ value -123.

  • From robin.vowels@robin.vowels@gmail.com to comp.lang.cobol on Sun Jun 3 00:23:13 2018

    On Sunday, June 3, 2018 at 2:44:39 PM UTC+10, Richard wrote:
    On Sunday, June 3, 2018 at 12:05:47 PM UTC+12, r....@gmail.com wrote:
    On Sunday, June 3, 2018 at 7:42:36 AM UTC+10, Richard wrote:
    On Saturday, June 2, 2018 at 2:29:23 PM UTC+12, r....@gmail.com wrote:
    On Saturday, June 2, 2018 at 7:19:51 AM UTC+10, Richard wrote:
    On Tuesday, December 12, 2017 at 1:50:21 AM UTC+13, JM wrote:
    01 aa pic s999 value -123.


    In Fujitsu the field "aa" has the hex value "31 32 53"; in MF the same field has the hex value "31 32 4C".
    Is there some switch/command so that Fujitsu has the same behavior as MF?

    The decimal representation

    You mean, binary representation.

    Do I really mean that? I think not.

    PIC S999 is _decimal_ numeric (though it could be overridden at the group level).

    HEX "31 32 53" (or 4C) are the ASCII values for the decimal characters '1', '2' and '3'

    Precisely. These are BINARY values.

    No. They are ASCII code values. They are arbitrary bit patterns used to represent various symbols.

    It doesn't matter what they are called, they are still binary values.
    ASCII values are binary values. Always have been.

    with an extra bit set for negative.

    123 in binary would be hex 7B, with -123 the two's complement of that: hex FF FF 85

    These are binary also.

    These _are_ binary values and are quite different and distinct from ASCII codes in form and function.

    You seem to think that _all_ bit patterns are 'binary values'.

    They are, because virtually all machines store values in binary or
    binary coded decimal (BCD) which is still binary.

    The bit pattern of Hex '31 32 53' is _not_ a 'binary value' but is a representation in ASCII codes of the _decimal_ value -123.

    They are three binary values, each of 8 binary digits.
  • From pete dashwood@dashwood@enternet.co.nz to comp.lang.cobol on Sun Jun 3 19:52:17 2018

    On 3/06/2018 7:23 PM, robin.vowels@gmail.com wrote:
    On Sunday, June 3, 2018 at 2:44:39 PM UTC+10, Richard wrote:
    On Sunday, June 3, 2018 at 12:05:47 PM UTC+12, r....@gmail.com wrote:
    On Sunday, June 3, 2018 at 7:42:36 AM UTC+10, Richard wrote:
    On Saturday, June 2, 2018 at 2:29:23 PM UTC+12, r....@gmail.com wrote:
    On Saturday, June 2, 2018 at 7:19:51 AM UTC+10, Richard wrote:
    On Tuesday, December 12, 2017 at 1:50:21 AM UTC+13, JM wrote:
    01 aa pic s999 value -123.


    In Fujitsu the field "aa" has the hex value "31 32 53"; in MF the same field has the hex value "31 32 4C".
    Is there some switch/command so that Fujitsu has the same behavior as MF?
    The decimal representation

    You mean, binary representation.

    Do I really mean that? I think not.

    PIC S999 is _decimal_ numeric (though it could be overridden at the group level).

    HEX "31 32 53" (or 4C) are the ASCII values for the decimal characters '1', '2' and '3'

    Precisely. These are BINARY values.

    No. They are ASCII code values. They are arbitrary bit patterns used to represent various symbols.

    It doesn't matter what they are called, they are still binary values.
    ASCII values are binary values. Always have been.

    with an extra bit set for negative.

    123 in binary would be hex 7B, with -123 the two's complement of that: hex FF FF 85

    These are binary also.

    These _are_ binary values and are quite different and distinct from ASCII codes in form and function.

    You seem to think that _all_ bit patterns are 'binary values'.

    They are, because virtually all machines store values in binary or
    binary coded decimal (BCD) which is still binary.

    The bit pattern of Hex '31 32 53' is _not_ a 'binary value' but is a representation in ASCII codes of the _decimal_ value -123.

    They are three binary values, each of 8 binary digits.

    I think the problem here is different meanings of the word "value".

    You (Robin) are correct that a string of bits will always have a representation of some number, which is the number that the bits
    represent when placed in order.

    But Richard is also correct when he says that ASCII codes DON'T
    (necessarily) represent a "number", rather they represent a character
    (or a string of characters) because that is how we choose to interpret
    them.

    The Hex representation 313253 has a "value" of 3224147 in decimal, so
    you could argue that is its "binary value" (but, strictly speaking, a
    "binary value" can only ever be 0 or 1, so that is really a decimal
    value, represented in binary...) It just so happens that the same series
    of bits can also be an ASCII representation of the decimal number -123
    (using the signed overpunch convention.)
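    Both readings of the same three bytes can be checked directly in Python; a small illustration:

```python
raw = bytes([0x31, 0x32, 0x53])

# Read as one big-endian unsigned integer:
print(int.from_bytes(raw, "big"))    # 3224147

# Read as ASCII character codes (signed-overpunch convention on the last byte):
print(raw.decode("ascii"))           # 12S
```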

    It would probably help if we used a different word when interpreting a
    binary string as ASCII (or EBCDIC or any other "coded" value...), than
    when interpreting it as a number...

    Instead of speaking of the "value" of the string, what about its "ASCII representation" or its "numeric value"?

    Pete.
    --
    I used to write COBOL; now I can do anything...
  • From Richard@riplin@azonic.co.nz to comp.lang.cobol on Sun Jun 3 02:37:30 2018

    On Sunday, June 3, 2018 at 7:23:14 PM UTC+12, robin....@gmail.com wrote:
    On Sunday, June 3, 2018 at 2:44:39 PM UTC+10, Richard wrote:
    On Sunday, June 3, 2018 at 12:05:47 PM UTC+12, r....@gmail.com wrote:
    On Sunday, June 3, 2018 at 7:42:36 AM UTC+10, Richard wrote:
    On Saturday, June 2, 2018 at 2:29:23 PM UTC+12, r....@gmail.com wrote:
    On Saturday, June 2, 2018 at 7:19:51 AM UTC+10, Richard wrote:
    On Tuesday, December 12, 2017 at 1:50:21 AM UTC+13, JM wrote:
    01 aa pic s999 value -123.


    In Fujitsu the field "aa" has the hex value "31 32 53"; in MF the same field has the hex value "31 32 4C".
    Is there some switch/command so that Fujitsu has the same behavior as MF?

    The decimal representation

    You mean, binary representation.

    Do I really mean that? I think not.

    PIC S999 is _decimal_ numeric (though it could be overridden at the group level).

    HEX "31 32 53" (or 4C) are the ASCII values for the decimal characters '1', '2' and '3'

    Precisely. These are BINARY values.

    No. They are ASCII code values. They are arbitrary bit patterns used to represent various symbols.

    It doesn't matter what they are called, they are still binary values.
    ASCII values are binary values. Always have been.
    ASCII codes are bit patterns, not 'values'. A 'B' is not more 'valuable' than an 'A'.
    with an extra bit set for negative.

    123 in binary would be hex 7B, with -123 the two's complement of that: hex FF FF 85

    These are binary also.

    These _are_ binary values and are quite different and distinct from ASCII codes in form and function.

    You seem to think that _all_ bit patterns are 'binary values'.

    They are, because virtually all machines store values in binary or
    binary coded decimal (BCD) which is still binary.

    Actually, a particular bit pattern can have a _different_ 'value' depending on whether it is interpreted as binary or as BCD. For example 00010000 has a binary value of 256 and a BCD value of 10. Claiming that everything _is_ a 'binary value' is not useful. 'Value' depends on interpretation.
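    A quick Python check of the two interpretations (which also shows that the plain-binary value of 00010000 is 16, as corrected later in the thread):

```python
bits = 0b00010000

# Plain binary interpretation of the byte:
print(bits)                          # 16

# BCD interpretation: each 4-bit nibble is one decimal digit
high, low = bits >> 4, bits & 0x0F
print(high * 10 + low)               # 10
```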
    The bit pattern of Hex '31 32 53' is _not_ a 'binary value' but is a representation in ASCII codes of the _decimal_ value -123.

    They are three binary values, each of 8 binary digits.
    This is a COBOL discussion. A PIC S999 field in a COBOL program uses ASCII (or other) _decimal_ digits to represent the numeric _value_. Thus the field has a decimal value. Interpreting the bit patterns as a 'binary value' is incorrect and irrelevant, that is _not_ the numeric 'value' of the field.
    As I originally said: Hex 31 32 53 is the _DECIMAL_ representation in ASCII codes of the value -123. It has no other meaningful 'value'.
  • From robin.vowels@robin.vowels@gmail.com to comp.lang.cobol on Sun Jun 3 03:26:07 2018

    On Sunday, June 3, 2018 at 7:37:31 PM UTC+10, Richard wrote:
    On Sunday, June 3, 2018 at 7:23:14 PM UTC+12, robin....@gmail.com wrote:
    On Sunday, June 3, 2018 at 2:44:39 PM UTC+10, Richard wrote:
    On Sunday, June 3, 2018 at 12:05:47 PM UTC+12, r....@gmail.com wrote:
    On Sunday, June 3, 2018 at 7:42:36 AM UTC+10, Richard wrote:
    On Saturday, June 2, 2018 at 2:29:23 PM UTC+12, r....@gmail.com wrote:
    On Saturday, June 2, 2018 at 7:19:51 AM UTC+10, Richard wrote:
    On Tuesday, December 12, 2017 at 1:50:21 AM UTC+13, JM wrote:
    01 aa pic s999 value -123.


    In Fujitsu the field "aa" has the hex value "31 32 53"; in MF the same field has the hex value "31 32 4C".
    Is there some switch/command so that Fujitsu has the same behavior as MF?

    The decimal representation

    You mean, binary representation.

    Do I really mean that? I think not.

    PIC S999 is _decimal_ numeric (though it could be overridden at the group level).

    HEX "31 32 53" (or 4C) are the ASCII values for the decimal characters '1', '2' and '3'

    Precisely. These are BINARY values.

    No. They are ASCII code values. They are arbitrary bit patterns used to represent various symbols.

    It doesn't matter what they are called, they are still binary values.
    ASCII values are binary values. Always have been.

    ASCII codes are bit patterns, not 'values'.
    They are values.
    A 'B' is not more 'valuable' than an 'A'.
    A 'B' ranks higher than an 'A' because the internal binary value of 'B'
    ranks higher than the internal binary value of 'A'.
    with an extra bit set for negative.

    123 in binary would be hex 7B, with -123 the two's complement of that: hex FF FF 85

    These are binary also.

    These _are_ binary values and are quite different and distinct from ASCII codes in form and function.

    You seem to think that _all_ bit patterns are 'binary values'.

    They are, because virtually all machines store values in binary or
    binary coded decimal (BCD) which is still binary.

    Actually, a particular bit pattern can have a _different_ 'value'
    depending on whether it is interpreted as binary or as BCD.
    For example 00010000 has a binary value of 256
    It does? Looks like the binary 8 to me.
    and a BCD value of 10.
    It does? Still looks like the binary value 8 to me.
    Claiming that everything _is_ a 'binary value' is not useful.
    'Value' depends on interpretation.


    The bit pattern of Hex '31 32 53' is _not_ a 'binary value' but is a representation in ASCII codes of the _decimal_ value -123.
    They are three binary values, each of 8 binary digits.

    This is a COBOL discussion. A PIC S999 field in a COBOL program uses ASCII (or other) _decimal_ digits to represent the numeric _value_. Thus the field has a decimal value. Interpreting the bit patterns as a 'binary value' is incorrect and irrelevant, that is _not_ the numeric 'value' of the field.

    As I originally said: Hex 31 32 53 is the _DECIMAL_ representation
    No, it is the hexadecimal representation.
    That's what 'Hex' means. And you wrote the (internal) hexadecimal
    digits corresponding to the external value below (-123).
    in ASCII codes of the value -123. It has no other
    meaningful 'value'.
  • From Richard@riplin@azonic.co.nz to comp.lang.cobol on Sun Jun 3 13:28:10 2018

    On Sunday, June 3, 2018 at 10:26:08 PM UTC+12, robin....@gmail.com wrote:
    On Sunday, June 3, 2018 at 7:37:31 PM UTC+10, Richard wrote:
    On Sunday, June 3, 2018 at 7:23:14 PM UTC+12, robin....@gmail.com wrote:
    On Sunday, June 3, 2018 at 2:44:39 PM UTC+10, Richard wrote:
    On Sunday, June 3, 2018 at 12:05:47 PM UTC+12, r....@gmail.com wrote:
    On Sunday, June 3, 2018 at 7:42:36 AM UTC+10, Richard wrote:
    On Saturday, June 2, 2018 at 2:29:23 PM UTC+12, r....@gmail.com wrote:
    On Saturday, June 2, 2018 at 7:19:51 AM UTC+10, Richard wrote:
    On Tuesday, December 12, 2017 at 1:50:21 AM UTC+13, JM wrote:
    01 aa pic s999 value -123.


    In Fujitsu the field "aa" has the hex value "31 32 53"; in MF the same field has the hex value "31 32 4C".
    Is there some switch/command so that Fujitsu has the same behavior as MF?

    The decimal representation

    You mean, binary representation.

    Do I really mean that? I think not.

    PIC S999 is _decimal_ numeric (though it could be overridden at the group level).

    HEX "31 32 53" (or 4C) are the ASCII values for the decimal characters '1', '2' and '3'

    Precisely. These are BINARY values.

    No. They are ASCII code values. They are arbitrary bit patterns used to represent various symbols.

    It doesn't matter what they are called, they are still binary values. ASCII values are binary values. Always have been.

    ASCII codes are bit patterns, not 'values'.

    They are values.

    A 'B' is not more 'valuable' than an 'A'.

    A 'B' ranks higher than an 'A' because the internal binary value of 'B'
    ranks higher than the internal binary value of 'A'.

    with an extra bit set for negative.

    123 in binary would be hex 7B, with -123 the two's complement of that: hex FF FF 85

    These are binary also.

    These _are_ binary values and are quite different and distinct from ASCII codes in form and function.

    You seem to think that _all_ bit patterns are 'binary values'.

    They are, because virtually all machines store values in binary or
    binary coded decimal (BCD) which is still binary.

    Actually, a particular bit pattern can have a _different_ 'value'
    depending on whether it is interpreted as binary or as BCD.
    For example 00010000 has a binary value of 256

    It does? Looks like the binary 8 to me.
    We _both_ got that wrong, the decimal value of binary 00010000 is 16.
    and a BCD value of 10.

    It does? Still looks like the binary value 8 to me.
    As I said, the 'value' depends on the representation. Many computers have instructions to process BCD or even decimal directly. To these, and to normal humans, that _coded_ bit pattern has a _value_ of 10.
    Claiming that everything _is_ a 'binary value' is not useful.
    'Value' depends on interpretation.


    The bit pattern of Hex '31 32 53' is _not_ a 'binary value' but is a representation in ASCII codes of the _decimal_ value -123.

    They are three binary values, each of 8 binary digits.

    This is a COBOL discussion. A PIC S999 field in a COBOL program uses ASCII (or other) _decimal_ digits to represent the numeric _value_. Thus the field has a decimal value. Interpreting the bit patterns as a 'binary value' is incorrect and irrelevant, that is _not_ the numeric 'value' of the field.

    As I originally said: Hex 31 32 53 is the _DECIMAL_ representation

    No, it is the hexadecimal representation.
    That's what 'Hex' means. And you wrote the (internal) hexadecimal
    digits corresponding to the external value below (-123).

    in ASCII codes of the value -123. It has no other
    meaningful 'value'.
    You have disingenuously broken my statement to suit your own agenda. What I said is that it is "the _DECIMAL_ representation _in_ASCII_codes_ of the value -123". You deliberately broke the quote to argue against "the _DECIMAL_ representation" so that you could ignore the qualification of "in ASCII codes".
    You should argue against what I actually say, and not what you wanted me to have said.
    ASCII is the acronym for American Standard CODES for Information Interchange. BCD is Binary CODED Decimal. These are treated by the computer as _codes_, not 'values'. The 'value' of a field depends on the representation used.
    A byte field with a bit pattern of, say, 01010011 has several different possible 'values'.
    The 'binary value' is 1010011 or decimal 83
    The 'hex value' is x53
    The 'ASCII value' is "S"
    The 'BCD value' is 53
    The (Fujitsu ASCII coded) 'Decimal value' is -3
    The _value_ of the _field_ depends on the representation used. The field 'aa' in the example, with a picture of S999 has a _value_ of -123. It does not have "3 binary values", it has a single value, it has 3 ASCII codes each of which is represented by a bit pattern.
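    The list of possible 'values' for the byte 01010011 can be reproduced in Python. The final overpunch reading assumes the x50+digit negative convention that the Fujitsu hex dump earlier in the thread suggests; it is an assumption, not a documented fact.

```python
b = 0b01010011   # the byte in question, hex 53

print(b)                           # 83   — plain binary value
print(hex(b))                      # 0x53 — hex form
print(chr(b))                      # S    — ASCII interpretation
print((b >> 4) * 10 + (b & 0xF))   # 53   — BCD interpretation
# Overpunch reading (assumed Fujitsu convention: x50 + digit means negative):
print(-(b - 0x50))                 # -3
```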
    In particular 'binary value' is ambiguous. The bit pattern of 10000000 may have a 'binary value' of 128 or of -127.
  • From robin.vowels@robin.vowels@gmail.com to comp.lang.cobol on Wed Jun 6 20:18:54 2018

    On Monday, June 4, 2018 at 6:28:12 AM UTC+10, Richard wrote:
    On Sunday, June 3, 2018 at 10:26:08 PM UTC+12, r.....@gmail.com wrote:
    On Sunday, June 3, 2018 at 7:37:31 PM UTC+10, Richard wrote:
    On Sunday, June 3, 2018 at 7:23:14 PM UTC+12, r.....@gmail.com wrote:
    On Sunday, June 3, 2018 at 2:44:39 PM UTC+10, Richard wrote:
    On Sunday, June 3, 2018 at 12:05:47 PM UTC+12, r....@gmail.com wrote:
    On Sunday, June 3, 2018 at 7:42:36 AM UTC+10, Richard wrote:
    On Saturday, June 2, 2018 at 2:29:23 PM UTC+12, r....@gmail.com wrote:
    On Saturday, June 2, 2018 at 7:19:51 AM UTC+10, Richard wrote:
    On Tuesday, December 12, 2017 at 1:50:21 AM UTC+13, JM wrote:
    01 aa pic s999 value -123.


    In Fujitsu the field "aa" has the hex value "31 32 53"; in MF the same field has the hex value "31 32 4C".
    Is there some switch/command so that Fujitsu has the same behavior as MF?

    The decimal representation

    You mean, binary representation.

    Do I really mean that? I think not.

    PIC S999 is _decimal_ numeric (though it could be overridden at the group level).

    HEX "31 32 53" (or 4C) are the ASCII values for the decimal characters '1', '2' and '3'

    Precisely. These are BINARY values.

    No. They are ASCII code values. They are arbitrary bit patterns used to represent various symbols.

    It doesn't matter what they are called, they are still binary values. ASCII values are binary values. Always have been.

    ASCII codes are bit patterns, not 'values'.

    They are values.

    A 'B' is not more 'valuable' than an 'A'.

    A 'B' ranks higher than an 'A' because the internal binary value of 'B' ranks higher than the internal binary value of 'A'.

    with an extra bit set for negative.

    123 in binary would be hex 7B, with -123 the two's complement of that: hex FF FF 85

    These are binary also.

    These _are_ binary values and are quite different and distinct from ASCII codes in form and function.

    You seem to think that _all_ bit patterns are 'binary values'.

    They are, because virtually all machines store values in binary or binary coded decimal (BCD) which is still binary.

    Actually, a particular bit pattern can have a _different_ 'value' depending on whether it is interpreted as binary or as BCD.
    For example 00010000 has a binary value of 256

    It does? Looks like the binary 8 to me.

    We _both_ got that wrong,
    You did [get it wrong], I didn't. It just so happens that I have been working on a long-term project involving Chinese binary (binary in reverse),
    and without thinking recognised it as 8.
    the decimal value of binary 00010000 is 16.

    and a BCD value of 10.

    It does? Still looks like the binary value 8 to me.

    As I said, the 'value' depends on the representation.
    Many computers have instructions to process BCD or even decimal directly.
    Name one. [Decimal computers went out in the 1950s, or earlier]
    To these, and to normal humans, that _coded_ bit pattern has a _value_ of 10.

    Claiming that everything _is_ a 'binary value' is not useful.
    Well, everything is a binary value.
    'Value' depends on interpretation.

    The bit pattern of Hex '31 32 53' is _not_ a 'binary value' but is a representation in ASCII codes of the _decimal_ value -123.

    They are three binary values, each of 8 binary digits.

    This is a COBOL discussion. A PIC S999 field in a COBOL program uses ASCII (or other) _decimal_ digits to represent the numeric _value_. Thus the field has a decimal value. Interpreting the bit patterns as a 'binary value' is incorrect and irrelevant, that is _not_ the numeric 'value' of the field.

    As I originally said: Hex 31 32 53 is the _DECIMAL_ representation

    No, it is the hexadecimal representation.
    That's what 'Hex' means. And you wrote the (internal) hexadecimal
    digits corresponding to the external value below (-123).

    in ASCII codes of the value -123. It has no other
    meaningful 'value'.

    You have disingenuously broken my statement to suit your own agenda. What I said is that it is "the _DECIMAL_ representation _in_ASCII_codes_ of the value -123". You deliberately broke the quote to argue against "the _DECIMAL_ representation" so that you could ignore the qualification of "in ASCII codes".
    I did nothing of the sort. And I don't like your derogatory statement.
    I did not delete any part of your statement, and I corrected your wrong statement
    at the point where it was wrong.
    You should argue against what I actually say,
    I did.
    and not what you wanted me to have said.
    If I wanted you to say something, it would be that I wanted you to say something that was correct -- which is why I corrected you.
    ASCII is the acronym for American Standard CODES for Information Interchange.
    No, "ASCII [is] abbreviated from American Standard Code for Information Interchange"
    according to Wiki.
    BCD is Binary CODED Decimal.
    I know what ASCII and BCD are. Those acronyms have been around since the 1960s.
    These are treated by the computer as _codes_,
    Not so. The computer treats them as values.
    not 'values'. The 'value' of a field depends on the representation used.
    The value depends on how we wish to interpret the binary value (or values),
    and how the computer has been designed to interpret it (them).

    A byte field with a bit pattern of, say, 01010011 has several different possible 'values'.
    The 'binary value' is 1010011 or decimal 83
    The binary value is 01010011, as it is the content of a byte (8 bits).
    The 'hex value' is x53
    You mean that the hexadecimal form is x53. It is the same value as 01010011.
    The 'ASCII value' is "S"
    If treated as ASCII it represents "S".
    The 'BCD value' is 53
    If treated as BCD, it represents the decimal value 53.
    The (Fujitsu ASCII coded) 'Decimal value' is -3
    Hex, ASCII, and BCD have been around since the 1960s.
    The _value_ of the _field_ depends on the representation used.
    The field 'aa' in the example, with a picture of S999 has a _value_ of -123. It does not have "3 binary values",
    It does; it is made of 3 bytes, each having a binary value.
    it has a single value, it has 3 ASCII codes each of which is represented by a bit pattern.
    It has only 2 ASCII values; the third is not one: it is an encoded byte having
    a sign incorporated with a BCD digit.
    In particular 'binary value' is ambiguous.
    It never is ambiguous.
    The bit pattern of 10000000 may have a 'binary value' of 128 or of -127.
    Or even (minus) zero. It depends on how we want to interpret that binary value, and how the computer has been designed to interpret it.
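    The competing readings of 10000000 mentioned above (unsigned, one's complement, sign-magnitude "minus zero", plus two's complement for completeness) can be tabulated in Python:

```python
bits = 0b10000000   # one byte with only the high bit set

print(bits)                  # 128  — unsigned binary
print(bits - 256)            # -128 — two's complement
print(-(bits ^ 0xFF))        # -127 — one's complement
magnitude = bits & 0x7F      # sign-magnitude: high bit is the sign
print(magnitude)             # 0 — i.e. "minus zero"
```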
  • From Richard@riplin@azonic.co.nz to comp.lang.cobol on Wed Jun 6 20:49:39 2018

    On Thursday, June 7, 2018 at 3:18:55 PM UTC+12, robin....@gmail.com wrote:
    On Monday, June 4, 2018 at 6:28:12 AM UTC+10, Richard wrote:
    On Sunday, June 3, 2018 at 10:26:08 PM UTC+12, r.....@gmail.com wrote:
    On Sunday, June 3, 2018 at 7:37:31 PM UTC+10, Richard wrote:
    On Sunday, June 3, 2018 at 7:23:14 PM UTC+12, r.....@gmail.com wrote:
    On Sunday, June 3, 2018 at 2:44:39 PM UTC+10, Richard wrote:
    On Sunday, June 3, 2018 at 12:05:47 PM UTC+12, r....@gmail.com wrote:
    On Sunday, June 3, 2018 at 7:42:36 AM UTC+10, Richard wrote:
    On Saturday, June 2, 2018 at 2:29:23 PM UTC+12, r....@gmail.com wrote:
    On Saturday, June 2, 2018 at 7:19:51 AM UTC+10, Richard wrote:
    On Tuesday, December 12, 2017 at 1:50:21 AM UTC+13, JM wrote:
    01 aa pic s999 value -123.


    In Fujitsu the field "aa" has the hex value "31 32 53"; in MF the same field has the hex value "31 32 4C".
    Is there some switch/command so that Fujitsu has the same behavior as MF?

    The decimal representation

    You mean, binary representation.

    Do I really mean that? I think not.

    PIC S999 is _decimal_ numeric (though it could be overridden at the group level).

    HEX "31 32 53" (or 4C) are the ASCII values for the decimal characters '1', '2' and '3'

    Precisely. These are BINARY values.

    No. They are ASCII code values. They are arbitrary bit patterns used to represent various symbols.

    It doesn't matter what they are called, they are still binary values. ASCII values are binary values. Always have been.

    ASCII codes are bit patterns, not 'values'.

    They are values.

    A 'B' is not more 'valuable' than an 'A'.

    A 'B' ranks higher than an 'A' because the internal binary value of 'B' ranks higher than the internal binary value of 'A'.

    with an extra bit set for negative.

    123 in binary would be hex 7B, with -123 the complement of that: hex FF FF 85

    These are binary also.

    These _are_ binary values and are quite different and distinct from ASCII codes in form and function.
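    The complement arithmetic is easy to verify in Python, taking the 24-bit width implied by 'FF FF 85':

    ```python
    # Mask -123 to 24 bits to obtain its two's-complement bit pattern.
    width = 24
    pattern = -123 & ((1 << width) - 1)
    print(hex(pattern))                      # 0xffff85

    # Re-sign-extend the pattern to recover the original value.
    restored = pattern - (1 << width) if pattern & (1 << (width - 1)) else pattern
    print(restored)                          # -123
    ```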

    You seem to think that _all_ bit patterns are 'binary values'.

    They are, because virtually all machines store values in binary or binary coded decimal (BCD) which is still binary.

    Actually, a particular bit pattern can have a _different_ 'value' depending on whether it is interpreted as binary or as BCD.
    For example 00010000 has a binary value of 256

    It does? Looks like the binary 8 to me.

    We _both_ got that wrong,

    You did [get it wrong], I didn't. It just so happened that I had been working
    on a long-term project involving Chinese binary (binary in reverse),
    and without thinking recognised it as 8.

    "without thinking" is noted.
    the decimal value of binary 00010000 is 16.

    and a BCD value of 10.

    It does? Still looks like the binary value 8 to me.

    As I said, the 'value' depends on the representation.
    Many computers have instructions to process BCD or even decimal directly.

    Name one. [Decimal computers went out in the 1950s, or earlier]

    Actually they did not 'go out' in the 50s. The ICL System 25 was a decimal machine in the 1980s.
    To these, and to normal humans, that _coded_ bit pattern has a _value_ of 10.
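    The two readings of 00010000 can be spelled out in Python; the same byte yields 16 or 10 depending on whether it is taken as plain binary or as two BCD nibbles:

    ```python
    byte = 0b00010000

    # Plain binary: a weighted sum of powers of two.
    as_binary = byte
    print(as_binary)                         # 16

    # BCD: the high nibble and low nibble are separate decimal digits.
    as_bcd = (byte >> 4) * 10 + (byte & 0x0F)
    print(as_bcd)                            # 10
    ```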

    Claiming that everything _is_ a 'binary value' is not useful.

    Well, everything is a binary value.

    'Value' depends on interpretation.

    The bit pattern of Hex '31 32 53' is _not_ a 'binary value' but a representation in ASCII codes of the _decimal_ value -123.

    They are three binary values, each of 8 binary digits.

    This is a COBOL discussion. A PIC S999 field in a COBOL program uses ASCII (or other) _decimal_ digits to represent the numeric _value_. Thus the field has a decimal value. Interpreting the bit patterns as a 'binary value' is incorrect and irrelevant, that is _not_ the numeric 'value' of the field.
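    How such a field decodes to its value can be sketched in Python. The thread only establishes the bytes for -3 ('S' under Fujitsu, 'L' under MF); the full negative-digit tables below ('P'..'Y', and 'J'..'R' in the EBCDIC-overpunch style) are the usual conventions and should be checked against each compiler's manual:

    ```python
    # Assumed negative-digit tables for the trailing byte of a PIC S999 DISPLAY field.
    FUJITSU_NEG = {chr(ord('P') + d): d for d in range(10)}    # 'P'..'Y' -> -0..-9
    MF_NEG = {chr(ord('J') + d - 1): d for d in range(1, 10)}  # 'J'..'R' -> -1..-9

    def decode_s999(raw, neg_table):
        """Decode an ASCII signed zoned-decimal field with a trailing embedded sign."""
        text = raw.decode('ascii')
        last = text[-1]
        if last in neg_table:
            return -int(text[:-1] + str(neg_table[last]))
        return int(text)

    print(decode_s999(b'\x31\x32\x53', FUJITSU_NEG))   # -123
    print(decode_s999(b'\x31\x32\x4C', MF_NEG))        # -123
    ```

    The same decimal value, two different final bytes: exactly the incompatibility the original question was about.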

    As I originally said: Hex 31 32 53 is the _DECIMAL_ representation

    No, it is the hexadecimal representation.
    That's what 'Hex' means. And you wrote the (internal) hexadecimal
    digits corresponding to the external value below (-123).

    in ASCII codes of the value -123. It has no other
    meaningful 'value'.

    You have disingenuously broken my statement to suit your own agenda. What I said is that it is "the _DECIMAL_ representation _in_ASCII_codes_ of the value -123". You deliberately broke the quote to argue against "the _DECIMAL_ representation" so that you could ignore the qualification of "in ASCII codes".

    I did nothing of the sort. And I don't like your derogatory statement.

    I did not delete any part of your statement,
    I didn't say that you deleted it, I said that you broke it so that you could argue with a part of the statement.
    and I corrected your wrong statement
    at the point where it was wrong.

    You only thought it was 'wrong' because you only argued against the first part without the qualifier. That was disingenuous.
    You should argue against what I actually say,

    I did.

    and not what you wanted me to have said.

    If I wanted you to say something, it would be that I wanted you to say something that was correct -- which is why I corrected you.

    ASCII is the acronym for American Standard CODES for Information Interchange.

    No, "ASCII [is] abbreviated from American Standard Code for Information Interchange"
    according to Wiki.

    BCD is Binary CODED Decimal.

    I know what ASCII and BCD are. Those acronyms have been around since the 1960s.

    These are treated by the computer as _codes_,

    Not so. The computer treats them as values.

    not 'values'. The 'value' of a field depends on the representation used.

    The value depends on how we wish to interpret the binary value (or values), and how the computer has been designed to interpret it (them).

    A byte field with a bit pattern of, say, 01010011 has several different possible 'values'.
    The 'binary value' is 1010011 or decimal 83

    The binary value is 01010011, as it is the content of a byte (8 bits).

    The 'hex value' is x53

    You mean that the hexadecimal form is x53. It is the same value as 01010011.

    The 'ASCII value' is "S"

    If treated as ASCII it represents "S".

    The 'BCD value' is 53

    If treated as BCD, it represents the decimal value 53.

    The (MF ASCII coded) 'Decimal value' is -3
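    Those readings of the single byte x53 can all be produced side by side in Python:

    ```python
    b = 0x53                                  # the bit pattern 01010011

    print(b)                                  # 83 -- read as plain binary
    print(format(b, '02x'))                   # 53 -- the same value in hex digits
    print(chr(b))                             # S  -- read as an ASCII code
    print((b >> 4) * 10 + (b & 0x0F))         # 53 -- read as two BCD digits
    ```

    Note that the binary reading (83) and the BCD reading (53) differ, even though the hex digits happen to look like the BCD result.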

    Hex, ASCII, and BCD have been around since the 1960s.

    The _value_ of the _field_ depends on the representation used.
    The field 'aa' in the example, with a picture of S999, has a _value_ of -123.
    It does not have "3 binary values",

    It does; it is made of 3 bytes, each having a binary value.

    it has a single value; it has 3 ASCII codes, each of which is represented by a bit pattern.

    It has only 2 ASCII values; the third is not an ASCII digit but an encoded byte having
    a sign incorporated with a BCD digit.

    In particular 'binary value' is ambiguous.

    It never is ambiguous.

    The bit pattern of 10000000 may have a 'binary value' of 128 or of -127.

    Or even (minus) zero. It depends on how we want to interpret that binary value,
    and how the computer has been designed to interpret it.
    Which is exactly why it is useless to claim that "everything is a binary value".
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From robin.vowels@robin.vowels@gmail.com to comp.lang.cobol on Thu Jun 7 01:18:19 2018
    From Newsgroup: comp.lang.cobol

    On Thursday, June 7, 2018 at 1:49:40 PM UTC+10, Richard wrote:
    On Thursday, June 7, 2018 at 3:18:55 PM UTC+12, r......@gmail.com wrote:
    On Monday, June 4, 2018 at 6:28:12 AM UTC+10, Richard wrote:

    As I said, the 'value' depends on the representation.
    Many computers have instructions to process BCD or even decimal directly.

    Name one. [Decimal computers went out in the 1950s, or earlier]

    Actually they did not 'go out' in the 50s. The ICL System 25 was a decimal machine in the 1980s.

    The ICL System 25 was a binary machine, with data held in 8-bit bytes,
    with the least-significant 4 bits holding a BCD value.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Bill Gunshannon@bill.gunshannon@gmail.com to comp.lang.cobol on Thu Jun 7 07:23:04 2018
    From Newsgroup: comp.lang.cobol

    On 06/06/2018 11:18 PM, robin.vowels@gmail.com wrote:

    Many computers have instructions to process BCD or even decimal directly.

    Name one. [Decimal computers went out in the 1950s, or earlier]


    IBM System 390

    I am sure there are others.

    bill

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From robin.vowels@robin.vowels@gmail.com to comp.lang.cobol on Thu Jun 7 04:47:09 2018
    From Newsgroup: comp.lang.cobol

    On Thursday, June 7, 2018 at 9:23:06 PM UTC+10, Bill Gunshannon wrote:
    On 06/06/2018 11:18 PM, r.....@gmail.com wrote:

    Many computers have instructions to process BCD or even decimal directly.

    Name one. [Decimal computers went out in the 1950s, or earlier]


    IBM System 390

    It is not a decimal computer. (However, it can handle BCD arithmetic.)
    In a decimal machine, values are represented by, for example, cells
    that can take ten distinct voltages to represent the values from 0 to 9.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Richard@riplin@azonic.co.nz to comp.lang.cobol on Thu Jun 7 13:12:51 2018
    From Newsgroup: comp.lang.cobol

    On Thursday, June 7, 2018 at 8:18:20 PM UTC+12, robin....@gmail.com wrote:
    On Thursday, June 7, 2018 at 1:49:40 PM UTC+10, Richard wrote:
    On Thursday, June 7, 2018 at 3:18:55 PM UTC+12, r......@gmail.com wrote:
    On Monday, June 4, 2018 at 6:28:12 AM UTC+10, Richard wrote:

    As I said, the 'value' depends on the representation.
    Many computers have instructions to process BCD or even decimal directly.

    Name one. [Decimal computers went out in the 1950s, or earlier]

    Actually they did not 'go out' in the 50s. The ICL System 25 was a decimal machine in the 1980s.

    The ICL System 25 was a binary machine, with data held in 8-bit bytes,
    with the least-significant 4 bits holding a BCD value.
    I worked on them when I was at ICL. They followed on from the Singer System 10 (ICL bought SBM). The System 10 had 6-bit characters; for numeric values each character held a single decimal digit. The System 25 was a more modern implementation with 8-bit memory and an extended instruction set. In 'System 10 mode' 6 bits of each byte were used.
    While the lower 4 bits did hold the value, the zone bits were set such that the content was actually the display character code of '0' to '9'. The last character of negative values held an extra zone bit set, so they were 'P' to 'Y'.
    In COBOL terms (and there was COBOL for the System 25) these were 'PIC S9(10) DISPLAY' fields that didn't need conversion to another format for arithmetic operations.
    You should know that 'BCD' _is_ Decimal. When it is held as 2 4-bit digits per byte it is 'Packed Decimal' or 'Packed BCD'. When it is held as one digit per memory unit it is just decimal.
    Notice the phrasing of section 3.1.2 in: http://members.iinet.net.au/~daveb/S10/Architecture%20of%20the%20ICL%20System%2025/Pages%2008%2009.pdf
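    That layout can be sketched in Python; the mapping of a negative trailing digit d to chr(ord('P') + d) is inferred from the 'P' to 'Y' description above, so treat it as an assumption to verify against the System 25 manual:

    ```python
    def encode_s25(value, digits=10):
        """Encode a signed decimal value as System 25-style display characters."""
        text = str(abs(value)).rjust(digits, '0')     # '0'..'9' in every position
        if value < 0:
            # Assumed: the extra zone bit turns the last digit d into 'P' + d.
            text = text[:-1] + chr(ord('P') + int(text[-1]))
        return text

    print(encode_s25(123, 5))     # 00123
    print(encode_s25(-123, 5))    # 0012S
    ```

    The point being that the stored characters are directly displayable decimal digits, not a binary rendering of the value.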
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Richard@riplin@azonic.co.nz to comp.lang.cobol on Thu Jun 7 13:34:34 2018
    From Newsgroup: comp.lang.cobol

    On Thursday, June 7, 2018 at 11:47:10 PM UTC+12, robin....@gmail.com wrote:
    On Thursday, June 7, 2018 at 9:23:06 PM UTC+10, Bill Gunshannon wrote:
    On 06/06/2018 11:18 PM, r.....@gmail.com wrote:

    Many computers have instructions to process BCD or even decimal directly.

    Name one. [Decimal computers went out in the 1950s, or earlier]


    IBM System 390

    It is not a decimal computer. (However, it can handle BCD arithmetic.)
    In a decimal machine, values are represented by, for example, cells
    that can take ten distinct voltages to represent the values from 0 to 9.

    https://en.wikipedia.org/wiki/Decimal_computer

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From robin.vowels@robin.vowels@gmail.com to comp.lang.cobol on Thu Jun 7 19:45:46 2018
    From Newsgroup: comp.lang.cobol

    On Friday, June 8, 2018 at 6:12:52 AM UTC+10, Richard wrote:
    On Thursday, June 7, 2018 at 8:18:20 PM UTC+12, robin....@gmail.com wrote:
    On Thursday, June 7, 2018 at 1:49:40 PM UTC+10, Richard wrote:
    On Thursday, June 7, 2018 at 3:18:55 PM UTC+12, r......@gmail.com wrote:
    On Monday, June 4, 2018 at 6:28:12 AM UTC+10, Richard wrote:

    As I said, the 'value' depends on the representation.
    Many computers have instructions to process BCD or even decimal directly.

    Name one. [Decimal computers went out in the 1950s, or earlier]

    Actually they did not 'go out' in the 50s. The ICL System 25 was a decimal machine in the 1980s.

    The ICL System 25 was a binary machine, with data held in 8-bit bytes,
    with the least-significant 4 bits holding a BCD value.

    I worked on them when I was at ICL. They followed on from the Singer System 10 (ICL bought SBM). The System 10 had 6-bit characters; for numeric values each character held a single decimal digit. The System 25 was a more modern implementation with 8-bit memory and an extended instruction set. In 'System 10 mode' 6 bits of each byte were used.

    While the lower 4 bits did hold the value, the zone bits were set such that the content was actually the display character code of '0' to '9'. The last character of negative values held an extra zone bit set, so they were 'P' to 'Y'.

    In COBOL terms (and there was COBOL for the System 25) these were 'PIC S9(10) DISPLAY' fields that didn't need conversion to another format for arithmetic operations.

    You should know that 'BCD' _is_ Decimal.
    It isn't, and never has been.
    The ENIAC was a decimal machine, with one of ten circuits to represent
    one of the separate digits 0 to 9.
    When it is held as 2 4-bit digits per byte it is 'Packed Decimal'
    or 'Packed BCD'. When it is held as one digit per memory unit it is
    just decimal.
    That still doesn't make it a decimal machine.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Richard@riplin@azonic.co.nz to comp.lang.cobol on Thu Jun 7 20:40:51 2018
    From Newsgroup: comp.lang.cobol

    On Friday, June 8, 2018 at 2:45:47 PM UTC+12, robin....@gmail.com wrote:
    On Friday, June 8, 2018 at 6:12:52 AM UTC+10, Richard wrote:
    On Thursday, June 7, 2018 at 8:18:20 PM UTC+12, robin....@gmail.com wrote:
    On Thursday, June 7, 2018 at 1:49:40 PM UTC+10, Richard wrote:
    On Thursday, June 7, 2018 at 3:18:55 PM UTC+12, r......@gmail.com wrote:
    On Monday, June 4, 2018 at 6:28:12 AM UTC+10, Richard wrote:

    As I said, the 'value' depends on the representation.
    Many computers have instructions to process BCD or even decimal directly.

    Name one. [Decimal computers went out in the 1950s, or earlier]

    Actually they did not 'go out' in the 50s. The ICL System 25 was a decimal machine in the 1980s.

    The ICL System 25 was a binary machine, with data held in 8-bit bytes,
    with the least-significant 4 bits holding a BCD value.

    I worked on them when I was at ICL. They followed on from the Singer System 10 (ICL bought SBM). The System 10 had 6-bit characters; for numeric values each character held a single decimal digit. The System 25 was a more modern implementation with 8-bit memory and an extended instruction set. In 'System 10 mode' 6 bits of each byte were used.

    While the lower 4 bits did hold the value, the zone bits were set such that the content was actually the display character code of '0' to '9'. The last character of negative values held an extra zone bit set, so they were 'P' to 'Y'.

    In COBOL terms (and there was COBOL for the System 25) these were 'PIC S9(10) DISPLAY' fields that didn't need conversion to another format for arithmetic operations.

    You should know that 'BCD' _is_ Decimal.

    It isn't, and never has been.

    It is in the name: the 'D' stands for Decimal!
    The ENIAC was a decimal machine, with one of ten circuits to represent
    one of the separate digits 0 to 9.

    When it is held as 2 4-bit digits per byte it is 'Packed Decimal'
    or 'Packed BCD'. When it is held as one digit per memory unit it is
    just decimal.

    That still doesn't make it a decimal machine.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From robin.vowels@robin.vowels@gmail.com to comp.lang.cobol on Fri Jun 8 00:01:09 2018
    From Newsgroup: comp.lang.cobol

    On Friday, June 8, 2018 at 1:40:53 PM UTC+10, Richard wrote:
    On Friday, June 8, 2018 at 2:45:47 PM UTC+12, robin....@gmail.com wrote:
    On Friday, June 8, 2018 at 6:12:52 AM UTC+10, Richard wrote:
    On Thursday, June 7, 2018 at 8:18:20 PM UTC+12, robin....@gmail.com wrote:
    On Thursday, June 7, 2018 at 1:49:40 PM UTC+10, Richard wrote:
    On Thursday, June 7, 2018 at 3:18:55 PM UTC+12, r......@gmail.com wrote:
    On Monday, June 4, 2018 at 6:28:12 AM UTC+10, Richard wrote:

    As I said, the 'value' depends on the representation.
    Many computers have instructions to process BCD or even decimal directly.

    Name one. [Decimal computers went out in the 1950s, or earlier]

    Actually they did not 'go out' in the 50s. The ICL System 25 was a decimal machine in the 1980s.

    The ICL System 25 was a binary machine, with data held in 8-bit bytes, with the least-significant 4 bits holding a BCD value.

    I worked on them when I was at ICL. They followed on from the Singer System 10 (ICL bought SBM). The System 10 had 6-bit characters; for numeric values each character held a single decimal digit. The System 25 was a more modern implementation with 8-bit memory and an extended instruction set. In 'System 10 mode' 6 bits of each byte were used.

    While the lower 4 bits did hold the value, the zone bits were set such that the content was actually the display character code of '0' to '9'. The last character of negative values held an extra zone bit set, so they were 'P' to 'Y'.

    In COBOL terms (and there was COBOL for the System 25) these were 'PIC S9(10) DISPLAY' fields that didn't need conversion to another format for arithmetic operations.

    You should know that 'BCD' _is_ Decimal.

    It isn't, and never has been.

    It is in the name: the 'D' stands for Decimal!
    And what do you think 'B' stands for? Could it be BINARY?
    The data is held in BINARY, not decimal.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Richard@riplin@azonic.co.nz to comp.lang.cobol on Fri Jun 8 12:23:26 2018
    From Newsgroup: comp.lang.cobol

    On Friday, June 8, 2018 at 7:01:10 PM UTC+12, robin....@gmail.com wrote:
    On Friday, June 8, 2018 at 1:40:53 PM UTC+10, Richard wrote:
    On Friday, June 8, 2018 at 2:45:47 PM UTC+12, robin....@gmail.com wrote:
    On Friday, June 8, 2018 at 6:12:52 AM UTC+10, Richard wrote:
    On Thursday, June 7, 2018 at 8:18:20 PM UTC+12, robin....@gmail.com wrote:
    On Thursday, June 7, 2018 at 1:49:40 PM UTC+10, Richard wrote:
    On Thursday, June 7, 2018 at 3:18:55 PM UTC+12, r......@gmail.com wrote:
    On Monday, June 4, 2018 at 6:28:12 AM UTC+10, Richard wrote:

    As I said, the 'value' depends on the representation.
    Many computers have instructions to process BCD or even decimal directly.

    Name one. [Decimal computers went out in the 1950s, or earlier]

    Actually they did not 'go out' in the 50s. The ICL System 25 was a decimal machine in the 1980s.

    The ICL System 25 was a binary machine, with data held in 8-bit bytes, with the least-significant 4 bits holding a BCD value.

    I worked on them when I was at ICL. They followed on from the Singer System 10 (ICL bought SBM). The System 10 had 6-bit characters; for numeric values each character held a single decimal digit. The System 25 was a more modern implementation with 8-bit memory and an extended instruction set. In 'System 10 mode' 6 bits of each byte were used.

    While the lower 4 bits did hold the value, the zone bits were set such that the content was actually the display character code of '0' to '9'. The last character of negative values held an extra zone bit set, so they were 'P' to 'Y'.

    In COBOL terms (and there was COBOL for the System 25) these were 'PIC S9(10) DISPLAY' fields that didn't need conversion to another format for arithmetic operations.

    You should know that 'BCD' _is_ Decimal.

    It isn't, and never has been.

    It is in the name: the 'D' stands for Decimal!

    And what do you think 'B' stands for? Could it be BINARY?

    The data is held in BINARY, not decimal.
    If one has a value in a field of, say, 54326, then if the field was binary the "data is held" as '0b000000001101010000110110'. If the field is packed BCD then the "data is held" as '0b000001010100001100100110', which is _completely_ different (but the same numeric value). It may well be that each decimal digit is _coded_ as a binary bit pattern (that is what the acronym specifies) but the "data is held" in DECIMAL and is quite different from the data being held in binary (as the example shows).
    The point about a decimal instruction set in a computer is that it can do arithmetic operations on "data held in" decimal fields without having to convert to "data held in" binary. (the quotes are to emphasize that I use your terminology). The operations are quite different.
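    The two layouts for 54326 can be generated in Python to make the contrast concrete:

    ```python
    value = 54326

    # "Data held in" binary: one base-2 number across three bytes.
    as_binary = value.to_bytes(3, 'big')
    print(''.join(format(b, '08b') for b in as_binary))
    # 000000001101010000110110

    # "Data held in" packed BCD: one 4-bit nibble per decimal digit (054326).
    digits = str(value).rjust(6, '0')
    packed = bytes((int(digits[i]) << 4) | int(digits[i + 1])
                   for i in range(0, 6, 2))
    print(''.join(format(b, '08b') for b in packed))
    # 000001010100001100100110
    ```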
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From pete dashwood@dashwood@enternet.co.nz to comp.lang.cobol on Sat Jun 9 12:20:27 2018
    From Newsgroup: comp.lang.cobol

    On 9/06/2018 7:23 AM, Richard wrote:
    <snipped>

    If one has a value in a field of, say, 54326, then if the field was binary the "data is held" as '0b000000001101010000110110'. If the field is packed BCD then the "data is held" as '0b000001010100001100100110', which is _completely_ different (but the same numeric value). It may well be that each decimal digit is _coded_ as a binary bit pattern (that is what the acronym specifies) but the "data is held" in DECIMAL and is quite different from the data being held in binary (as the example shows).

    Can you explain what the significance of the letter b is in the above?
    Is this hex? (Why would one position in the string be "shorthanded" to hex...?) I don't see how a binary string can contain anything other than
    1s or 0s...

    The point about a decimal instruction set in a computer is that it can do arithmetic operations on "data held in" decimal fields without having to convert to "data held in" binary. (the quotes are to emphasize that I use your terminology). The operations are quite different.


    This whole argument comes down to the difference between "coded representation" of binary and "numerical value" of binary.

    "binary value" can only ever be 1 or 0 (Anything other is not, by
    definition, a binary value.)

    As for decimal computers and binary computers, the actual arithmetic is
    done by hardware units that can ONLY recognize binary (1 digit adders).

    On early IBM System 360s, the "Decimal instruction set" was an OPTION
    for people who wanted to use the system for business processing rather
    than the scientific processing it was originally designed for. (Later on
    it was provided as standard for systems being sold into commercial environments. It was implemented on mylar cards called ROS (Read Only Storage), which today we would call "firmware".)


    Extended instruction sets to handle "Packed Decimal" (a form of BCD)
    still (at the lowest level) manipulated the individual bits because
    that's the only form of "coding" that electricity can recognize (current
    flows or it doesn't; a core is magnetized North or South).

    Some interesting points have emerged in the course of this argument but
    it really is going nowhere because there is nowhere for it to go.

    Pete.
    --
    I used to write COBOL; now I can do anything...
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Richard@riplin@azonic.co.nz to comp.lang.cobol on Fri Jun 8 19:51:55 2018
    From Newsgroup: comp.lang.cobol

    On Saturday, June 9, 2018 at 12:20:34 PM UTC+12, pete dashwood wrote:
    On 9/06/2018 7:23 AM, Richard wrote:
    <snipped>

    If one has a value in a field of, say, 54326, then if the field was binary the "data is held" as '0b000000001101010000110110'. If the field is packed BCD then the "data is held" as '0b000001010100001100100110', which is _completely_ different (but the same numeric value). It may well be that each decimal digit is _coded_ as a binary bit pattern (that is what the acronym specifies) but the "data is held" in DECIMAL and is quite different from the data being held in binary (as the example shows).

    Can you explain what the significance of the letter b is in the above?
    It stands for 'binary'; it is to distinguish it from, say, hex, which has a 0x prefix, or octal, which has just a leading 0.
    Is this hex? (Why would one position in the string be "shorthanded" to hex...?) I don't see how a binary string can contain anything other than
    1s or 0s...

    The point about a decimal instruction set in a computer is that it can do arithmetic operations on "data held in" decimal fields without having to convert to "data held in" binary. (the quotes are to emphasize that I use your terminology). The operations are quite different.


    This whole argument comes down to the difference between "coded representation" of binary and "numerical value" of binary.

    "binary value" can only ever be 1 or 0 (Anything other is not, by definition, a binary value.)
    You will notice that the topic is 'Numeric Fields'. The _value_ of a numeric field is most usefully expressed as a number in common notation. They may have a particular bit pattern but expressing that as a string of 1s and 0s is less useful, especially when the bytes may be big-endian, little-endian or big-little-endian.

    As for decimal computers and binary computers, the actual arithmetic is
    done by hardware units that can ONLY recognize binary (1 digit adders).

    No. That is wrong. The hardware unit may work directly on BCD, packed or unpacked. With binary fields the overflow is simply from 1 bit to the next (working from right to left); with BCD fields the overflow has to be done in decimal from one nibble or byte to the next. That is, the hardware recognizes decimal.
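    A decimal (nibble-to-nibble) carry of that kind can be sketched in Python for one packed-BCD byte; a plain binary add of the same bytes produces the wrong digits:

    ```python
    def bcd_add_byte(a, b):
        """Add two packed-BCD bytes, carrying in decimal between nibbles."""
        result, carry = 0, 0
        for shift in (0, 4):
            s = ((a >> shift) & 0x0F) + ((b >> shift) & 0x0F) + carry
            carry, digit = divmod(s, 10)
            result |= digit << shift
        return result        # the carry out of the byte is dropped in this sketch

    print(format(bcd_add_byte(0x47, 0x38), '02x'))   # 85, i.e. 47 + 38 in decimal
    print(format(0x47 + 0x38, '02x'))                # 7f, the raw binary sum
    ```

    This is essentially the adjustment that decimal instruction sets (or the old DAA-style instructions) perform in hardware.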
    Strict binary (8421) is not the only option for Decimal. I actually have some decimal machines here that use Excess-3 (https://en.wikipedia.org/wiki/Excess-3). These are ICL 1500 Series (not to be confused with the ICT 1500 Series of the early 1960s) that were also from Singer and were originally designed and built by Cogar in the mid 70s and sold by ICL until the early 80s. The replacement was the DRS20 (I have some of those too) with a 'retained mode' processor board (... and those) that was powered by AMD Am2901 'bit-slice' CPUs and also used Excess-3.
    I would suggest that those who insist that 'everything is binary' have no idea about Excess-3 or the other bit-patterns used for decimal.
    https://en.wikipedia.org/wiki/Binary-coded_decimal#Basics
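    Excess-3 is simple to demonstrate: each decimal digit d is stored as the 4-bit pattern d + 3, so no digit ever uses 0000 or 1111:

    ```python
    def to_excess3(n):
        """Encode a non-negative integer, one Excess-3 nibble per decimal digit."""
        return ' '.join(format(int(d) + 3, '04b') for d in str(n))

    def from_excess3(code):
        """Decode a space-separated string of Excess-3 nibbles."""
        return int(''.join(str(int(g, 2) - 3) for g in code.split()))

    print(to_excess3(1982))                      # 0100 1100 1011 0101
    print(from_excess3('0100 1100 1011 0101'))   # 1982
    ```

    Note that the nibble for 9 is 1100, not 1001, which is exactly why "everything is 8421 binary" does not hold for these machines.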
    Just to keep these machines on topic for comp.lang.cobol these did actually have a COBOL compiler. The two guys that created Microfocus had previously worked for ICL DataSkill in Reading where they wrote a COBOL compiler and run-time - I have a manual for that.
    The later CIS COBOL from Microfocus had some remarkable similarities - strange that.
    On early IBM System 360s, the "Decimal instruction set" was an OPTION
    for people who wanted to use the system for business processing rather
    than the scientific processing it was originally designed for. (Later on
    it was provided as standard for systems being sold into commercial environments. It was implemented on mylar cards called ROS (Read Only Storage), which today we would call "firmware".)

    Usually, what is done in hardware in the larger, faster models may be done in firmware on the low-end models because it is cheaper (and slower).
    Regardless of whether it is direct hardware or firmware it is part of the instruction set on the 360, 370, and later derivatives.
    On the System 10 and ICL 1500 it is the _only_ instruction set and is implemented in hardware.

    Extended instruction sets to handle "Packed Decimal" (a form of BCD)
    still (at the lowest level) manipulated the individual bits because
    that's the only form of "coding" that electricity can recognize (current flows or it doesn't; a core is magnetized North or South).

    Some interesting points have emerged in the course of this argument but
    it really is going nowhere because there is nowhere for it to go.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From robin.vowels@robin.vowels@gmail.com to comp.lang.cobol on Fri Jun 8 20:12:41 2018
    From Newsgroup: comp.lang.cobol

    On Saturday, June 9, 2018 at 5:23:28 AM UTC+10, Richard wrote:
    On Friday, June 8, 2018 at 7:01:10 PM UTC+12, r......@gmail.com wrote:
    On Friday, June 8, 2018 at 1:40:53 PM UTC+10, Richard wrote:
    On Friday, June 8, 2018 at 2:45:47 PM UTC+12, r......@gmail.com wrote:
    On Friday, June 8, 2018 at 6:12:52 AM UTC+10, Richard wrote:
    On Thursday, June 7, 2018 at 8:18:20 PM UTC+12, r......@gmail.com wrote:
    On Thursday, June 7, 2018 at 1:49:40 PM UTC+10, Richard wrote:
    On Thursday, June 7, 2018 at 3:18:55 PM UTC+12, r......@gmail.com wrote:
    On Monday, June 4, 2018 at 6:28:12 AM UTC+10, Richard wrote:

    As I said, the 'value' depends on the representation.
    Many computers have instructions to process BCD or even decimal directly.

    Name one. [Decimal computers went out in the 1950s, or earlier]

    Actually they did not 'go out' in the 50s. The ICL System 25 was a decimal machine in the 1980s.

    The ICL System 25 was a binary machine, with data held in 8-bit bytes,
    with the least-significant 4 bits holding a BCD value.

    I worked on them when I was at ICL. They followed on from the Singer System 10 (ICL bought SBM). The System 10 had 6-bit characters; for numeric values each character held a single decimal digit. The System 25 was a more modern implementation with 8-bit memory and an extended instruction set. In 'System 10 mode' 6 bits of each byte were used.

    While the lower 4 bits did hold the value, the zone bits were set such that the content was actually the display character code of '0' to '9'. The last character of negative values held an extra zone bit set, so they were 'P' to 'Y'.

    In COBOL terms (and there was COBOL for the System 25) these were 'PIC S9(10) DISPLAY' fields that didn't need conversion to another format for arithmetic operations.

    You should know that 'BCD' _is_ Decimal.

    It isn't, and never has been.

    It is in the name: the 'D' stands for Decimal!

    And what do you think 'B' stands for? Could it be BINARY?

    The data is held in BINARY, not decimal.

    If one has a value in a field of, say, 54326, then if the field was binary the "data is held" as '0b000000001101010000110110'. If the field is packed BCD then the "data is held" as '0b000001010100001100100110', which is _completely_ different (but the same numeric value). It may well be that each decimal digit is _coded_ as a binary bit pattern (that is what the acronym specifies) but the "data is held" in DECIMAL and is quite different from the data being held in binary (as the example shows).
    To understand the difference between binary and binary coded decimal,
    you need to read a good computer hardware reference.
    The point about a decimal instruction set in a computer is that it can do arithmetic operations on "data held in" decimal fields
    Such data is held in BINARY.
    without having to convert to "data held in" binary. (the quotes are to emphasize that I use your terminology). The operations are quite different.
    See above.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From robin.vowels@robin.vowels@gmail.com to comp.lang.cobol on Fri Jun 8 20:25:47 2018
    From Newsgroup: comp.lang.cobol

    On Saturday, June 9, 2018 at 12:51:57 PM UTC+10, Richard wrote:
    On Saturday, June 9, 2018 at 12:20:34 PM UTC+12, pete dashwood wrote:
    On 9/06/2018 7:23 AM, Richard wrote:
    <snipped>

    If one has a value in a field of, say, 54326, then if the field was binary the "data is held" as '0b000000001101010000110110'. If the field is packed BCD then the "data is held" as '0b000001010100001100100110', which is _completely_ different (but the same numeric value). It may well be that each decimal digit is _coded_ as a binary bit pattern (that is what the acronym specifies) but the "data is held" in DECIMAL and is quite different from the data being held in binary (as the example shows).

    Can you explain what the significance of the letter b is in the above?

    It stands for 'binary'; it is to distinguish it from, say, hex, which has a 0x prefix, or octal, which has just a leading 0.

    Is this hex? (Why would one position in the string be "shorthanded" to hex...?) I don't see how a binary string can contain anything other than 1s or 0s...


    The point about a decimal instruction set in a computer is that it can do arithmetic operations on "data held in" decimal fields without having to convert to "data held in" binary. (the quotes are to emphasize that I use your terminology). The operations are quite different.


    This whole argument comes down to the difference between "coded representation" of binary and "numerical value" of binary.

    "binary value" can only ever be 1 or 0 (Anything other is not, by definition, a binary value.)

    You will notice that the topic is 'Numeric Fields'. The _value_ of a numeric field is most usefully expressed as a number in common notation. They may have a particular bit pattern but expressing that as a string of 1s and 0s is less useful, especially when the bytes may be big-endian, little-endian or big-little-endian.

    As for decimal computers and binary computers, the actual arithmetic is done by hardware units that can ONLY recognize binary (1-digit adders).
    No. That is wrong.
    You are wrong. The statement is correct.
    The hardware unit may work directly on BCD packed or unpacked.
    The data is still ones and zeros, i.e., binary.
    With binary fields the overflow is simply from 1 bit to the next (working from right to left), with BCD fields the overflow has to be done in decimal from one nibble or byte to the next. That is, the hardware recognizes decimal.
    Strict binary (8421) is not the only option for Decimal. I actually have some decimal machines here that use Excess-3 (https://en.wikipedia.org/wiki/Excess-3). These are ICL 1500 Series (not to be confused with the ICT 1500 Series of the early 1960s) that were also from Singer and were originally designed and built by Cogar in the mid 70s and sold by ICL until the early 80s. The replacement was the DRS20 (I have some of those too) with a 'retained mode' processor board (... and those) that was powered by Motorola 2901 'bit-slice' CPUs and also used Excess-3.

    I would suggest that those who insist that 'everything is binary' have no idea about Excess-3 or the other bit-patterns used for decimal.
    Excess-3 and other representations have been around since the 1960s,
    and are well-known. Those conventions facilitated the hardware design
    to deal with carries. But the representations were in binary,
    and the arithmetic was done in binary.
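    [Editor's note: for readers unfamiliar with Excess-3, the correction step being argued about here can be sketched in a few lines. This is an illustration only, not code from the thread: each digit is coded as digit + 3, a plain binary add is performed on the codes, and the result is re-biased by +3 or -3 depending on the carry.]

    ```python
    # Sketch: Excess-3 digit coding (digit + 3 in four bits) and the
    # correction a decimal adder applies after a plain binary add.

    def xs3(d):
        """Excess-3 code for one decimal digit."""
        return d + 3

    def xs3_add_digit(a, b, carry_in=0):
        """Add decimal digits a and b; return (carry_out, Excess-3 sum code)."""
        s = xs3(a) + xs3(b) + carry_in      # plain binary add of the codes
        carry_out = 1 if s > 15 else 0      # overflow out of the nibble
        s &= 0xF
        s = s + 3 if carry_out else s - 3   # re-bias into Excess-3
        return carry_out, s

    print(xs3_add_digit(7, 8))  # (1, 8): carry 1, digit 5 coded as 8
    ```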
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From pete dashwood@dashwood@enternet.co.nz to comp.lang.cobol on Sat Jun 9 16:00:29 2018
    From Newsgroup: comp.lang.cobol

    On 9/06/2018 3:25 PM, robin.vowels@gmail.com wrote:
    On Saturday, June 9, 2018 at 12:51:57 PM UTC+10, Richard wrote:
    On Saturday, June 9, 2018 at 12:20:34 PM UTC+12, pete dashwood wrote:
    On 9/06/2018 7:23 AM, Richard wrote:
    <snipped>

    If one has a value in a field of, say, 54326 then if the field was binary the "data is held" as '0b000000001101010000110110'. If the field is packed BCD then the "data is held" as '0b000001010100001100100110' which is _completely_ different (but the same numeric value). It may well be that each decimal digit is _coded_ as a binary bit pattern (that is what the acronym specifies) but the "data is held" in DECIMAL and is quite different from the data being held in binary (as the example shows).

    Can you explain what the significance of the letter b is in the above?

    It stands for 'bits', it is to distinguish it from, say, hex that has a 0x or octal with just a 0.

    Is this hex? (Why would one position in the string be "shorthanded" to
    hex...?) I don't see how a binary string can contain anything other than 1s or 0s...


    The point about a decimal instruction set in a computer is that it can do arithmetic operations on "data held in" decimal fields without having to convert to "data held in" binary. (the quotes are to emphasize that I use your terminology). The operations are quite different.


    This whole argument comes down to the difference between "coded
    representation" of binary and "numerical value" of binary.

    "binary value" can only ever be 1 or 0 (Anything other is not, by
    definition, a binary value.)

    You will notice that the topic is 'Numeric Fields'. The _value_ of a numeric field is most usefully expressed as a number in common notation. They may have a particular bit pattern but expressing that as a string of 1s and 0s is less useful, especially when the bytes may be big-endian, little-endian or big-little-endian.

    Sure, I understand that.

    As for decimal computers and binary computers, the actual arithmetic is
    done by hardware units that can ONLY recognize binary (1-digit adders).

    No. That is wrong.

    You are wrong. The statement is correct.

    I think it HAS to be done in binary as I noted. However, after reading Richard's response I did some searching and, although I found some very interesting algorithms for converting between binary and BCD (some of
    which could certainly be implemented in hard/firm ware...) I was unable
    to find anything that actually implemented BCD WITHOUT accepting it as
    BINARY digits (bits) and processing it as BINARY digits (at the lowest
    level.) (You can have an INTERFACE to a process that sees it as decimal,
    so anybody dealing with it is effectively dealing with decimal, but when
    it comes to actually manipulating it, it is done one bit at a time, as I noted. I therefore remain unconvinced that my statement was wrong, and
    stand by it. :-))
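    [Editor's note: one family of the hardware-friendly conversion algorithms alluded to here is "double dabble" (shift-and-add-3), which builds packed BCD from a binary value using nothing but shifts, masks, and small adds. A Python sketch for illustration, not code from any compiler discussed in the thread.]

    ```python
    # Sketch: "double dabble" (shift-and-add-3), a binary-to-BCD
    # conversion that maps naturally onto shift-register hardware.

    def double_dabble(n, digits=6):
        """Convert a non-negative binary integer to packed BCD."""
        bcd = 0
        for i in range(n.bit_length() - 1, -1, -1):
            # adjust: add 3 to any BCD nibble that is 5 or more
            for shift in range(0, 4 * digits, 4):
                if (bcd >> shift) & 0xF >= 5:
                    bcd += 3 << shift
            bcd = (bcd << 1) | ((n >> i) & 1)  # shift in the next bit of n
        return bcd

    print(hex(double_dabble(54326)))  # 0x54326
    ```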

    The hardware unit may work directly on BCD packed or unpacked.

    The data is still ones and zeros, i.e., binary.

    That is also my understanding of it.

    With binary fields the overflow is simply from 1 bit to the next (working from right to left), with BCD fields the overflow has to be done in decimal from one nibble or byte to the next. That is, the hardware recognizes decimal.

    Strict binary (8421) is not the only option for Decimal. I actually have some decimal machines here that use Excess-3 (https://en.wikipedia.org/wiki/Excess-3). These are ICL 1500 Series (not the be confused with the ICT 1500 Series of the early 1960s) that were also from Singer and were originally designed and built by Cogar in the mid 70s and sold by ICL until the early 80s. The replacement was the DRS20 (I have some of those too) with a 'retained mode' processor board (... and those) that was powered by Motorola 2901 'bit-slice' CPUs and also used Excess-3.

    I would suggest that those who insist that 'everything is binary' have no idea about Excess-3 or the other bit-patterns used for decimal.

    Richard was correct; I have never heard of Excess-3 (despite having
    worked with some of the hardware he mentioned) so I had no idea about
    it. I therefore looked it up and it turns out to be another coding for
    BCD. BUT... it is still based on bits and will be processed at a bit
    level in the heart of the CPU.

    Excess-3 and other representations have been around since the 1960s,
    and are well-known. Those conventions faciliated the hardware design
    to deal with carries. But the representations were in binary,
    and the arithmetic was done in binary.

    I can't find anything that contradicts your statement, so I agree. :-)

    For myself, I don't see this thread really achieving anything.

    If someone wants to think a machine processes decimal or octal or hex or
    base 64, it doesn't really matter, so long as the correct results are obtained.

    I see it as all being binary digits at the hardware level, (the other numbering systems are actually human encoding systems to provide a
    "shorthand" for writing long, tedious, and error-prone, sets of
    bits...), but that may just be the way I choose to visualize it; I would
    not say someone else was "wrong" if they didn't see it that way (as long
    as their code gave the right results :-)).

    The way in which we perceive the amazing machines we work with may well
    be subjective. Fortunately, the languages we use allow us to
    communicate with other programmers objectively.

    Pete.
    --
    I used to write COBOL; now I can do anything...
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From pete dashwood@dashwood@enternet.co.nz to comp.lang.cobol on Sat Jun 9 16:09:13 2018
    From Newsgroup: comp.lang.cobol

    On 9/06/2018 3:25 PM, robin.vowels@gmail.com wrote:
    <snipped>

    Can you explain what the significance of the letter b is in the above?

    It stands for 'bits', it is to distinguish it from, say, hex that has a 0x or octal with just a 0.

    Thank you, Robin.

    I have never come across that before but it makes sense now you have
    explained it.

    I checked the C# language docs and, sure enough, it is there... I just
    never used it.

    Pete.
    --
    I used to write COBOL; now I can do anything...
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Robert Wessel@robertwessel2@yahoo.com to comp.lang.cobol on Fri Jun 8 23:30:49 2018
    From Newsgroup: comp.lang.cobol

    On Sat, 9 Jun 2018 16:00:29 +1200, pete dashwood
    <dashwood@enternet.co.nz> wrote:

    On 9/06/2018 3:25 PM, robin.vowels@gmail.com wrote:
    On Saturday, June 9, 2018 at 12:51:57 PM UTC+10, Richard wrote:
    On Saturday, June 9, 2018 at 12:20:34 PM UTC+12, pete dashwood wrote:
    On 9/06/2018 7:23 AM, Richard wrote:
    <snipped>

    If one has a value in a field of, say, 54326 then if the field was binary the "data is held" as '0b000000001101010000110110'. If the field is packed BCD then the "data is held" as '0b000001010100001100100110' which is _completely_ different (but the same numeric value). It may well be that each decimal digit is _coded_ as a binary bit pattern (that is what the acronym specifies) but the "data is held" in DECIMAL and is quite different from the data being held in binary (as the example shows).

    Can you explain what the significance of the letter b is in the above?

    It stands for 'bits', it is to distinguish it from, say, hex that has a 0x or octal with just a 0.

    Is this hex? (Why would one position in the string be "shorthanded" to hex...?) I don't see how a binary string can contain anything other than 1s or 0s...


    The point about a decimal instruction set in a computer is that it can do arithmetic operations on "data held in" decimal fields without having to convert to "data held in" binary. (the quotes are to emphasize that I use your terminology). The operations are quite different.


    This whole argument comes down to the difference between "coded
    representation" of binary and "numerical value" of binary.

    "binary value" can only ever be 1 or 0 (Anything other is not, by
    definition, a binary value.)

    You will notice that the topic is 'Numeric Fields'. The _value_ of a numeric field is most usefully expressed as a number in common notation. They may have a particular bit pattern but expressing that as a string of 1s and 0s is less useful, especially when the bytes may be big-endian, little-endian or big-little-endian.

    Sure, I understand that.

    As for decimal computers and binary computers, the actual arithmetic is done by hardware units that can ONLY recognize binary (1-digit adders).

    No. That is wrong.

    You are wrong. The statement is correct.

    I think it HAS to be done in binary as I noted. However, after reading Richard's response I did some searching and, although I found some very interesting algorithms for converting between binary and BCD (some of
    which could certainly be implemented in hard/firm ware...) I was unable
    to find anything that actually implemented BCD WITHOUT accepting it as BINARY digits (bits) and processing it as BINARY digits (at the lowest level.) (You can have an INTERFACE to a process that sees it as decimal,
    so anybody dealing with it is effectively dealing with decimal, but when
    it comes to actually manipulating it, it is done one bit at a time, as I noted. I therefore remain unconvinced that my statement was wrong, and
    stand by it. :-))

    The hardware unit may work directly on BCD packed or unpacked.

    The data is still ones and zeros, i.e., binary.

    That is also my understanding of it.

    With binary fields the overflow is simply from 1 bit to the next (working from right to left), with BCD fields the overflow has to be done in decimal from one nibble or byte to the next. That is, the hardware recognizes decimal.

    Strict binary (8421) is not the only option for Decimal. I actually have some decimal machines here that use Excess-3 (https://en.wikipedia.org/wiki/Excess-3). These are ICL 1500 Series (not to be confused with the ICT 1500 Series of the early 1960s) that were also from Singer and were originally designed and built by Cogar in the mid 70s and sold by ICL until the early 80s. The replacement was the DRS20 (I have some of those too) with a 'retained mode' processor board (... and those) that was powered by Motorola 2901 'bit-slice' CPUs and also used Excess-3.

    I would suggest that those who insist that 'everything is binary' have no idea about Excess-3 or the other bit-patterns used for decimal.

    Richard was correct; I have never heard of Excess-3 (despite having
    worked with some of the hardware he mentioned) so I had no idea about
    it. I therefore looked it up and it turns out to be another coding for
    BCD. BUT... it is still based on bits and will be processed at a bit
    level in the heart of the CPU.

    Excess-3 and other representations have been around since the 1960s,
    and are well-known. Those conventions facilitated the hardware design
    to deal with carries. But the representations were in binary,
    and the arithmetic was done in binary.

    I can't find anything that contradicts your statement, so I agree. :-)

    For myself, I don't see this thread really achieving anything.

    If someone wants to think a machine processes decimal or octal or hex or base 64, it doesn't really matter, so long as the correct results are obtained.

    I see it as all being binary digits at the hardware level, (the other numbering systems are actually human encoding systems to provide a "shorthand" for writing long, tedious, and error-prone, sets of
    bits...), but that may just be the way I choose to visualize it; I would
    not say someone else was "wrong" if they didn't see it that way (as long
    as their code gave the right results :-)).

    The way in which we perceive the amazing machines we work with may well
    be subjective. Fortunately, the languages we use allow us to
    communicate with other programmers objectively.


    While not particularly popular, (hardware) logic with more than two
    values is possible. The ternary Soviet Setuns are a classic example. Multi-level flash cells are another (although calling that logic is a
    bit of a stretch). Much communications uses multiple levels and
    whatnot to encode data. Some mechanical calculators used 10-position (mechanical) encoding of values. I think some of the plugboard
    machines did likewise (but electrically).

    So it's possible, it's just very rare today and uncommon at best even
    in the past, and of questionable actual value.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From pete dashwood@dashwood@enternet.co.nz to comp.lang.cobol on Sat Jun 9 18:21:25 2018
    From Newsgroup: comp.lang.cobol

    On 9/06/2018 4:30 PM, Robert Wessel wrote:
    On Sat, 9 Jun 2018 16:00:29 +1200, pete dashwood
    <dashwood@enternet.co.nz> wrote:

    <snipped>

    While not particularly popular, (hardware) logic with more than two
    values is possible. The ternary Soviet Setuns are a classic example. Multi-level flash cells are another (although calling that logic is a
    bit of a stretch). Much communications uses multiple levels and
    whatnot to encode data. Some mechanical calculators used 10-position (mechanical) encoding of values. I think some of the plugboard
    machines did likewise (but electrically).

    So it's possible, it's just very rare today and uncommon at best even
    in the past, and of questionable actual value.

    Thanks for that, Robert.

    I have done a fair bit of thinking about this (and researching.)

    It did occur to me that a MECHANICAL solution (like Babbage's
    Analytical engine, or even the 2000 year old antikythera computer) could
    use gears as an analogue of numbers, but I thought if it was electronic,
    it would probably be binary. (Logic 0; logic 1)

    Your Soviet Setuns are new to me but I wouldn't put anything past the Russians... :-)

    Pete.
    --
    I used to write COBOL; now I can do anything...
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From robin.vowels@robin.vowels@gmail.com to comp.lang.cobol on Sat Jun 9 02:08:04 2018
    From Newsgroup: comp.lang.cobol

    On Saturday, June 9, 2018 at 2:30:38 PM UTC+10, robert...@yahoo.com wrote:

    While not particularly popular, (hardware) logic with more than two
    values is possible.

    Each ENIAC digit has ten states, as I mentioned up thread.

    The ternary Soviet Setuns are a classic example.
    Multi-level flash cells are another (although calling that logic is a
    bit of a stretch). Much communications uses multiple levels and
    whatnot to encode data. Some mechanical calculators used 10-position (mechanical) encoding of values. I think some of the plugboard
    machines did likewise (but electrically).

    So it's possible, it's just very rare today and uncommon at best even
    in the past, and of questionable actual value.

    That ENIAC had 20,000 vacuum tubes is testament to the
    complexity of decimal circuitry, compared with around 1500 vacuum tubes
    for binary.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From robin.vowels@robin.vowels@gmail.com to comp.lang.cobol on Sat Jun 9 02:11:03 2018
    From Newsgroup: comp.lang.cobol

    On Saturday, June 9, 2018 at 2:09:19 PM UTC+10, pete dashwood wrote:
    On 9/06/2018 3:25 PM, r......@gmail.com wrote:
    <snipped>

    Can you explain what the significance of the letter b is in the above?

    It stands for 'bits', it is to distinguish it from, say, hex that has a 0x or octal with just a 0.

    Thank you, Robin.

    Richard wrote that, not me.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From pete dashwood@dashwood@enternet.co.nz to comp.lang.cobol on Sun Jun 10 11:56:43 2018
    From Newsgroup: comp.lang.cobol

    On 9/06/2018 9:11 PM, robin.vowels@gmail.com wrote:
    On Saturday, June 9, 2018 at 2:09:19 PM UTC+10, pete dashwood wrote:
    On 9/06/2018 3:25 PM, r......@gmail.com wrote:
    <snipped>

    Can you explain what the significance of the letter b is in the above?
    It stands for 'bits', it is to distinguish it from, say, hex that has a 0x or octal with just a 0.

    Thank you, Robin.

    Richard wrote that, not me.

    OK, thanks.

    I'm using Thunderbird as the newsreader for this group and when threads
    get very long it can be confusing sometimes.

    Pete.
    --
    I used to write COBOL; now I can do anything...
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From pete dashwood@dashwood@enternet.co.nz to comp.lang.cobol on Sun Jun 10 11:59:29 2018
    From Newsgroup: comp.lang.cobol

    On 9/06/2018 2:51 PM, Richard wrote:
    On Saturday, June 9, 2018 at 12:20:34 PM UTC+12, pete dashwood wrote:
    On 9/06/2018 7:23 AM, Richard wrote:
    <snipped>

    Can you explain what the significance of the letter b is in the above?

    It stands for 'bits', it is to distinguish it from, say, hex that has a 0x or octal with just a 0.

    <snipped>
    Thanks for that, Richard.

    I incorrectly acknowledged it to Robin, sorry.

    I've never come across this but your simple explanation is perfectly clear.

    Pete.
    --
    I used to write COBOL; now I can do anything...
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Richard@riplin@azonic.co.nz to comp.lang.cobol on Sun Jun 10 18:05:05 2018
    From Newsgroup: comp.lang.cobol

    On Saturday, June 9, 2018 at 3:25:48 PM UTC+12, robin....@gmail.com wrote:
    On Saturday, June 9, 2018 at 12:51:57 PM UTC+10, Richard wrote:
    On Saturday, June 9, 2018 at 12:20:34 PM UTC+12, pete dashwood wrote:
    On 9/06/2018 7:23 AM, Richard wrote:
    <snipped>

    If one has a value in a field of, say, 54326 then if the field was binary the "data is held" as '0b000000001101010000110110'. If the field is packed BCD then the "data is held" as '0b000001010100001100100110' which is _completely_ different (but the same numeric value). It may well be that each decimal digit is _coded_ as a binary bit pattern (that is what the acronym specifies) but the "data is held" in DECIMAL and is quite different from the data being held in binary (as the example shows).

    Can you explain what the significance of the letter b is in the above?

    It stands for 'bits', it is to distinguish it from, say, hex that has a 0x or octal with just a 0.

    Is this hex? (Why would one position in the string be "shorthanded" to hex...?) I don't see how a binary string can contain anything other than 1s or 0s...


    The point about a decimal instruction set in a computer is that it can do arithmetic operations on "data held in" decimal fields without having to convert to "data held in" binary. (the quotes are to emphasize that I use your terminology). The operations are quite different.


    This whole argument comes down to the difference between "coded representation" of binary and "numerical value" of binary.

    "binary value" can only ever be 1 or 0 (Anything other is not, by definition, a binary value.)

    You will notice that the topic is 'Numeric Fields'. The _value_ of a numeric field is most usefully expressed as a number in common notation. They may have a particular bit pattern but expressing that as a string of 1s and 0s is less useful, especially when the bytes may be big-endian, little-endian or big-little-endian.

    As for decimal computers and binary computers, the actual arithmetic is done by hardware units that can ONLY recognize binary (1-digit adders).

    No. That is wrong.

    You are wrong. The statement is correct.

    The hardware unit may work directly on BCD packed or unpacked.

    The data is still ones and zeros, i.e., binary.

    What you have completely failed to notice is that the topic is "Numeric _fields_". While a bit may be just a 0 or a 1 and thus binary, a field has many bits and the value contained depends on how those bits are arranged in patterns to represent the value. In COBOL the field specification is in the picture. A binary field could be defined as COMP-5 while a BCD would be COMP-3 and an ASCII or EBCDIC (Extended Binary Coded Decimal Interchange Code - note: Coded and Code) will be DISPLAY NUMERIC.
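    [Editor's note: the COMP-3 layout Richard refers to can be sketched concretely. A Python illustration, not any particular compiler's rules; the trailing sign nibble 0xC (positive) / 0xD (negative) is the common packed-decimal convention and is an assumption here.]

    ```python
    # Sketch of a COMP-3-style packed-decimal layout:
    # one nibble per digit, trailing sign nibble.

    def comp3(value, digits=5):
        """Pack a signed integer as packed-decimal bytes."""
        sign = 0xD if value < 0 else 0xC
        nibbles = [int(d) for d in str(abs(value)).zfill(digits)] + [sign]
        if len(nibbles) % 2:                # left-pad to a whole byte
            nibbles = [0] + nibbles
        return bytes(16 * hi + lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))

    print(comp3(-123, 3).hex())   # 123d
    print(comp3(54326, 5).hex())  # 54326c
    ```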
    With binary fields the overflow is simply from 1 bit to the next (working from right to left), with BCD fields the overflow has to be done in decimal from one nibble or byte to the next. That is, the hardware recognizes decimal.

    Strict binary (8421) is not the only option for Decimal. I actually have some decimal machines here that use Excess-3 (https://en.wikipedia.org/wiki/Excess-3). These are ICL 1500 Series (not to be confused with the ICT 1500 Series of the early 1960s) that were also from Singer and were originally designed and built by Cogar in the mid 70s and sold by ICL until the early 80s. The replacement was the DRS20 (I have some of those too) with a 'retained mode' processor board (... and those) that was powered by Motorola 2901 'bit-slice' CPUs and also used Excess-3.

    I would suggest that those who insist that 'everything is binary' have no idea about Excess-3 or the other bit-patterns used for decimal.

    Excess-3 and other representations have been around since the 1960s,
    and are well-known. Those conventions facilitated the hardware design
    to deal with carries. But the representations were in binary,
    and the arithmetic was done in binary.
    Actually Excess-3 has been around since the 1930s. And, no, the arithmetic was _not_ done 'in binary'. The rules for binary arithmetic and those for decimal arithmetic are somewhat different and, as you say, the hardware for each is different.
    Just because each individual bit is binary does not make all operations on a set of bits 'binary'. While in binary a set of 4 bits can have 16 different values, in decimal those 4 bits (or 6 or 8) can have just 10 different values.
    While there are processors that only deal with a single bit at a time, most computers process arithmetic instructions 16, 32 or 64 bits at one time. In order to get the correct result these need to handle overflow from one unit to another in a specified way. Thus a decimal machine or an Excess-3 machine has hardware that does it differently from the way a pure binary processor would.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From robin.vowels@robin.vowels@gmail.com to comp.lang.cobol on Sun Jun 10 19:28:20 2018
    From Newsgroup: comp.lang.cobol

    On Monday, June 11, 2018 at 11:05:06 AM UTC+10, Richard wrote:

    As for decimal computers and binary computers, the actual arithmetic is
    done by hardware units that can ONLY recognize binary (1-digit adders).

    No. That is wrong.

    You are wrong. The statement is correct.

    The hardware unit may work directly on BCD packed or unpacked.

    The data is still ones and zeros, i.e., binary.
    What you have completely failed to notice is that the topic is "Numeric _fields_". While a bit may be just a 0 or a 1 and thus binary, a field has many bits and the value contained depends on how those bits are arranged in patterns to represent the value. In COBOL the field specification is in the picture. A binary field could be defined as COMP-5 while a BCD would be COMP-3 and an ASCII or EBCDIC (Extended Binary Coded Decimal Interchange Code - note: Coded and Code) will be DISPLAY NUMERIC.
    What you have failed to notice is that the topic is "Binary and BCD",
    and not what you say.
    With binary fields the overflow is simply from 1 bit to the next (working from right to left), with BCD fields the overflow has to be done in decimal from one nibble or byte to the next. That is, the hardware recognizes decimal.
    It doesn't. Decimal arithmetic can be done using only instructions
    that operate on binary values, including addition and logical operations. Decimal hardware is not required.
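    [Editor's note: Robin's claim is demonstrable. The sketch below uses a well-known branch-free technique, shown in Python for illustration (the 7-digit width is an arbitrary assumption): bias every nibble by 6, add in plain binary, then remove the bias from nibbles that produced no decimal carry.]

    ```python
    # Sketch: packed-BCD addition built from binary operations alone -
    # only +, -, ^, ~, & and shifts, no decimal hardware.

    def bcd_add(a, b):
        """Add two packed-BCD values (up to 7 digits each)."""
        t1 = a + 0x06666666          # bias every nibble by 6
        t2 = t1 + b                  # plain binary add
        t3 = t1 ^ b
        t4 = t2 ^ t3                 # exposes the carry-out of each nibble
        t5 = ~t4 & 0x11111110        # 1 where a nibble did NOT carry
        t6 = (t5 >> 2) | (t5 >> 3)   # turn each such 1 back into a 6
        return t2 - t6               # un-bias the no-carry nibbles

    print(hex(bcd_add(0x54326, 0x01999)))  # 0x56325
    ```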
    Strict binary (8421) is not the only option for Decimal. I actually have some decimal machines here that use Excess-3 (https://en.wikipedia.org/wiki/Excess-3). These are ICL 1500 Series (not the be confused with the ICT 1500 Series of the early 1960s) that were also from Singer and were originally designed and built by Cogar in the mid 70s and sold by ICL until the early 80s. The replacement was the DRS20 (I have some of those too) with a 'retained mode' processor board (... and those) that was powered by Motorola 2901 'bit-slice' CPUs and also used Excess-3.

    I would suggest that those who insist that 'everything is binary' have no idea about Excess-3 or the other bit-patterns used for decimal.

    Excess-3 and other representations have been around since the 1960s,
    and are well-known. Those conventions facilitated the hardware design
    to deal with carries. But the representations were in binary,
    and the arithmetic was done in binary.

    Actually Excess-3 has been around since the 1930s. And, no, the arithmetic was _not_ done 'in binary'. The rules for binary arithmetic and those for decimal arithmetic are somewhat different and, as you say, the hardware for each is different.

    Just because each individual bit is binary does not make all operations on a set of bits 'binary'. While in binary a set of 4 bits can have 16 different values, in decimal those 4 bits (or 6 or 8) can have just 10 different values.

    While there are processors that only deal with a single bit at a time, most computers process arithmetic instructions 16, 32 or 64 bits at one time. In order to get the correct result these need to handle overflow from one unit to another in a specified way. Thus a decimal machine or an Excess-3 machine has hardware that does it differently from the way a pure binary processor would.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Richard@riplin@azonic.co.nz to comp.lang.cobol on Sun Jun 10 22:10:11 2018
    From Newsgroup: comp.lang.cobol

    On Monday, June 11, 2018 at 2:28:21 PM UTC+12, robin....@gmail.com wrote:
    On Monday, June 11, 2018 at 11:05:06 AM UTC+10, Richard wrote:

    As for decimal computers and binary computers, the actual arithmetic is
    done by hardware units that can ONLY recognize binary (1-digit adders).

    No. That is wrong.

    You are wrong. The statement is correct.

    The hardware unit may work directly on BCD packed or unpacked.

    The data is still ones and zeros, i.e., binary.

    What you have completely failed to notice is that the topic is "Numeric _fields_". While a bit may be just a 0 or a 1 and thus binary, a field has many bits and the value contained depends on how those bits are arranged in patterns to represent the value. In COBOL the field specification is in the picture. A binary field could be defined as COMP-5 while a BCD would be COMP-3 and an ASCII or EBCDIC (Extended Binary Coded Decimal Interchange Code - note: Coded and Code) will be DISPLAY NUMERIC.

    What you have failed to notice is that the topic is "Binary and BCD",
    and not what you say.
    Look at the top of the page:
    """Numeric fields Fujitsu/MF"""

    With binary fields the overflow is simply from 1 bit to the next (working from right to left), with BCD fields the overflow has to be done in decimal from one nibble or byte to the next. That is, the hardware recognizes decimal.

    It doesn't. Decimal arithmetic can be done using only instructions
    that operate on binary values, including addition and logical operations. Decimal hardware is not required.
    Wow. Who needs hardware designers when we have you around !!!

    Strict binary (8421) is not the only option for Decimal. I actually have some decimal machines here that use Excess-3 (https://en.wikipedia.org/wiki/Excess-3). These are ICL 1500 Series (not to be confused with the ICT 1500 Series of the early 1960s) that were also from Singer and were originally designed and built by Cogar in the mid 70s and sold by ICL until the early 80s. The replacement was the DRS20 (I have some of those too) with a 'retained mode' processor board (... and those) that was powered by Motorola 2901 'bit-slice' CPUs and also used Excess-3.

    I would suggest that those who insist that 'everything is binary' have no idea about Excess-3 or the other bit-patterns used for decimal.

    Excess-3 and other representations have been around since the 1960s,
    and are well-known. Those conventions facilitated the hardware design
    to deal with carries. But the representations were in binary,
    and the arithmetic was done in binary.

    Actually Excess-3 has been around since the 1930s. And, no, the arithmetic was _not_ done 'in binary'. The rules for binary arithmetic and those for decimal arithmetic are somewhat different and, as you say, the hardware for each is different.

    Just because each individual bit is binary does not make all operations on a set of bits 'binary'. While in binary a set of 4 bits can have 16 different values, in decimal those 4 bits (or 6 or 8) can have just 10 different values.

    While there are processors that only deal with a single bit at a time, most computers process arithmetic instructions 16, 32 or 64 bits at one time. In order to get the correct result these need to handle overflow from one unit to another in a specified way. Thus a decimal machine or an Excess-3 machine has hardware that does it differently from the way a pure binary processor would.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From robin.vowels@robin.vowels@gmail.com to comp.lang.cobol on Mon Jun 11 02:57:06 2018
    From Newsgroup: comp.lang.cobol

    On Monday, June 11, 2018 at 3:10:12 PM UTC+10, Richard wrote:
    On Monday, June 11, 2018 at 2:28:21 PM UTC+12, robin....@gmail.com wrote:
    On Monday, June 11, 2018 at 11:05:06 AM UTC+10, Richard wrote:

    As for decimal computers and binary computers, the actual arithmetic is
    done by hardware units that can ONLY recognize binary (1 digit adders).

    No. That is wrong.

    You are wrong. The statement is correct.

    The hardware unit may work directly on BCD packed or unpacked.

    The data is still ones and zeros, i.e., binary.

    What you have completely failed to notice is that the topic is "Numeric _fields_". While a bit may be just a 0 or a 1 and thus binary, a field has many bits and the value contained depends on how those bits are arranged in patterns to represent the value. In COBOL the field specification is in the picture. A binary field could be defined as COMP-5 while a BCD would be COMP-3 and an ASCII or EBCDIC (Extended Binary Coded Decimal Interchange Code - note: Coded and Code) will be DISPLAY NUMERIC.

    What you have failed to notice is that the topic is "Binary and BCD",
    and not what you say.

    Look at the top of the page:

    """Numeric fields Fujitsu/MF"""
    I did. It still says "Binary and BCD".
    In any case, in the original title, there's nothing about COBOL.
    Can apply to any language and any hardware.
    With binary fields the overflow is simply from 1 bit to the next (working from right to left), with BCD fields the overflow has to be done in decimal from one nibble or byte to the next. That is, the hardware recognizes decimal.

    It doesn't. Decimal arithmetic can be done using only instructions
    that operate on binary values, including addition, and logical operations. Decimal hardware is not required.
    Wow. Who needs hardware designers when we have you around !!!
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Bill Gunshannon@bill.gunshannon@gmail.com to comp.lang.cobol on Mon Jun 11 08:28:46 2018
    From Newsgroup: comp.lang.cobol

    On 06/11/2018 01:10 AM, Richard wrote:
    On Monday, June 11, 2018 at 2:28:21 PM UTC+12, robin....@gmail.com wrote:
    On Monday, June 11, 2018 at 11:05:06 AM UTC+10, Richard wrote:

    As for decimal computers and binary computers, the actual arithmetic is done by hardware units that can ONLY recognize binary (1 digit adders).
    No. That is wrong.

    You are wrong. The statement is correct.

    The hardware unit may work directly on BCD packed or unpacked.

    The data is still ones and zeros, i.e., binary.

    What you have completely failed to notice is that the topic is "Numeric _fields_". While a bit may be just a 0 or a 1 and thus binary, a field has many bits and the value contained depends on how those bits are arranged in patterns to represent the value. In COBOL the field specification is in the picture. A binary field could be defined as COMP-5 while a BCD would be COMP-3 and an ASCII or EBCDIC (Extended Binary Coded Decimal Interchange Code - note: Coded and Code) will be DISPLAY NUMERIC.

    What you have failed to notice is that the topic is "Binary and BCD",
    and not what you say.

    Look at the top of the page:

    """Numeric fields Fujitsu/MF"""


    With binary fields the overflow is simply from 1 bit to the next (working from right to left), with BCD fields the overflow has to be done in decimal from one nibble or byte to the next. That is, the hardware recognizes decimal.

    It doesn't. Decimal arithmetic can be done using only instructions
    that operate on binary values, including addition, and logical operations. Decimal hardware is not required.


    Wow. Who needs hardware designers when we have you around !!!


    Never try to teach a pig to sing. It can't be done and
    it annoys the pig.

    bill

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Rick Smith@rs847925@gmail.com to comp.lang.cobol on Mon Jun 11 07:51:21 2018
    From Newsgroup: comp.lang.cobol

    On Friday, June 1, 2018 at 5:19:51 PM UTC-4, Richard wrote:
    On Tuesday, December 12, 2017 at 1:50:21 AM UTC+13, JM wrote:
    01 aa pic s999 value -123.


    In Fujitsu the field "aa" has the hex value "31 32 53"; in MF the same field has the hex value "31 32 4C".
    Is there some switch/command so that Fujitsu has the same behavior as MF?

    The decimal representation for negative numbers is different between the two compilers, and may be different again for other compilers. This can be overcome by using SIGN SEPARATE.

    The word "decimal" is redundant to the use of "negative". Neither binary
    nor hexadecimal can be negative (signed).

    For anyone to argue that "decimal" should be "binary" is just plain silly!
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Richard@riplin@azonic.co.nz to comp.lang.cobol on Mon Jun 11 13:48:54 2018
    From Newsgroup: comp.lang.cobol

    On Monday, June 11, 2018 at 9:57:07 PM UTC+12, robin....@gmail.com wrote:
    On Monday, June 11, 2018 at 3:10:12 PM UTC+10, Richard wrote:
    On Monday, June 11, 2018 at 2:28:21 PM UTC+12, robin....@gmail.com wrote:
    On Monday, June 11, 2018 at 11:05:06 AM UTC+10, Richard wrote:

    As for decimal computers and binary computers, the actual arithmetic is
    done by hardware units that can ONLY recognize binary (1 digit adders).

    No. That is wrong.

    You are wrong. The statement is correct.

    The hardware unit may work directly on BCD packed or unpacked.

    The data is still ones and zeros, i.e., binary.

    What you have completely failed to notice is that the topic is "Numeric _fields_". While a bit may be just a 0 or a 1 and thus binary, a field has many bits and the value contained depends on how those bits are arranged in patterns to represent the value. In COBOL the field specification is in the picture. A binary field could be defined as COMP-5 while a BCD would be COMP-3 and an ASCII or EBCDIC (Extended Binary Coded Decimal Interchange Code - note: Coded and Code) will be DISPLAY NUMERIC.

    What you have failed to notice is that the topic is "Binary and BCD",
    and not what you say.

    Look at the top of the page:

    """Numeric fields Fujitsu/MF"""

    I did. It still says "Binary and BCD".

    You are confused. When going into comp.lang.cobol you get a list of "Topics". The "topic" for this is "Numeric fields Fujitsu/MF" and this appears at the top of the webpage. When posting a message or reply it is possible to change the _subject_ (which is not the topic).
    The Topic is "Numeric fields Fujitsu/MF",
    The _sub_ject (or sub-topic) is "Binary and BCD".
    In any case, in the original title, there's nothing about COBOL.
    Can apply to any language and any hardware.
    This is in comp.lang.cobol, so it is _all_ about COBOL.
    Also 'Fujitsu' and 'MF' (Microfocus) refer to COBOL compilers. You may not have known that.

    With binary fields the overflow is simply from 1 bit to the next (working from right to left), with BCD fields the overflow has to be done in decimal from one nibble or byte to the next. That is, the hardware recognizes decimal.

    It doesn't. Decimal arithmetic can be done using only instructions
    that operate on binary values, including addition, and logical operations.
    Decimal hardware is not required.

    Wow. Who needs hardware designers when we have you around !!!
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Doug Miller@doug_at_milmac_dot_com@example.com to comp.lang.cobol on Wed Jun 20 15:13:43 2018
    From Newsgroup: comp.lang.cobol

    Richard <riplin@azonic.co.nz> wrote in news:51247ae4-5f99-463f-ae14-de6207c95bd0@googlegroups.com:


    HEX "31 32 53" (or 4C) are the ASCII values for the decimal
    characters '1', '2' and '3' with an extra bit set for negative.

    No, they are not.

    The third byte, if positive, would be 33 = 0011 0011

    *Neither* 4C nor 53 is 33 "with an extra bit set for negative."
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Richard@riplin@azonic.co.nz to comp.lang.cobol on Wed Jun 20 19:54:15 2018
    From Newsgroup: comp.lang.cobol

    On Thursday, June 21, 2018 at 3:13:44 AM UTC+12, Doug Miller wrote:
    Richard <riplin@azonic.co.nz> wrote in news:51247ae4-5f99-463f-ae14-de6207c95bd0@googlegroups.com:


    HEX "31 32 53" (or 4C) are the ASCII values for the decimal
    characters '1', '2' and '3' with an extra bit set for negative.

    No, they are not.

    The third byte, if positive, would be 33 = 0011 0011

    *Neither* 4C nor 53 is 33 "with an extra bit set for negative."
    That is what happens when I take someone's word for what the codes are instead of testing it myself. Microfocus does indicate "with an extra bit set for negative". According to the Microfocus Compatibility Guide the representation for +123 in a numeric display S9(3) field is x31 x32 x33 and for -123 is x31 x32 x73 (not x53).
    Fujitsu has a different sign byte depending on whether the field is signed or not:
    1234 PIC 9(4) x31 x32 x33 x34
    1234 PIC S9(4) x31 x32 x33 x44
    -1234 PIC S9(4) x31 x32 x33 x54
    So for an S9(x) DISPLAY it does have "an extra bit set for negative."
    RM COBOL uses 'P' to 'Y' in the last byte to indicate negative, which is where the x4C seems to come from.
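    For reference, here is a small decoder (my own sketch; the dialect labels "mf" and "fujitsu" are just illustrative names, not any compiler's API) for the two trailing embedded-sign conventions described above: Micro Focus marks a negative by setting the 0x40 bit of the final digit byte (x33 becomes x73), while Fujitsu uses zone x4 for signed positive and x5 for signed negative.

```python
def decode_trailing_sign(raw: bytes, dialect: str) -> int:
    """Decode a DISPLAY numeric field with a trailing embedded sign."""
    digits = [b & 0x0F for b in raw]     # low nibble of each byte is the digit
    zone = raw[-1] >> 4                  # high nibble of the last byte
    value = int("".join(str(d) for d in digits))
    if dialect == "mf":
        negative = zone == 0x7           # x3n digit with 0x40 set -> x7n
    elif dialect == "fujitsu":
        negative = zone == 0x5           # x4n positive, x5n negative
    else:
        raise ValueError(f"unknown dialect: {dialect}")
    return -value if negative else value

assert decode_trailing_sign(b"\x31\x32\x73", "mf") == -123        # x73 = MF -3
assert decode_trailing_sign(b"\x31\x32\x33\x54", "fujitsu") == -1234
assert decode_trailing_sign(b"\x31\x32\x33\x44", "fujitsu") == 1234
```

    Note how the same three digit bytes decode to different values under each convention, which is precisely why the original poster's files would not interchange.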
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Doug Miller@doug_at_milmac_dot_com@example.com to comp.lang.cobol on Thu Jun 21 17:28:39 2018
    From Newsgroup: comp.lang.cobol

    Richard <riplin@azonic.co.nz> wrote in news:74bd04a6-29a8-419a-9042-3abd7a54463c@googlegroups.com:

    On Thursday, June 21, 2018 at 3:13:44 AM UTC+12, Doug Miller
    wrote:
    Richard <riplin@azonic.co.nz> wrote in
    news:51247ae4-5f99-463f-ae14-de6207c95bd0@googlegroups.com:


    HEX "31 32 53" (or 4C) are the ASCII values for the decimal
    characters '1', '2' and '3' with an extra bit set for
    negative.

    No, they are not.

    The third byte, if positive, would be 33 = 0011 0011

    *Neither* 4C nor 53 is 33 "with an extra bit set for negative."

    That is what happens when I take someone's word for what the
    codes are instead of testing it myself. Microfocus does indicate
    "with an extra bit set for negative.". According to Microfocus
    Compatibility Guide the representation for +123 in a numeric
    display S9(3) field is x31 x32 x33 and for -123 is x31 x32 x73
    (not x53).

    0x73, I'll buy. 0x53, not so much.

    Fujitsu has a different sign byte depending on whether the field
    is signed or not:

    1234 PIC 9(4) x31 x32 x33 x34
    1234 PIC S9(4) x31 x32 x33 x44

    Really? It changes the high-order nybble of the last byte from 0011 to 0100, flipping *three*
    bits to indicate a positive sign? Color me just a little bit skeptical here.

    -1234 PIC S9(4) x31 x32 x33 x54

    And a negative sign is indicated by flipping *two* bits? (0011.... -> 0101....)

    So for an S9(x) DISPLAY it does have "an extra bit set for
    negative."

    Setting an extra bit won't change 0x33 into 0x53. That's a change of *two* bits in the high-
    order nybble, from 0011 to 0101.

    RM COBOL uses 'P' to 'Y' in the last byte to indicate negative

    You sure about that? The range P..Y is 0x50..0x59; changing 0x3-whatever to 0x5-whatever
    is a change of *two* bits, not one.

    which is where the x4C seems to come from.

    Except that 0x4C is L...




    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Richard@riplin@azonic.co.nz to comp.lang.cobol on Thu Jun 21 13:21:55 2018
    From Newsgroup: comp.lang.cobol

    On Friday, June 22, 2018 at 5:28:40 AM UTC+12, Doug Miller wrote:
    Richard <riplin@azonic.co.nz> wrote in news:74bd04a6-29a8-419a-9042-3abd7a54463c@googlegroups.com:

    On Thursday, June 21, 2018 at 3:13:44 AM UTC+12, Doug Miller
    wrote:
    Richard <riplin@azonic.co.nz> wrote in
    news:51247ae4-5f99-463f-ae14-de6207c95bd0@googlegroups.com:


    HEX "31 32 53" (or 4C) are the ASCII values for the decimal
    characters '1', '2' and '3' with an extra bit set for
    negative.

    No, they are not.

    The third byte, if positive, would be 33 = 0011 0011

    *Neither* 4C nor 53 is 33 "with an extra bit set for negative."

    That is what happens when I take someone's word for what the
    codes are instead of testing it myself. Microfocus does indicate
    "with an extra bit set for negative.". According to Microfocus Compatibility Guide the representation for +123 in a numeric
    display S9(3) field is x31 x32 x33 and for -123 is x31 x32 x73
    (not x53).

    0x73, I'll buy. 0x53, not so much.

    Fujitsu has a different sign byte depending on whether the field
    is signed or not:

    1234 PIC 9(4) x31 x32 x33 x34
    1234 PIC S9(4) x31 x32 x33 x44

    Really? It changes the high-order nybble of the last byte from 0011 to 0100, flipping *three*
    bits to indicate a positive sign? Color me just a little bit skeptical here.

    It doesn't change anything. A PIC 9(3) field cannot be negative. The final byte will always be x3n where n = 0 to 9. A PIC S9(3) will have the first or last byte as the sign (LEADING or TRAILING) and this will always be x4n or x5n depending on whether the value is positive or negative.
    The representation of a NUMERIC DISPLAY UNSIGNED field is different from that of a SIGNED one. This is also true for some compilers' BCD final byte (unpacked) or nibble (where packed), where the unsigned field bit pattern will have one particular value and signed fields will have either of two values different from that of the unsigned.
    -1234 PIC S9(4) x31 x32 x33 x54

    See the table on pages 186-187 in the Fujitsu COBOL Language Reference: http://software.fujitsu.com/jp/manual/manualfiles/m140018/b1wd3304/02enz000/b1wd-3304-02enz0.pdf#IDlangref-ch05N146
    The first box is for DISPLAY, where I got the examples. The final box of that table is PACKED-DECIMAL illustrating the final nibble to be F, C or D.
    And a negative sign is indicated by flipping *two* bits? (0011.... -> 0101....)
    No. It sets an extra single bit going from x4n to x5n (0100.... -> 0101....). A field with a final byte of x3n will never be changed because the field is unsigned.

    So for an S9(x) DISPLAY it does have "an extra bit set for
    negative."

    Setting an extra bit won't change 0x33 into 0x53. That's a change of *two* bits in the high-
    order nybble, from 0011 to 0101.

    RM COBOL uses 'P' to 'Y' in the last byte to indicate negative

    You sure about that? The range P..Y is 0x50..0x59; changing 0x3-whatever to 0x5-whatever
    is a change of *two* bits, not one.

    which is where the x4C seems to come from.

    Except that 0x4C is L...
    'P' to 'Y' makes more sense than 'I' to 'R' (x49 to x52). I suspect that the x4C is just wrong, even though the Microfocus Converting manual lists it.
    Go to "DISPLAY Data" then "RM Representation (Hexadecimal)" in: https://www.microfocus.co.jp/manuals/SE40/cgrmdf.htm
    Note that +123 has the sign byte as x43. I think that -123 should have x53 (one bit set), which is 'S' and in the range 'P' to 'Y'. I don't have an RM COBOL system so I can't test this.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Richard@riplin@azonic.co.nz to comp.lang.cobol on Thu Jun 21 14:11:52 2018
    From Newsgroup: comp.lang.cobol

    On Friday, June 22, 2018 at 5:28:40 AM UTC+12, Doug Miller wrote:
    Richard <riplin@azonic.co.nz> wrote in news:74bd04a6-29a8-419a-9042-3abd7a54463c@googlegroups.com:

    On Thursday, June 21, 2018 at 3:13:44 AM UTC+12, Doug Miller
    wrote:
    Richard <riplin@azonic.co.nz> wrote in
    news:51247ae4-5f99-463f-ae14-de6207c95bd0@googlegroups.com:


    HEX "31 32 53" (or 4C) are the ASCII values for the decimal
    characters '1', '2' and '3' with an extra bit set for
    negative.

    No, they are not.

    The third byte, if positive, would be 33 = 0011 0011

    *Neither* 4C nor 53 is 33 "with an extra bit set for negative."

    That is what happens when I take someone's word for what the
    codes are instead of testing it myself. Microfocus does indicate
    "with an extra bit set for negative.". According to Microfocus Compatibility Guide the representation for +123 in a numeric
    display S9(3) field is x31 x32 x33 and for -123 is x31 x32 x73
    (not x53).

    0x73, I'll buy. 0x53, not so much.

    Fujitsu has a different sign byte depending on whether the field
    is signed or not:

    1234 PIC 9(4) x31 x32 x33 x34
    1234 PIC S9(4) x31 x32 x33 x44

    Really? It changes the high-order nybble of the last byte from 0011 to 0100, flipping *three*
    bits to indicate a positive sign? Color me just a little bit skeptical here.

    -1234 PIC S9(4) x31 x32 x33 x54

    And a negative sign is indicated by flipping *two* bits? (0011.... -> 0101....)

    So for an S9(x) DISPLAY it does have "an extra bit set for
    negative."

    Setting an extra bit won't change 0x33 into 0x53. That's a change of *two* bits in the high-
    order nybble, from 0011 to 0101.

    RM COBOL uses 'P' to 'Y' in the last byte to indicate negative

    You sure about that? The range P..Y is 0x50..0x59; changing 0x3-whatever to 0x5-whatever
    is a change of *two* bits, not one.

    which is where the x4C seems to come from.

    Except that 0x4C is L...
    I have located in my stash of old manuals and stuff "Structured COBOL. Fundamentals and Style", Welburn and Price 1995, which uses RM COBOL-85 and even has a student edition compiler in an accompanying book.
    This book has a table that indicates how negative numbers are stored in a DISPLAY NUMERIC field on an IBM Mainframe. The sign byte is from '}' to 'R', which in EBCDIC is xD0 to xD9. When converted as characters to ASCII, a negative 3 would be 'L', which is x4C in ASCII. This is probably why the MF Converting manual has x4C in the table: it assumes converting from EBCDIC to ASCII and then to MF data format. This manual is probably where JM got the idea that this was used as negative 3, though he said it was for Microfocus, which I am sure is wrong.
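    The conversion chain described above can be demonstrated in a few lines (my own sketch, assuming the overpunch table quoted from Welburn and Price): the EBCDIC negative sign bytes '}' and 'J' to 'R' occupy xD0 to xD9, and a character-level EBCDIC-to-ASCII translation of the overpunch for -3 yields ASCII 'L', i.e. x4C.

```python
# EBCDIC code points for the negative overpunch characters -0..-9,
# per the table quoted above: '}' J K L M N O P Q R = xD0..xD9.
NEG_OVERPUNCH = "}JKLMNOPQR"
EBCDIC = {ch: 0xD0 + i for i, ch in enumerate(NEG_OVERPUNCH)}

def converted_sign_byte(digit: int) -> int:
    """ASCII byte produced when the EBCDIC overpunch for -digit is
    translated character-for-character into ASCII."""
    ch = NEG_OVERPUNCH[digit]          # e.g. -3 -> 'L' (EBCDIC xD3)
    assert EBCDIC[ch] == 0xD0 + digit  # sanity-check the overpunch table
    return ord(ch)                     # the same character's ASCII code

# -3 comes out as 'L' = x4C, the byte JM reported seeing
assert converted_sign_byte(3) == 0x4C
```

    This supports the theory that JM's x4C was an EBCDIC overpunch sign that survived a character-wise EBCDIC-to-ASCII conversion, rather than a native Micro Focus representation.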
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Rick Smith@rs847925@gmail.com to comp.lang.cobol on Thu Jun 21 14:20:37 2018
    From Newsgroup: comp.lang.cobol

    On Thursday, June 21, 2018 at 5:11:53 PM UTC-4, Richard wrote:
    On Friday, June 22, 2018 at 5:28:40 AM UTC+12, Doug Miller wrote:
    Richard <riplin@azonic.co.nz> wrote in news:74bd04a6-29a8-419a-9042-3abd7a54463c@googlegroups.com:

    On Thursday, June 21, 2018 at 3:13:44 AM UTC+12, Doug Miller
    wrote:
    Richard <riplin@azonic.co.nz> wrote in
    news:51247ae4-5f99-463f-ae14-de6207c95bd0@googlegroups.com:


    HEX "31 32 53" (or 4C) are the ASCII values for the decimal
    characters '1', '2' and '3' with an extra bit set for
    negative.

    No, they are not.

    The third byte, if positive, would be 33 = 0011 0011

    *Neither* 4C nor 53 is 33 "with an extra bit set for negative."

    That is what happens when I take someone's word for what the
    codes are instead of testing it myself. Microfocus does indicate
    "with an extra bit set for negative.". According to Microfocus Compatibility Guide the representation for +123 in a numeric
    display S9(3) field is x31 x32 x33 and for -123 is x31 x32 x73
    (not x53).

    0x73, I'll buy. 0x53, not so much.

    Fujitsu has a different sign byte depending on whether the field
    is signed or not:

    1234 PIC 9(4) x31 x32 x33 x34
    1234 PIC S9(4) x31 x32 x33 x44

    Really? It changes the high-order nybble of the last byte from 0011 to 0100, flipping *three*
    bits to indicate a positive sign? Color me just a little bit skeptical here.

    -1234 PIC S9(4) x31 x32 x33 x54

    And a negative sign is indicated by flipping *two* bits? (0011.... -> 0101....)

    So for an S9(x) DISPLAY it does have "an extra bit set for
    negative."

    Setting an extra bit won't change 0x33 into 0x53. That's a change of *two* bits in the high-
    order nybble, from 0011 to 0101.

    RM COBOL uses 'P' to 'Y' in the last byte to indicate negative

    You sure about that? The range P..Y is 0x50..0x59; changing 0x3-whatever to 0x5-whatever
    is a change of *two* bits, not one.

    which is where the x4C seems to come from.

    Except that 0x4C is L...

    I have located in my stash of old manuals and stuff "Structured COBOL. Fundamentals and Style", Welburn and Price 1995, which uses RM COBOL-85 and even has a student edition compiler in an accompanying book.

    This book has a table that indicates how negative numbers are stored in a DISPLAY NUMERIC field on an IBM Mainframe. The sign byte is from '}' to 'R' which in EBCDIC is xD0 to xD9. When converted as characters to ASCII a negative 3 would be 'L' which is x4C in ASCII. This is probably why the MF Converting manual has x4C in the table, it assumes converting from EBCDIC to ASCII and then to MF data format. This manual is probably where JM got the idea that this was used as negative 3, though he said it was for Microfocus, which I am sure is wrong.
    Directives CHARSET"ASCII" SIGN"EBCDIC"
    < http://documentation.microfocus.com/help/index.jsp?topic=%2FGUID-0E0191D8-C39A-44D1-BA4C-D67107BAF784%2FHRCDRHCDIR1T.html >
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Rick Smith@rs847925@gmail.com to comp.lang.cobol on Thu Jun 21 14:41:23 2018
    From Newsgroup: comp.lang.cobol

    On Thursday, June 21, 2018 at 5:20:38 PM UTC-4, Rick Smith wrote:
    On Thursday, June 21, 2018 at 5:11:53 PM UTC-4, Richard wrote:
    On Friday, June 22, 2018 at 5:28:40 AM UTC+12, Doug Miller wrote:
    Richard <riplin@azonic.co.nz> wrote in news:74bd04a6-29a8-419a-9042-3abd7a54463c@googlegroups.com:

    On Thursday, June 21, 2018 at 3:13:44 AM UTC+12, Doug Miller
    wrote:
    Richard <riplin@azonic.co.nz> wrote in
    news:51247ae4-5f99-463f-ae14-de6207c95bd0@googlegroups.com:


    HEX "31 32 53" (or 4C) are the ASCII values for the decimal
    characters '1', '2' and '3' with an extra bit set for
    negative.

    No, they are not.

    The third byte, if positive, would be 33 = 0011 0011

    *Neither* 4C nor 53 is 33 "with an extra bit set for negative."

    That is what happens when I take someone's word for what the
    codes are instead of testing it myself. Microfocus does indicate
    "with an extra bit set for negative.". According to Microfocus Compatibility Guide the representation for +123 in a numeric
    display S9(3) field is x31 x32 x33 and for -123 is x31 x32 x73
    (not x53).

    0x73, I'll buy. 0x53, not so much.

    Fujitsu has a different sign byte depending on whether the field
    is signed or not:

    1234 PIC 9(4) x31 x32 x33 x34
    1234 PIC S9(4) x31 x32 x33 x44

    Really? It changes the high-order nybble of the last byte from 0011 to 0100, flipping *three*
    bits to indicate a positive sign? Color me just a little bit skeptical here.

    -1234 PIC S9(4) x31 x32 x33 x54

    And a negative sign is indicated by flipping *two* bits? (0011.... -> 0101....)

    So for an S9(x) DISPLAY it does have "an extra bit set for
    negative."

    Setting an extra bit won't change 0x33 into 0x53. That's a change of *two* bits in the high-
    order nybble, from 0011 to 0101.

    RM COBOL uses 'P' to 'Y' in the last byte to indicate negative

    You sure about that? The range P..Y is 0x50..0x59; changing 0x3-whatever to 0x5-whatever
    is a change of *two* bits, not one.

    which is where the x4C seems to come from.

    Except that 0x4C is L...

    I have located in my stash of old manuals and stuff "Structured COBOL. Fundamentals and Style", Welburn and Price 1995, which uses RM COBOL-85 and even has a student edition compiler in an accompanying book.

    This book has a table that indicates how negative numbers are stored in a DISPLAY NUMERIC field on an IBM Mainframe. The sign byte is from '}' to 'R' which in EBCDIC is xD0 to xD9. When converted as characters to ASCII a negative 3 would be 'L' which is x4C in ASCII. This is probably why the MF Converting manual has x4C in the table, it assumes converting from EBCDIC to ASCII and then to MF data format. This manual is probably where JM got the idea that this was used as negative 3, though he said it was for Microfocus, which I am sure is wrong.

    Directives CHARSET"ASCII" SIGN"EBCDIC"
    < http://documentation.microfocus.com/help/index.jsp?topic=%2FGUID-0E0191D8-C39A-44D1-BA4C-D67107BAF784%2FHRCDRHCDIR1T.html >
    Oops! Should be
    < http://documentation.microfocus.com/help/topic/GUID-0E0191D8-C39A-44D1-BA4C-D67107BAF784/HRLHLHCLANU933.html?cp=6_7_5_1_2_1_5_3_0 >
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Richard@riplin@azonic.co.nz to comp.lang.cobol on Thu Jun 21 15:52:11 2018
    From Newsgroup: comp.lang.cobol

    On Friday, June 22, 2018 at 9:41:25 AM UTC+12, Rick Smith wrote:
    On Thursday, June 21, 2018 at 5:20:38 PM UTC-4, Rick Smith wrote:
    On Thursday, June 21, 2018 at 5:11:53 PM UTC-4, Richard wrote:
    On Friday, June 22, 2018 at 5:28:40 AM UTC+12, Doug Miller wrote:
    Richard <riplin@azonic.co.nz> wrote in news:74bd04a6-29a8-419a-9042-3abd7a54463c@googlegroups.com:

    On Thursday, June 21, 2018 at 3:13:44 AM UTC+12, Doug Miller
    wrote:
    Richard <riplin@azonic.co.nz> wrote in
    news:51247ae4-5f99-463f-ae14-de6207c95bd0@googlegroups.com:


    HEX "31 32 53" (or 4C) are the ASCII values for the decimal
    characters '1', '2' and '3' with an extra bit set for
    negative.

    No, they are not.

    The third byte, if positive, would be 33 = 0011 0011

    *Neither* 4C nor 53 is 33 "with an extra bit set for negative."

    That is what happens when I take someone's word for what the
    codes are instead of testing it myself. Microfocus does indicate
    "with an extra bit set for negative.". According to Microfocus Compatibility Guide the representation for +123 in a numeric
    display S9(3) field is x31 x32 x33 and for -123 is x31 x32 x73
    (not x53).

    0x73, I'll buy. 0x53, not so much.

    Fujitsu has a different sign byte depending on whether the field
    is signed or not:

    1234 PIC 9(4) x31 x32 x33 x34
    1234 PIC S9(4) x31 x32 x33 x44

    Really? It changes the high-order nybble of the last byte from 0011 to 0100, flipping *three*
    bits to indicate a positive sign? Color me just a little bit skeptical here.

    -1234 PIC S9(4) x31 x32 x33 x54

    And a negative sign is indicated by flipping *two* bits? (0011.... -> 0101....)

    So for an S9(x) DISPLAY it does have "an extra bit set for
    negative."

    Setting an extra bit won't change 0x33 into 0x53. That's a change of *two* bits in the high-
    order nybble, from 0011 to 0101.

    RM COBOL uses 'P' to 'Y' in the last byte to indicate negative

    You sure about that? The range P..Y is 0x50..0x59; changing 0x3-whatever to 0x5-whatever
    is a change of *two* bits, not one.

    which is where the x4C seems to come from.

    Except that 0x4C is L...

    I have located in my stash of old manuals and stuff "Structured COBOL. Fundamentals and Style", Welburn and Price 1995, which uses RM COBOL-85 and even has a student edition compiler in an accompanying book.

    This book has a table that indicates how negative numbers are stored in a DISPLAY NUMERIC field on an IBM Mainframe. The sign byte is from '}' to 'R' which in EBCDIC is xD0 to xD9. When converted as characters to ASCII a negative 3 would be 'L' which is x4C in ASCII. This is probably why the MF Converting manual has x4C in the table, it assumes converting from EBCDIC to ASCII and then to MF data format. This manual is probably where JM got the idea that this was used as negative 3, though he said it was for Microfocus, which I am sure is wrong.

    Directives CHARSET"ASCII" SIGN"EBCDIC"
    < http://documentation.microfocus.com/help/index.jsp?topic=%2FGUID-0E0191D8-C39A-44D1-BA4C-D67107BAF784%2FHRCDRHCDIR1T.html >

    Oops! Should be
    < http://documentation.microfocus.com/help/topic/GUID-0E0191D8-C39A-44D1-BA4C-D67107BAF784/HRLHLHCLANU933.html?cp=6_7_5_1_2_1_5_3_0 >
    Thank you. That confirms it (probably) was an EBCDIC character sign in ASCII code. I was wrong to say it wouldn't be Microfocus, though I can't think why anyone would use those options.
    No combination of options matches the Fujitsu way of indicating the sign when not sign separate.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Rick Smith@rs847925@gmail.com to comp.lang.cobol on Thu Jun 21 17:55:38 2018
    From Newsgroup: comp.lang.cobol

    On Thursday, June 21, 2018 at 6:52:12 PM UTC-4, Richard wrote:
    On Friday, June 22, 2018 at 9:41:25 AM UTC+12, Rick Smith wrote:
    On Thursday, June 21, 2018 at 5:20:38 PM UTC-4, Rick Smith wrote:
    On Thursday, June 21, 2018 at 5:11:53 PM UTC-4, Richard wrote:
    [snip]
    This book has a table that indicates how negative numbers are stored in a DISPLAY NUMERIC field on an IBM Mainframe. The sign byte is from '}' to 'R' which in EBCDIC is xD0 to xD9. When converted as characters to ASCII a negative 3 would be 'L' which is x4C in ASCII. This is probably why the MF Converting manual has x4C in the table, it assumes converting from EBCDIC to ASCII and then to MF data format. This manual is probably where JM got the idea that this was used as negative 3, though he said it was for Microfocus, which I am sure is wrong.

    Directives CHARSET"ASCII" SIGN"EBCDIC"
    < http://documentation.microfocus.com/help/index.jsp?topic=%2FGUID-0E0191D8-C39A-44D1-BA4C-D67107BAF784%2FHRCDRHCDIR1T.html >

    Oops! Should be
    < http://documentation.microfocus.com/help/topic/GUID-0E0191D8-C39A-44D1-BA4C-D67107BAF784/HRLHLHCLANU933.html?cp=6_7_5_1_2_1_5_3_0 >

    Thank you. That confirms it (probably) was an EDCDIC character sign in ASCII code. I was wrong to say it wouldn't be Microfocus, though I can't think why anyone would use those options.
    Why indeed? I found a reference in the "Compatibility Guide" for 3.2
    under "IBM/370 Mainframe Compatibility".
    "When you download input data from an IBM/370 mainframe which contains
    signed numeric USAGE DISPLAY data items without the SIGN SEPARATE clause, embedded signs are not interpreted correctly."
    The manual then suggests using the SIGN"EBCDIC" directive, to interpret
    the signs correctly at runtime.
    No combination of options match the Fujitsu way of indicating the sign when not sign separate.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Richard@riplin@azonic.co.nz to comp.lang.cobol on Thu Jun 21 18:15:28 2018
    From Newsgroup: comp.lang.cobol

    On Friday, June 22, 2018 at 12:55:40 PM UTC+12, Rick Smith wrote:
    On Thursday, June 21, 2018 at 6:52:12 PM UTC-4, Richard wrote:
    On Friday, June 22, 2018 at 9:41:25 AM UTC+12, Rick Smith wrote:
    On Thursday, June 21, 2018 at 5:20:38 PM UTC-4, Rick Smith wrote:
    On Thursday, June 21, 2018 at 5:11:53 PM UTC-4, Richard wrote:

    [snip]

    This book has a table that indicates how negative numbers are stored in a DISPLAY NUMERIC field on an IBM Mainframe. The sign byte is from '}' to 'R' which in EBCDIC is xD0 to xD9. When converted as characters to ASCII a negative 3 would be 'L' which is x4C in ASCII. This is probably why the MF Converting manual has x4C in the table, it assumes converting from EBCDIC to ASCII and then to MF data format. This manual is probably where JM got the idea that this was used as negative 3, though he said it was for Microfocus, which I am sure is wrong.

    Directives CHARSET"ASCII" SIGN"EBCDIC"
    < http://documentation.microfocus.com/help/index.jsp?topic=%2FGUID-0E0191D8-C39A-44D1-BA4C-D67107BAF784%2FHRCDRHCDIR1T.html >

    Oops! Should be
    < http://documentation.microfocus.com/help/topic/GUID-0E0191D8-C39A-44D1-BA4C-D67107BAF784/HRLHLHCLANU933.html?cp=6_7_5_1_2_1_5_3_0 >

    Thank you. That confirms it (probably) was an EBCDIC character sign in ASCII code. I was wrong to say it wouldn't be Microfocus, though I can't think why anyone would use those options.

    Why indeed? I found a reference in the "Compatibility Guide" for 3.2
    under "IBM/370 Mainframe Compatibility".

    "When you download input data from an IBM/370 mainframe which contains
    signed numeric USAGE DISPLAY data items without the SIGN SEPARATE clause, embedded signs are not interpreted correctly."

    The manual then suggests using the SIGN"EBCDIC" directive, to interpret
    the signs correctly at runtime.

    Of course, all numeric fields that are not display will be completely incorrect if a straight character conversion is done from EBCDIC to ASCII. Sign separate should always be used when converting from one system to another, even if it is just to a different compiler on the same machine.
    Granted that sometimes this is not available, eg in the case I described where data was being output daily from a system that could not be changed and was to go into Fujitsu.
    No combination of options matches the Fujitsu way of indicating the sign when not sign separate.
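
    For illustration, a minimal Python sketch of the kind of sign-conversion step described above (the function name is mine, and it assumes the Micro Focus trailing embedded sign, where a negative last digit has the x40 bit set, giving 'p'..'y'; real conversion code would also have to handle COMP and COMP-3 fields):

```python
# Sketch: convert a PIC S9(n) DISPLAY field with a Micro Focus embedded
# trailing sign into SIGN IS TRAILING SEPARATE form.
# Assumed MF convention: positive digits are plain '0'..'9' (x30..x39),
# negative digits have the x40 bit set, giving 'p'..'y' (x70..x79).

def mf_embedded_to_separate(field: bytes) -> bytes:
    """b'\\x31\\x32\\x73' (-123) -> b'123-';  b'123' (+123) -> b'123+'."""
    last = field[-1]
    if 0x70 <= last <= 0x79:          # 'p'..'y': negative overpunch
        digit = last - 0x40           # clear the sign bit -> '0'..'9'
        return field[:-1] + bytes([digit]) + b'-'
    if 0x30 <= last <= 0x39:          # plain digit: positive
        return field + b'+'
    raise ValueError("unexpected sign byte %#04x" % last)

print(mf_embedded_to_separate(b'\x31\x32\x73'))  # b'123-'
print(mf_embedded_to_separate(b'\x31\x32\x33'))  # b'123+'
```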
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Doug Miller@doug_at_milmac_dot_com@example.com to comp.lang.cobol on Fri Jun 22 12:59:35 2018
    From Newsgroup: comp.lang.cobol

    On Thu, 21 Jun 2018 14:11:52 -0700, Richard wrote:

    On Friday, June 22, 2018 at 5:28:40 AM UTC+12, Doug Miller wrote:
    Richard <riplin@azonic.co.nz> wrote in
    news:74bd04a6-29a8-419a-9042-3abd7a54463c@googlegroups.com:

    On Thursday, June 21, 2018 at 3:13:44 AM UTC+12, Doug Miller wrote:
    Richard <riplin@azonic.co.nz> wrote in
    news:51247ae4-5f99-463f-ae14-de6207c95bd0@googlegroups.com:


    HEX "31 32 53" (or 4C) are the ASCII values for the decimal
    characters '1', '2' and '3' with an extra bit set for negative.

    No, they are not.

    The third byte, if positive, would be 33 = 0011 0011

    *Neither* 4C nor 53 is 33 "with an extra bit set for negative."

    That is what happens when I take someone's word for what the codes are
    instead of testing it myself. Microfocus does indicate "with an extra
    bit set for negative.". According to Microfocus Compatibility Guide
    the representation for +123 in a numeric display S9(3) field is x31
    x32 x33 and for -123 is x31 x32 x73 (not x53).

    0x73, I'll buy. 0x53, not so much.

    Fujitsu has a different sign byte depending on whether the field is
    signed or not:

    1234  PIC 9(4)   x31 x32 x33 x34
    1234  PIC S9(4)  x31 x32 x33 x44

    Really? It changes the high-order nybble of the last byte from 0011 to
    0100, flipping *three*
    bits to indicate a positive sign? Color me just a little bit skeptical
    here.

    -1234 PIC S9(4) x31 x32 x33 x54

    And a negative sign is indicated by flipping *two* bits? (0011.... ->
    0101....)

    So for an S9(x) DISPLAY it does have "an extra bit set for negative."

    Setting an extra bit won't change 0x33 into 0x53. That's a change of
    *two* bits in the high- order nybble, from 0011 to 0101.

    RM COBOL uses 'P' to 'Y' in the last byte to indicate negative

    You sure about that? The range P..Y is 0x50..0x59; changing
    0x3-whatever to 0x5-whatever is a change of *two* bits, not one.

    which is where the x4C seems to come from.

    Except that 0x4C is L...

    I have located in my stash of old manuals and stuff "Structured COBOL: Fundamentals and Style", Welburn and Price 1995, which uses RM COBOL-85
    and even has a student edition compiler in an accompanying book.

    This book has a table that indicates how negative numbers are stored in
    a DISPLAY NUMERIC field on an IBM Mainframe. The sign byte is from '}'
    to 'R' which in EBCDIC is xD0 to xD9. When converted as characters to
    ASCII a negative 3 would be 'L' which is x4C in ASCII. This is probably
    why the MF Converting manual has x4C in the table, it assumes converting
    from EBCDIC to ASCII and then to MF data format. This manual is probably where JM got the idea that this was used as negative 3, though he said
    it was for Microfocus, which I am sure is wrong.

    Interesting as that may be, it still doesn't explain how 0x53 is
    supposedly '3' with an extra bit set to indicate negative... And of
    course 'L' is *never* a valid representation of -3 in the ASCII codeset
    (which was obviously the assumption of your post, given that you
    represented '123' as 31 32 33 instead of F1 F2 F3 as they would be in
    EBCDIC).

    It's not exactly arcane knowledge that converting numeric fields from
    EBCDIC to ASCII, or the other way around, works much more readily with separate signs than it does with embedded signs.
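
    A minimal Python sketch of decoding the sign properly from the raw EBCDIC bytes, before any character translation (function name is mine; digits are xF0..xF9, and the zone nybble of the last byte carries the sign: xC_ or xF_ positive, xD_ negative):

```python
# Sketch: decode an IBM zoned-decimal PIC S9(3) DISPLAY field from its
# raw EBCDIC bytes. The low nybble of each byte is the digit; the high
# (zone) nybble of the last byte is the sign: 0xC/0xF = +, 0xD = -.

def ebcdic_zoned_to_int(raw: bytes) -> int:
    digits = [b & 0x0F for b in raw]              # digit nybbles
    value = int(''.join(str(d) for d in digits))
    zone = raw[-1] >> 4                           # sign nybble
    return -value if zone == 0xD else value

print(ebcdic_zoned_to_int(bytes([0xF1, 0xF2, 0xD3])))  # -123
print(ebcdic_zoned_to_int(bytes([0xF1, 0xF2, 0xC3])))  # 123
```

    A naive byte-for-byte EBCDIC-to-ASCII translation of that same field turns the xD3 sign byte into the letter 'L', which is exactly the "12L" problem discussed in this thread.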

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Richard@riplin@azonic.co.nz to comp.lang.cobol on Fri Jun 22 10:56:49 2018
    From Newsgroup: comp.lang.cobol

    On Saturday, June 23, 2018 at 12:59:37 AM UTC+12, Doug Miller wrote:
    On Thu, 21 Jun 2018 14:11:52 -0700, Richard wrote:

    On Friday, June 22, 2018 at 5:28:40 AM UTC+12, Doug Miller wrote:
    Richard <riplin@azonic.co.nz> wrote in
    news:74bd04a6-29a8-419a-9042-3abd7a54463c@googlegroups.com:

    On Thursday, June 21, 2018 at 3:13:44 AM UTC+12, Doug Miller wrote:
    Richard <riplin@azonic.co.nz> wrote in
    news:51247ae4-5f99-463f-ae14-de6207c95bd0@googlegroups.com:


    HEX "31 32 53" (or 4C) are the ASCII values for the decimal
    characters '1', '2' and '3' with an extra bit set for negative.

    No, they are not.

    The third byte, if positive, would be 33 = 0011 0011

    *Neither* 4C nor 53 is 33 "with an extra bit set for negative."

    That is what happens when I take someone's word for what the codes are
    instead of testing it myself. Microfocus does indicate "with an extra
    bit set for negative.". According to Microfocus Compatibility Guide
    the representation for +123 in a numeric display S9(3) field is x31
    x32 x33 and for -123 is x31 x32 x73 (not x53).

    0x73, I'll buy. 0x53, not so much.

    Fujitsu has a different sign byte depending on whether the field is
    signed or not:

    1234  PIC 9(4)   x31 x32 x33 x34
    1234  PIC S9(4)  x31 x32 x33 x44

    Really? It changes the high-order nybble of the last byte from 0011 to
    0100, flipping *three*
    bits to indicate a positive sign? Color me just a little bit skeptical
    here.

    -1234 PIC S9(4) x31 x32 x33 x54

    And a negative sign is indicated by flipping *two* bits? (0011.... ->
    0101....)

    So for an S9(x) DISPLAY it does have "an extra bit set for negative."

    Setting an extra bit won't change 0x33 into 0x53. That's a change of
    *two* bits in the high- order nybble, from 0011 to 0101.

    RM COBOL uses 'P' to 'Y' in the last byte to indicate negative

    You sure about that? The range P..Y is 0x50..0x59; changing
    0x3-whatever to 0x5-whatever is a change of *two* bits, not one.

    which is where the x4C seems to come from.

    Except that 0x4C is L...

    I have located in my stash of old manuals and stuff "Structured COBOL: Fundamentals and Style", Welburn and Price 1995, which uses RM COBOL-85
    and even has a student edition compiler in an accompanying book.

    This book has a table that indicates how negative numbers are stored in
    a DISPLAY NUMERIC field on an IBM Mainframe. The sign byte is from '}'
    to 'R' which in EBCDIC is xD0 to xD9. When converted as characters to
    ASCII a negative 3 would be 'L' which is x4C in ASCII. This is probably
    why the MF Converting manual has x4C in the table, it assumes converting from EBCDIC to ASCII and then to MF data format. This manual is probably where JM got the idea that this was used as negative 3, though he said
    it was for Microfocus, which I am sure is wrong.

    Interesting as that may be, it still doesn't explain how 0x53 is
    supposedly '3' with an extra bit set to indicate negative...

    Because, with Fujitsu, signed numeric positive is 0x43. One more bit makes it 0x53 indicating negative (as explained earlier).

    With Fujitsu it is only UNsigned numeric that has 0x33 in the position that would otherwise be the sign.
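
    A small Python sketch of that Fujitsu convention as described above (the function name and `signed` parameter are mine): unsigned fields end in a plain digit (zone x3), signed-positive in zone x4, signed-negative in zone x5, so negative really is positive with one more bit set.

```python
# Sketch of the Fujitsu ASCII DISPLAY convention described above:
# last-byte zone nybble: x3 = unsigned, x4 = signed positive,
# x5 = signed negative. Earlier bytes are plain ASCII digits.

def fujitsu_display_to_int(field: bytes, signed: bool) -> int:
    last = field[-1]
    zone, digit = last >> 4, last & 0x0F
    value = int(field[:-1]) * 10 + digit      # leading digits + last digit
    if signed and zone == 0x5:
        return -value
    return value

print(fujitsu_display_to_int(b'\x31\x32\x33\x44', signed=True))   # 1234
print(fujitsu_display_to_int(b'\x31\x32\x33\x54', signed=True))   # -1234
print(fujitsu_display_to_int(b'\x31\x32\x33\x34', signed=False))  # 1234
```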


    And of
    course 'L' is *never* a valid representation of -3 in the ASCII codeset (which was obviously the assumption of your post, given that you
    represented '123' as 31 32 33 instead of F1 F2 F3 as they would be in EBCDIC).

    It's not exactly arcane knowledge that converting numeric fields from
    EBCDIC to ASCII, or the other way around, works much more readily with separate signs than it does with embedded signs.

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Robert Wessel@robertwessel2@yahoo.com to comp.lang.cobol on Fri Jun 22 14:09:36 2018
    From Newsgroup: comp.lang.cobol

    On Fri, 22 Jun 2018 12:59:35 -0000 (UTC), Doug Miller <doug_at_milmac_dot_com@example.com> wrote:

    On Thu, 21 Jun 2018 14:11:52 -0700, Richard wrote:

    On Friday, June 22, 2018 at 5:28:40 AM UTC+12, Doug Miller wrote:
    Richard <riplin@azonic.co.nz> wrote in
    news:74bd04a6-29a8-419a-9042-3abd7a54463c@googlegroups.com:

    On Thursday, June 21, 2018 at 3:13:44 AM UTC+12, Doug Miller wrote:
    Richard <riplin@azonic.co.nz> wrote in
    news:51247ae4-5f99-463f-ae14-de6207c95bd0@googlegroups.com:


    HEX "31 32 53" (or 4C) are the ASCII values for the decimal
    characters '1', '2' and '3' with an extra bit set for negative.

    No, they are not.

    The third byte, if positive, would be 33 = 0011 0011

    *Neither* 4C nor 53 is 33 "with an extra bit set for negative."

    That is what happens when I take someone's word for what the codes are
    instead of testing it myself. Microfocus does indicate "with an extra
    bit set for negative.". According to Microfocus Compatibility Guide
    the representation for +123 in a numeric display S9(3) field is x31
    x32 x33 and for -123 is x31 x32 x73 (not x53).

    0x73, I'll buy. 0x53, not so much.

    Fujitsu has a different sign byte depending on whether the field is
    signed or not:

    1234  PIC 9(4)   x31 x32 x33 x34
    1234  PIC S9(4)  x31 x32 x33 x44

    Really? It changes the high-order nybble of the last byte from 0011 to
    0100, flipping *three*
    bits to indicate a positive sign? Color me just a little bit skeptical
    here.

    -1234 PIC S9(4) x31 x32 x33 x54

    And a negative sign is indicated by flipping *two* bits? (0011.... ->
    0101....)

    So for an S9(x) DISPLAY it does have "an extra bit set for negative."

    Setting an extra bit won't change 0x33 into 0x53. That's a change of
    *two* bits in the high- order nybble, from 0011 to 0101.

    RM COBOL uses 'P' to 'Y' in the last byte to indicate negative

    You sure about that? The range P..Y is 0x50..0x59; changing
    0x3-whatever to 0x5-whatever is a change of *two* bits, not one.

    which is where the x4C seems to come from.

    Except that 0x4C is L...

    I have located in my stash of old manuals and stuff "Structured COBOL:
    Fundamentals and Style", Welburn and Price 1995, which uses RM COBOL-85
    and even has a student edition compiler in an accompanying book.

    This book has a table that indicates how negative numbers are stored in
    a DISPLAY NUMERIC field on an IBM Mainframe. The sign byte is from '}'
    to 'R' which in EBCDIC is xD0 to xD9. When converted as characters to
    ASCII a negative 3 would be 'L' which is x4C in ASCII. This is probably
    why the MF Converting manual has x4C in the table, it assumes converting
    from EBCDIC to ASCII and then to MF data format. This manual is probably
    where JM got the idea that this was used as negative 3, though he said
    it was for Microfocus, which I am sure is wrong.

    Interesting as that may be, it still doesn't explain how 0x53 is
    supposedly '3' with an extra bit set to indicate negative... And of
    course 'L' is *never* a valid representation of -3 in the ASCII codeset (which was obviously the assumption of your post, given that you
    represented '123' as 31 32 33 instead of F1 F2 F3 as they would be in EBCDIC).

    It's not exactly arcane knowledge that converting numeric fields from
    EBCDIC to ASCII, or the other way around, works much more readily with separate signs than it does with embedded signs.


    I haven't been following this too closely, but IIRC, at least one
    compiler* I've used had the option to treat translated EBCDIC
    overpunches as what they'd mean on an IBM mainframe. So after you
    downloaded (in text mode) F1 F2 D3 ("12L"), and got ASCII 31 32 4C
    ("12L"), it would still be interpreted as -123.


    *Perhaps the Realia compiler?
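
    A Python sketch of that interpretation (the function name and tables are mine; the characters come from translating the EBCDIC overpunch bytes xC0..xC9 and xD0..xD9 to ASCII as text):

```python
# Sketch: interpret a text-mode-downloaded field whose last character is
# a translated EBCDIC sign overpunch. After EBCDIC->ASCII translation,
# xC0..xC9 (positive) appear as '{','A'..'I' and xD0..xD9 (negative)
# appear as '}','J'..'R'.

POSITIVE = {c: i for i, c in enumerate('{ABCDEFGHI')}
NEGATIVE = {c: i for i, c in enumerate('}JKLMNOPQR')}

def translated_overpunch_to_int(text: str) -> int:
    body, last = text[:-1], text[-1]
    if last in NEGATIVE:
        return -int(body + str(NEGATIVE[last]))
    if last in POSITIVE:
        return int(body + str(POSITIVE[last]))
    return int(text)          # plain trailing digit: positive

print(translated_overpunch_to_int('12L'))  # -123
print(translated_overpunch_to_int('12C'))  # 123
```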
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From john_maybury@john_maybury@hotmail.com to comp.lang.cobol on Sat Jun 23 05:59:26 2018
    From Newsgroup: comp.lang.cobol

    In the version of the Fujitsu compiler I use at work, there are two
    options for embedded signs - IBM style where the negative digits are }JKLM... and Micro Focus style where they are pqrs... (lower case).

    The default is IBM style, the DECIMAL(MF) compiler option selects the alternative.

    I'm not sure whether the DECIMAL(MF) option is available in all Fujitsu compiler
    versions.
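
    A quick Python sketch of those two negative-digit tables side by side (the character sets are taken from the hex values quoted earlier in the thread; the variable names are mine):

```python
# The two embedded-sign styles described above, for digits 0-9.
# IBM style: EBCDIC xD0..xD9 overpunch, seen as these characters after
# EBCDIC->ASCII text translation. MF style: ASCII digit with x40 set.

IBM_NEGATIVE = '}JKLMNOPQR'
MF_NEGATIVE = ''.join(chr(0x30 + d + 0x40) for d in range(10))  # 'p'..'y'

for d in range(10):
    print('-%d: IBM %r  MF %r' % (d, IBM_NEGATIVE[d], MF_NEGATIVE[d]))
```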
    --- Synchronet 3.20a-Linux NewsLink 1.114