• OT: Re: Oh d-ai-ry d-ai-ry me

    From Vir Campestris@vir.campestris@invalid.invalid to comp.lang.c on Thu Jun 5 17:23:10 2025
    From Newsgroup: comp.lang.c

    On 03/06/2025 17:06, Ar Rakin wrote:
    This is the reason why I tell people who write code that AI/LLMs can't
    ever replace them like this.  AI tools can only be a tool that you use.
    To write code, you'd still need to know something by yourself at the end
    of the day.

    As the sage said, never is a very long time.

    I started my career writing assembler. Nobody uses it any more for
    system work - it's much easier and cheaper to use a higher level
    language. Those skills I learned back then will never be performed by an
    API. But nor are they performed by humans any more (at least on that
    obsolete ISA!)

    In recent years I used C++. I understand a lot of code now is written
    in languages like Python. You could regard them merely as a detailed spec
    for the processes you need the computer to carry out.

    Get that spec right, and the computer behaves. Perhaps one day the AI
    will be able to read a spec in English - but it will probably have to
    be lawyer's English to avoid ambiguities.

    Or maybe we'll have an AI that is truly intelligent...

    Andy
    --
    Do not listen to rumour, but, if you do, do not believe it.
    Gandhi.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.lang.c on Thu Jun 5 16:28:31 2025
    From Newsgroup: comp.lang.c

    Vir Campestris <vir.campestris@invalid.invalid> writes:
    On 03/06/2025 17:06, Ar Rakin wrote:
    This is the reason why I tell people who write code that AI/LLMs can't
    ever replace them like this.  AI tools can only be a tool that you use.
    To write code, you'd still need to know something by yourself at the end
    of the day.

    As the sage said, never is a very long time.

    I started my career writing assembler. Nobody uses it any more for
    system work - it's much easier and cheaper to use a higher level
    language. Those skills I learned back then will never be performed by an
    API. But nor are they performed by humans any more (at least on that
    obsolete ISA!)

    In recent years I used C++. I understand a lot of code now is written in
    languages like Python. You could regard them merely as a detailed spec
    for the processes you need the computer to carry out.

    Get that spec right, and the computer behaves. Perhaps one day the AI
    will be able to read a spec in English - but it will probably have to be
    lawyer's English to avoid ambiguities.

    Or maybe we'll have an AI that is truly intelligent...

    I hope not. All the foibles of the human creators with no off switch.

    Don't call it AI, it's just simple machine learning and pattern matching;
    for true intelligence, self-awareness is a prerequisite. I don't think
    humanity really wants to go there.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ar Rakin@rakinar2@onesoftnet.eu.org to comp.lang.c on Fri Jun 6 00:39:12 2025
    From Newsgroup: comp.lang.c

    On 6/5/25 10:23 PM, Vir Campestris wrote:
    On 03/06/2025 17:06, Ar Rakin wrote:
    This is the reason why I tell people who write code that AI/LLMs can't
    ever replace them like this.  AI tools can only be a tool that you
    use. To write code, you'd still need to know something by yourself at
    the end of the day.

    As the sage said, never is a very long time.

    I started my career writing assembler. Nobody uses it any more for
    system work - it's much easier and cheaper to use a higher level
    language. Those skills I learned back then will never be performed by
    an API. But nor are they performed by humans any more (at least on that
    obsolete ISA!)

    Saying *nobody* uses assembler today would be wrong - many low level
    projects still need to write assembly code. They may not be as relevant
    as before, but it is still used. For example, compiler developers still
    need to have a very good understanding of the assembly languages of the
    systems they target.

    In recent years I used C++. I understand a lot of code now is written
    in languages like Python. You could regard them merely as a detailed spec
    for the processes you need the computer to carry out.

    Get that spec right, and the computer behaves. Perhaps one day the AI
    will be able to read a spec in English - but it will probably have to
    be lawyer's English to avoid ambiguities.

    Or maybe we'll have an AI that is truly intelligent...

    Andy


    Maybe. Nothing is impossible, but I wouldn't like the idea of AI/LLMs
    taking over. Wouldn't want to see another THERAC-25 incident, caused
    by AI/LLMs writing code.
    --
    Rakin
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Thu Jun 5 20:41:55 2025
    From Newsgroup: comp.lang.c

    On 2025-06-05, Vir Campestris <vir.campestris@invalid.invalid> wrote:
    On 03/06/2025 17:06, Ar Rakin wrote:
    This is the reason why I tell people who write code that AI/LLMs can't
    ever replace them like this.  AI tools can only be a tool that you use.
    To write code, you'd still need to know something by yourself at the end
    of the day.

    As the sage said, never is a very long time.

    Well, always/forever is a very long time. Never is its complement, and
    therefore extremely short: it contains no time at all.

    A fact that has never been true has been true for 0.000000 femtoseconds.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.lang.c on Thu Jun 5 14:12:50 2025
    From Newsgroup: comp.lang.c

    On 6/5/2025 11:39 AM, Ar Rakin wrote:
    On 6/5/25 10:23 PM, Vir Campestris wrote:
    On 03/06/2025 17:06, Ar Rakin wrote:
    This is the reason why I tell people who write code that AI/LLMs
    can't ever replace them like this.  AI tools can only be a tool that
    you use. To write code, you'd still need to know something by
    yourself at the end of the day.

    As the sage said, never is a very long time.

    I started my career writing assembler. Nobody uses it any more for
    system work - it's much easier and cheaper to use a higher level
    language. Those skills I learned back then will never be performed by
    an API. But nor are they performed by humans any more (at least on
    that obsolete ISA!)

    Saying *nobody* uses assembler today would be wrong - many low level
    projects still need to write assembly code.  They may not be as
    relevant as before, but it is still used.  For example, compiler
    developers still need to have a very good understanding of the assembly
    languages of the systems they target.

    Well, given that C11 and C++11 handle atomic operations and membars
    (DWCAS aside for a moment), I moved a little bit away from asm.
    Fwiw, here is some of my old code pre-C11/C++11:


    https://web.archive.org/web/20060214112345/http://appcore.home.comcast.net/appcore/src/cpu/i686/ac_i686_gcc_asm.html

    :^)



    In recent years I used C++. I understand a lot of code now is written
    in languages like Python. You could regard them merely as a detailed
    spec for the processes you need the computer to carry out.

    Get that spec right, and the computer behaves. Perhaps one day the AI
    will be able to read a spec in English - but it will probably have to
    be lawyer's English to avoid ambiguities.

    Or maybe we'll have an AI that is truly intelligent...

    Andy


    Maybe.  Nothing is impossible, but I wouldn't like the idea of AI/LLMs
    taking over.  Wouldn't want to see another THERAC-25 incident, caused
    by AI/LLMs writing code.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Anton Shepelev@anton.txt@g{oogle}mail.com to comp.lang.c on Mon Jun 9 14:21:55 2025
    From Newsgroup: comp.lang.c

    Richard Heathfield:

    I asked: "Write a program that is valid C90 but invalid
    C99."

    ChatGPT said:

    Certainly! To illustrate this, I'll write a C program that
    is valid in C90 but invalid in C99.

    This AI cannot speak English and answer questions as they
    are asked:

    1. You did not ask it /whether/ it were possible to write
    such a program, so its answer "Certainly" either makes
    no sense, or means agreement to fulfil your request.

    2. If it means agreement, the following sentence makes no
    sense, for what is it going to illustrate (if not the
    possibility of such a program)?

    3. And then it concludes with a blatant tautology,
    promising to "illustrate" the writing of a C90-but-not-
    C99 program, by writing a C90-but-not-C99 program.

    Its diction is like that of a schoolchild taught to start
    an answer by repeating the question, but not understanding
    how to do it.
    --
    () ascii ribbon campaign -- against html e-mail
    /\ www.asciiribbon.org -- against proprietary attachments
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Roberto@dash@dominus.net to comp.lang.c on Mon Jun 9 15:13:31 2025
    From Newsgroup: comp.lang.c

    After serious thinking Anton Shepelev wrote :
    Richard Heathfield:

    I asked: "Write a program that is valid C90 but invalid
    C99."

    ChatGPT said:

    Certainly! To illustrate this, I'll write a C program that
    is valid in C90 but invalid in C99.

    This AI cannot speak English and answer questions as they
    are asked:

    1. You did not ask it /whether/ it were possible to write
    such a program, so its answer "Certainly" either makes
    no sense, or means agreement to fulfil your request.

    2. If it means agreement, the following sentence makes no
    sense, for what is it going to illustrate (if not the
    possibility of such a program)?

    3. And then it concludes with a blatant tautology,
    promising to "illustrate" the writing of a C90-but-not-
    C99 program, by writing a C90-but-not-C99 program.

    Its diction is like that of a schoolchild taught to start
    an answer by repeating the question, but not understanding
    how to do it.

    + 1

    Thanks to underlining it.
    --
    Roberto
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Roberto@dash@dominus.net to comp.lang.c on Mon Jun 9 15:17:17 2025
    From Newsgroup: comp.lang.c

    Roberto submitted this idea :
    After serious thinking Anton Shepelev wrote :
    Richard Heathfield:

    I asked: "Write a program that is valid C90 but invalid
    C99."

    ChatGPT said:

    Certainly! To illustrate this, I'll write a C program that
    is valid in C90 but invalid in C99.

    This AI cannot speak English and answer questions as they
    are asked:

    1. You did not ask it /whether/ it were possible to write
    such a program, so its answer "Certainly" either makes
    no sense, or means agreement to fulfil your request.

    2. If it means agreement, the following sentence makes no
    sense, for what is it going to illustrate (if not the
    possibility of such a program)?

    3. And then it concludes with a blatant tautology,
    promising to "illustrate" the writing of a C90-but-not-
    C99 program, by writing a C90-but-not-C99 program.

    Its diction is like that of a schoolchild taught to start
    an answer by repeating the question, but not understanding
    how to do it.

    + 1

    Thanks to underlining it.

    Thanks _for_ underlining it.
    --
    Roberto
    --- Synchronet 3.21a-Linux NewsLink 1.2