• Makefile or not?

    From pozz@pozzugno@gmail.com to comp.arch.embedded on Mon Dec 3 09:18:11 2018
    From Newsgroup: comp.arch.embedded

    What do you really use for embedded projects? Do you use "standard"
    makefile or do you rely on IDE functionalities?

    Nowadays every MCU manufacturer provides an IDE, mostly for free,
    usually based on Eclipse (Atmel Studio and Microchip are probably the
    most important exceptions).
    Anyway, most of them use arm-gcc as the compiler.

    I usually try to compile the same project for the embedded target and
    the development machine, so I can speed up development and debugging.
    I usually use the native IDE from the manufacturer of the target and
    Code::Blocks (with mingw) for compilation on the development machine.
    So I have two IDEs for a single project.

    I'm thinking of finally moving to Makefiles; however, I don't know if
    that is a good and modern choice. Do you use better alternatives?

    My major reason to move from IDE compilation to a Makefile is testing.
    I would like to start adding unit tests to my project. I understand a
    good solution is to link all the object files of the production code
    into a static library. In this way it is very simple to replace
    production code with testing (mocking) code, simply by prepending the
    testing object files to the static library of production code during
    linking.

    I think these kinds of things can be managed with a Makefile instead
    of IDE compilation.

    What do you think?
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From David Brown@david.brown@hesbynett.no to comp.arch.embedded on Mon Dec 3 11:06:33 2018
    From Newsgroup: comp.arch.embedded

    On 03/12/18 09:18, pozz wrote:
    What do you really use for embedded projects? Do you use "standard"
    makefile or do you rely on IDE functionalities?

    Nowadays every MCU manufacturers give IDE, mostly for free, usually
    based on Eclipse (Atmel Studio and Microchip are probably the most
    important exception).
    Anyway most of them use arm gcc as the compiler.

    I usually try to compile the same project for the embedded target and
    the development machine, so I can speed up development and debugging. I usually use the native IDE from the manufacturer of the target and Code::Blocks (with mingw) for compilation on the development machine.
    So I have two IDEs for a single project.

    I'm thinking to finally move to Makefile, however I don't know if it is
    a good and modern choice. Do you use better alternatives?


    I sometimes use the IDE project management to start with, or on very
    small projects. But for anything serious, I always use makefiles. I
    see it as important to separate the production build process from the
    development - I need to know that I can always pull up the source code
    for a project, do a "build", and get a bit-perfect binary image that is
    exactly the same as last time. This must work on different machines,
    preferably different OS's, and it must work over time. (My record is
    rebuilding a project that was a touch over 20 years old, and getting
    the same binary.)

    This means that the makefile specifies exactly which build toolchain
    (compiler, linker, library, etc.) are used - and that does not change
    during a project's lifetime, without very good reason.

    The IDE, and debugger, however, may change - there I will often use
    newer versions with more features than the original version. And
    sometimes I might use a lighter editor for a small change, rather than
    the full IDE. So IDE version and build tools version are independent.

    With well-designed makefiles, you can have different targets for
    different purposes. "make bin" for making the embedded binary, "make
    pc" for making the PC version, "make tests" for running the test code on
    the pc, and so on.
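
    A very rough sketch of what that top level can look like - everything
    here (the CROSS prefix, the build directories, the "all" target being
    delegated to) is a placeholder, not from any real project, and recipe
    lines must of course start with a hard tab:

        # Assumed cross-toolchain prefix (placeholder).
        CROSS := arm-none-eabi-

        .PHONY: bin pc tests

        # Embedded binary.
        bin:
                $(MAKE) all CC=$(CROSS)gcc BUILD=build/target

        # Native build for the PC.
        pc:
                $(MAKE) all CC=gcc BUILD=build/pc

        # Build the PC version, then run the test executable.
        tests: pc
                ./build/pc/test_runner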


    My major reason to move from IDE compilation to Makefile is the test. I
    would start adding unit testing to my project. I understood a good
    solution is to link all the object files of the production code to a
    static library. In this way it will be very simple to replace production
    code with testing (mocking) code, simple prepending the testing oject
    files to static library of production code during linking.


    I would not bother with that. I would have different variations in the
    build handled in different build tree directories.

    I think these type of things can be managed with Makefile instead of IDE compilation.

    What do you think?

    It can /all/ be managed from make.

    Also, a well-composed makefile is more efficient than an IDE project
    manager, IME. When you use Eclipse to do a build, it goes through each
    file to calculate the dependencies - so that you re-compile all the
    files that might be affected by the last changes, but not more than
    that. But it does this dependency calculation anew each time. With
    make, you can arrange to generate dependency files using gcc, and these dependency files get updated only when needed. This can save
    significant time in a build when you have a lot of files.
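
    The usual pattern is roughly this (a sketch assuming gcc and GNU make,
    with $(OBJS), $(CFLAGS) and $(BUILD) defined elsewhere; the recipe
    line needs a hard tab):

        # -MMD writes a .d file next to each object (system headers
        # skipped); -MP adds phony targets so a deleted header does not
        # break the build.
        DEPFLAGS = -MMD -MP

        $(BUILD)/%.o: %.c
                $(CC) $(CFLAGS) $(DEPFLAGS) -c $< -o $@

        # Pull in whatever dependency files earlier builds produced.
        -include $(OBJS:.o=.d)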


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From pozz@pozzugno@gmail.com to comp.arch.embedded on Mon Dec 3 12:13:54 2018
    From Newsgroup: comp.arch.embedded

    Il 03/12/2018 11:06, David Brown ha scritto:
    On 03/12/18 09:18, pozz wrote:
    What do you really use for embedded projects? Do you use "standard"
    makefile or do you rely on IDE functionalities?

    Nowadays every MCU manufacturers give IDE, mostly for free, usually
    based on Eclipse (Atmel Studio and Microchip are probably the most
    important exception).
    Anyway most of them use arm gcc as the compiler.

    I usually try to compile the same project for the embedded target and
    the development machine, so I can speed up development and debugging. I
    usually use the native IDE from the manufacturer of the target and
    Code::Blocks (with mingw) for compilation on the development machine.
    So I have two IDEs for a single project.

    I'm thinking to finally move to Makefile, however I don't know if it is
    a good and modern choice. Do you use better alternatives?


    I sometimes use the IDE project management to start with, or on very
    small projects. But for anything serious, I always use makefiles. I
    see it as important to separate the production build process from the development - I need to know that I can always pull up the source code
    for a project, do a "build", and get a bit-perfect binary image that is exactly the same as last time. This must work on different machines, preferably different OS's, and it must work over time. (My record is rebuilding a project that was a touch over 20 years old, and getting the
    same binary.)

    This means that the makefile specifies exactly which build toolchain (compiler, linker, library, etc.) are used - and that does not change
    during a project's lifetime, without very good reason.

    The IDE, and debugger, however, may change - there I will often use
    newer versions with more features than the original version. And
    sometimes I might use a lighter editor for a small change, rather than
    the full IDE. So IDE version and build tools version are independent.

    With well-designed makefiles, you can have different targets for
    different purposes. "make bin" for making the embedded binary, "make
    pc" for making the PC version, "make tests" for running the test code on
    the pc, and so on.

    Fortunately modern IDEs separate the toolchain well from the IDE
    itself. Most manufacturers let us install the toolchain as a separate
    setup. I remember some years ago the scenario was different and the
    compiler was "included" in the IDE installation.

    However, the problem here isn't the compiler (toolchain), which
    nowadays is usually arm-gcc. The big issue is the libraries and
    includes that the manufacturer gives you to save some time writing
    peripheral drivers. I have to install the full IDE and copy the
    relevant headers and libraries into my folders.

    Another small issue is the linker script file that works like a charm in
    the IDE when you start a new project from the wizard.
    At least for me, it's very difficult to write a linker script from
    scratch. You need a deep understanding of the C libraries (newlib,
    redlib, ...) to write a correct linker script.
    My solution is to start with the IDE wizard and copy the generated
    linker script into my make-based project.


    My major reason to move from IDE compilation to Makefile is the test. I
    would start adding unit testing to my project. I understood a good
    solution is to link all the object files of the production code to a
    static library. In this way it will be very simple to replace production
    code with testing (mocking) code, simple prepending the testing oject
    files to static library of production code during linking.


    I would not bother with that. I would have different variations in the
    build handled in different build tree directories.

    Could you explain?


    I think these type of things can be managed with Makefile instead of IDE
    compilation.

    What do you think?

    It can /all/ be managed from make.

    Also, a well-composed makefile is more efficient than an IDE project
    manager, IME. When you use Eclipse to do a build, it goes through each
    file to calculate the dependencies - so that you re-compile all the
    files that might be affected by the last changes, but not more than
    that. But it does this dependency calculation anew each time. With
    make, you can arrange to generate dependency files using gcc, and these dependency files get updated only when needed. This can save
    significant time in a build when you have a lot of files.

    Yes, that's for sure!


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From David Brown@david.brown@hesbynett.no to comp.arch.embedded on Mon Dec 3 12:57:05 2018
    From Newsgroup: comp.arch.embedded

    On 03/12/18 12:13, pozz wrote:
    Il 03/12/2018 11:06, David Brown ha scritto:
    On 03/12/18 09:18, pozz wrote:
    What do you really use for embedded projects? Do you use "standard"
    makefile or do you rely on IDE functionalities?

    Nowadays every MCU manufacturers give IDE, mostly for free, usually
    based on Eclipse (Atmel Studio and Microchip are probably the most
    important exception).
    Anyway most of them use arm gcc as the compiler.

    I usually try to compile the same project for the embedded target and
    the development machine, so I can speed up development and debugging. I
    usually use the native IDE from the manufacturer of the target and
    Code::Blocks (with mingw) for compilation on the development machine.
    So I have two IDEs for a single project.

    I'm thinking to finally move to Makefile, however I don't know if it is
    a good and modern choice. Do you use better alternatives?


    I sometimes use the IDE project management to start with, or on very
    small projects. But for anything serious, I always use makefiles. I
    see it as important to separate the production build process from the
    development - I need to know that I can always pull up the source code
    for a project, do a "build", and get a bit-perfect binary image that is
    exactly the same as last time. This must work on different machines,
    preferably different OS's, and it must work over time. (My record is
    rebuilding a project that was a touch over 20 years old, and getting the
    same binary.)

    This means that the makefile specifies exactly which build toolchain
    (compiler, linker, library, etc.) are used - and that does not change
    during a project's lifetime, without very good reason.

    The IDE, and debugger, however, may change - there I will often use
    newer versions with more features than the original version. And
    sometimes I might use a lighter editor for a small change, rather than
    the full IDE. So IDE version and build tools version are independent.

    With well-designed makefiles, you can have different targets for
    different purposes. "make bin" for making the embedded binary, "make
    pc" for making the PC version, "make tests" for running the test code on
    the pc, and so on.

    Fortunately modern IDEs separate well the toolchain from the IDE itself.
    Most manufacturers let us install the toolchain as a separate setup. I remember some years ago the scenario was different and the compiler is "included" in the IDE installation.


    You can do that to some extent, yes - you can choose which toolchain
    to use. But your build process is still tied to the IDE - your choice
    of directories, compiler flags, and so on is all handled by the IDE.
    So you still need the IDE to control the build, and different versions
    of the IDE, or different IDEs, do not necessarily handle everything in
    the same way.

    However the problem here isn't the compiler (toolchain) that nowadays is usually arm-gcc. The big issue is with libraries and includes that the manufacturer give you to save some time in writing drivers of peripherals.
    I have to install the full IDE and copy the interesting headers and
    libraries in my folders.

    That's fine. Copy the headers, libraries, SDK files, whatever, into
    your project folder. Then push everything to your version control
    system. Make the source code independent of the SDK, the IDE, and
    other files - you have your toolchain (and you archive the zip/tarball
    of the gnu-arm-embedded release) and your project folder, and that is
    all you need for the build.
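
    In the makefile, that can be as blunt as naming the unpacked toolchain
    by absolute path - the path below is only an example of the idea, not
    a recommendation of a particular release:

        # Pin the exact toolchain: the archived gnu-arm-embedded release,
        # unpacked to a fixed location (example path only).
        TOOLDIR := /opt/gcc-arm-none-eabi-7-2017-q4-major/bin
        CC      := $(TOOLDIR)/arm-none-eabi-gcc
        OBJCOPY := $(TOOLDIR)/arm-none-eabi-objcopy

        # Nothing depends on whatever happens to be first in $PATH.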


    Another small issue is the linker script file that works like a charm in
    the IDE when you start a new project from the wizard.
    At least for me, it's very difficult to write a linker script from the scratch. You need to have a deeper understanding of the C libraries
    (newlib, redlib, ...) to write a correct linker script.
    My solution is to start with IDE wizard and copy the generated linker
    script in my make-based project.


    Again, that's fine. IDE's and their wizards are great for getting
    started. They are just not great for long-term stability of the tools.


    My major reason to move from IDE compilation to Makefile is the test. I
    would start adding unit testing to my project. I understood a good
    solution is to link all the object files of the production code to a
    static library. In this way it will be very simple to replace production
    code with testing (mocking) code, simple prepending the testing oject
    files to static library of production code during linking.


    I would not bother with that. I would have different variations in the
    build handled in different build tree directories.

    Could you explain?


    You have a tree something like this:

    Source tree:

    project / src / main
                  / drivers

    Build trees:

    project / build / target
                    / debug
                    / pctest

    Each build tree might have subtrees:

    project / build / target / obj  / main
                                    / drivers
    project / build / target / deps / main
                                    / drivers
    project / build / target / lst  / main
                                    / drivers

    And so on.

    Your build trees are independent. So there is no mix of object files
    built in the "target" directory for your final target board, or the
    "debug" directory for the version with debugging code enabled, or the
    version in "pctest" for the code running on the PC, or whatever other
    builds you have for your project.
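
    A sketch of how a makefile can keep those trees apart (GNU make
    assumed; VARIANT and the flag choices are placeholders, and the recipe
    lines need hard tabs):

        # One object/deps/listing tree per build variant, nothing shared.
        # VARIANT is "target", "debug" or "pctest".
        VARIANT ?= target
        OBJDIR  := build/$(VARIANT)/obj
        DEPDIR  := build/$(VARIANT)/deps
        LSTDIR  := build/$(VARIANT)/lst

        $(OBJDIR)/%.o: src/%.c
                @mkdir -p $(dir $@) $(DEPDIR)/$(*D) $(LSTDIR)/$(*D)
                $(CC) $(CFLAGS) -MMD -MF $(DEPDIR)/$*.d \
                      -Wa,-adhlns=$(LSTDIR)/$*.lst -c $< -o $@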



    I think these type of things can be managed with Makefile instead of IDE
    compilation.

    What do you think?

    It can /all/ be managed from make.

    Also, a well-composed makefile is more efficient than an IDE project
    manager, IME. When you use Eclipse to do a build, it goes through each
    file to calculate the dependencies - so that you re-compile all the
    files that might be affected by the last changes, but not more than
    that. But it does this dependency calculation anew each time. With
    make, you can arrange to generate dependency files using gcc, and these
    dependency files get updated only when needed. This can save
    significant time in a build when you have a lot of files.

    Yes, this is sure!


    Of course, if build times are important, you drop Windows and use Linux,
    and get a two to four-fold increase in build speed on similar hardware.
    And then you discover ccache on Linux and get another leap in speed.
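
    (Wiring ccache in is typically a one-line change in the makefile,
    assuming ccache is installed:

        # Route compiles through ccache; drop the prefix if it is absent.
        CC  := ccache arm-none-eabi-gcc
        CXX := ccache arm-none-eabi-g++

    and nothing else in the build needs to change.)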

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Grant Edwards@invalid@invalid.invalid to comp.arch.embedded on Mon Dec 3 15:30:41 2018
    From Newsgroup: comp.arch.embedded

    On 2018-12-03, pozz <pozzugno@gmail.com> wrote:

    What do you really use for embedded projects? Do you use "standard"
    makefile or do you rely on IDE functionalities?

    Gnu makefiles.

    Nowadays every MCU manufacturers give IDE, mostly for free, usually
    based on Eclipse (Atmel Studio and Microchip are probably the most
    important exception).

    And they're almost all timewasting piles of...

    Anyway most of them use arm gcc as the compiler.

    If you're going to use an IDE, it seems like you should pick one and
    stick with it so that you get _good_ at it.

    I use Emacs, makefiles, and meld.

    I usually try to compile the same project for the embedded target and
    the development machine, so I can speed up development and debugging. I usually use the native IDE from the manufacturer of the target and Code::Blocks (with mingw) for compilation on the development machine.
    So I have two IDEs for a single project.

    How awful.

    I'm thinking to finally move to Makefile, however I don't know if it is
    a good and modern choice. Do you use better alternatives?

    My major reason to move from IDE compilation to Makefile is the test. I would start adding unit testing to my project. I understood a good
    solution is to link all the object files of the production code to a
    static library. In this way it will be very simple to replace production code with testing (mocking) code, simple prepending the testing oject
    files to static library of production code during linking.

    I think these type of things can be managed with Makefile instead of
    IDE compilation.

    What do you think?

    I've tried IDEs. I've worked with others who use IDEs and watched
    them work, and compared it to how I work. It looks to me like IDEs
    are a tremendous waste of time.
    --
    Grant Edwards grant.b.edwards Yow! ... this must be what
    at it's like to be a COLLEGE
    gmail.com GRADUATE!!
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Grant Edwards@invalid@invalid.invalid to comp.arch.embedded on Mon Dec 3 15:39:27 2018
    From Newsgroup: comp.arch.embedded

    On 2018-12-03, David Brown <david.brown@hesbynett.no> wrote:

    I sometimes use the IDE project management to start with, or on very
    small projects. But for anything serious, I always use makefiles. I
    see it as important to separate the production build process from the development - I need to know that I can always pull up the source code
    for a project, do a "build", and get a bit-perfect binary image that is exactly the same as last time.

    It's impossible to overemphasize how important that is. Somebody should
    be able to check out the source tree and a few tools and then type a
    single command to build production firmware. And you need to be able
    to _automate_ that process.

    If building depends on an IDE, then there's always an intermediate
    step where a person has to sit in front of a PC for a week tweaking
    project settings to get the damn thing to build on _this_ computer
    rather than on _that_ computer.

    This must work on different machines,

    And in my experience, IDEs do not. The people I know who use Eclipse
    with some custom set of plugins spend days and days when they need to
    build on computer B instead of computer A. I just scp "build.sh" to
    the new machine and run it. It contains a handful of Subversion
    checkout commands and a "make". And I can do it remotely. From my
    phone if needed.

    preferably different OS's, and it must work over time.

    Yes! Simply upgrading the OS often seems to render an IDE incapable
    of building a project: another week of engineering time goes down the
    drain tweaking the "project settings" to get things "just right".
    --
    Grant Edwards grant.b.edwards Yow! JAPAN is a WONDERFUL
    at planet -- I wonder if we'll
    gmail.com ever reach their level of
    COMPARATIVE SHOPPING ...
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Theo Markettos@theom+news@chiark.greenend.org.uk to comp.arch.embedded on Mon Dec 3 15:49:31 2018
    From Newsgroup: comp.arch.embedded

    Grant Edwards <invalid@invalid.invalid> wrote:
    It impossible to overemphasize how important that is. Somebody should
    be able to check out the source tree and a few tools and then type a
    single command to build production firmware. And you need to be able
    to _automate_ that process.

    One approach is to put the tools into a VM or a container (eg Docker), so
    that when you want to build you pull the container and you get an identical build environment to the last time anyone built it.
    Also, your continuous integration system can run builds and tests in
    the same environment as you're developing on.

    Unfortunately vendors have a habit of shipping IDEs for Windows only, which makes this harder. It's not so much of a problem for the actual
    compiler - especially if that's GCC under the hood - but ancillary tools (eg configuration tools for peripherals, flash image builders, etc), which are sometimes not designed to be scripted.

    (AutoIt is my worst enemy here, but it has been the only way to get the job done in some cases)

    Decoupling your build from the vagaries of the IDE, even if you can trust
    that you'll always build on a fixed platform, is still a good thing - many
    IDEs still don't play nicely with version control, for example.

    Theo
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Phil Hobbs@pcdhSpamMeSenseless@electrooptical.net to comp.arch.embedded on Mon Dec 3 11:06:09 2018
    From Newsgroup: comp.arch.embedded

    On 12/3/18 3:18 AM, pozz wrote:
    What do you really use for embedded projects? Do you use "standard"
    makefile or do you rely on IDE functionalities?

    Nowadays every MCU manufacturers give IDE, mostly for free, usually
    based on Eclipse (Atmel Studio and Microchip are probably the most
    important exception).
    Anyway most of them use arm gcc as the compiler.

    I usually try to compile the same project for the embedded target and
    the development machine, so I can speed up development and debugging. I usually use the native IDE from the manufacturer of the target and Code::Blocks (with mingw) for compilation on the development machine.
    So I have two IDEs for a single project.

    I'm thinking to finally move to Makefile, however I don't know if it is
    a good and modern choice. Do you use better alternatives?

    My major reason to move from IDE compilation to Makefile is the test. I would start adding unit testing to my project. I understood a good
    solution is to link all the object files of the production code to a
    static library. In this way it will be very simple to replace production code with testing (mocking) code, simple prepending the testing oject
    files to static library of production code during linking.

    I think these type of things can be managed with Makefile instead of IDE compilation.

    What do you think?

    We use cmake for that--it allows unit testing on a PC, as you say, and
    also automates the process of finding libraries, e.g. for emulating peripherals.

    Cheers

    Phil Hobbs
    --
    Dr Philip C D Hobbs
    Principal Consultant
    ElectroOptical Innovations LLC / Hobbs ElectroOptics
    Optics, Electro-optics, Photonics, Analog Electronics
    Briarcliff Manor NY 10510

    http://electrooptical.net
    http://hobbs-eo.com

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Dave Nadler@drn@nadler.com to comp.arch.embedded on Mon Dec 3 09:41:41 2018
    From Newsgroup: comp.arch.embedded

    On Monday, December 3, 2018 at 10:49:36 AM UTC-5, Theo Markettos wrote:
    One approach is to put the tools into a VM or a container (eg Docker), so that when you want to build you pull the container and you get an identical build environment to the last time anyone built it.
    Also, your continuous integration system can run builds and tests in
    the same environment as you're developing on.
    Second that!
    We do development in, and deliver, VMs to customers now, so they are
    CERTAIN to receive exactly the 'used for production build' versions of
    every tool, library, and driver required for the JTAG gizmo, referenced
    components, etc, etc, etc. Especially important when some tools won't
    work under the latest version of Winbloze! Saves enormous headaches
    sometime down the road when an update must be made...
    Hope that helps,
    Best Regards, Dave
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From DJ Delorie@dj@delorie.com to comp.arch.embedded on Mon Dec 3 14:05:40 2018
    From Newsgroup: comp.arch.embedded


    Grant Edwards <invalid@invalid.invalid> writes:
    I use Emacs, makefiles, and meld.

    +1 on those. My memory isn't good enough any more to remember all the byzantine steps through an IDE to re-complete all the tasks my projects require.

    Especially since each MCU seems to have a *different* IDE with
    *different* procedures to forget...

    And that's assuming they run on Linux in the first place ;-)
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Grant Edwards@invalid@invalid.invalid to comp.arch.embedded on Mon Dec 3 19:36:50 2018
    From Newsgroup: comp.arch.embedded

    On 2018-12-03, DJ Delorie <dj@delorie.com> wrote:
    Grant Edwards <invalid@invalid.invalid> writes:
    I use Emacs, makefiles, and meld.

    +1 on those. My memory isn't good enough any more to remember all
    the byzantine steps through an IDE to re-complete all the tasks my
    projects require.

    Especially since each MCU seems to have a *different* IDE with
    *different* procedures to forget...

    And that's assuming they run on Linux in the first place ;-)

    The most important rule to remember is:

    Never, ever, use any software written or provided by the silicon
    vendor. Every time I've failed to obey that rule, I've regretted it.

    I've heard rumors that Intel at one time wrote a pretty good C
    compiler for x86.

    However, having used other development software from Intel, I find
    that impossible to believe. [Actually, Intel MDS-800 "blue boxes"
    weren't bad as long as you ran CP/M on them instead of ISIS.]

    And don't get me started on compilers and tools from TI, Motorola, or
    various others either...

    Some of them have put some effort into getting good Gnu GCC and
    binutils support for their processors, and that seems to produce good
    results. If only they had realized that's all they really needed to
    do in the _first_ place...
    --
    Grant Edwards grant.b.edwards Yow! Can you MAIL a BEAN
    at CAKE?
    gmail.com
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From David Brown@david.brown@hesbynett.no to comp.arch.embedded on Mon Dec 3 21:31:38 2018
    From Newsgroup: comp.arch.embedded

    On 03/12/2018 16:49, Theo Markettos wrote:
    Grant Edwards <invalid@invalid.invalid> wrote:
    It impossible to overemphasize how important that is. Somebody should
    be able to check out the source tree and a few tools and then type a
    single command to build production firmware. And you need to be able
    to _automate_ that process.

    One approach is to put the tools into a VM or a container (eg Docker), so that when you want to build you pull the container and you get an identical build environment to the last time anyone built it.

    That is possible, but often more than necessary. Set up your build
    sensibly, and it only depends on the one tree for the toolchain, and
    your source code tree. It should not depend on things like the versions
    of utility programs (make, sed, touch, etc.), environment variables, and
    that kind of thing.

    Sometimes, however, you can't avoid that - especially for Windows-based toolchains that store stuff in the registry and other odd places.

    Also, your continuous integration system can run builds and tests in
    the same environment as you're developing on.

    Unfortunately vendors have a habit of shipping IDEs for Windows only, which makes this harder.

    That is thankfully rare these days. There are exceptions, but most
    major vendors know that is a poor habit.

    It's not so much of a problem for the actual
    compiler - especially if that's GCC under the hood - but ancillary tools (eg configuration tools for peripherals, flash image builders, etc), which are sometimes not designed to be scripted.

    Yes, these are more likely to be an issue. Generally they are not
    needed for rebuilding the software - once you have run the wizards and
    similar tools, the job is done and the generated source can be
    preserved. But it can be an issue if you need to re-use the tools for
    dealing with changes to the setup.


    (AutoIt is my worst enemy here, but it has been the only way to get the job done in some cases)

    Decoupling your build from the vagaries of the IDE, even if you can trust that you'll always build on a fixed platform, is still a good thing - many IDEs still don't play nicely with version control, for example.


    Often IDE's have good integration with version control for the source
    files, but can be poor for the project settings and other IDE files.
    Typically that sort of thing is held in hideous XML files with
    thoughtless line breaks, making it very difficult to do comparisons and
    change management.

    Theo


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From David Brown@david.brown@hesbynett.no to comp.arch.embedded on Mon Dec 3 21:34:01 2018
    From Newsgroup: comp.arch.embedded

    On 03/12/2018 16:30, Grant Edwards wrote:


    I've tried IDEs. I've worked with others who use IDEs and watched
    them work, and compared it to how I work. It looks to me like IDEs
    are a tremendous waste of time.


    IDE's are extremely useful tools - as long as you use them for their strengths, and not their weaknesses. I use "make" for my builds, but I
    use an IDE for any serious development work. A good quality editor,
    with syntax highlighting, navigation, as-you-type checking, integration
    with errors and warnings from the builds - it is invaluable as a
    development tool.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Jacob Sparre Andersen@jacob@jacob-sparre.dk to comp.arch.embedded on Mon Dec 3 22:05:59 2018
    From Newsgroup: comp.arch.embedded

    Phil Hobbs wrote:

    We use cmake for that--it allows unit testing on a PC, as you say, and
    also automates the process of finding libraries, e.g. for emulating peripherals.

    How does it automate finding emulation libraries? That sounds like a
    cool feature.

    We use GNU Makefiles, but we handle the matching up of emulation
    libraries with the real thing by hand. We then typically use different
    source directories for emulation libraries and actual drivers.
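
    The matching up is then just a variable in the makefile - a simplified
    sketch, with made-up directory names:

        # Pick the driver sources per build flavour ("emulation" and
        # "drivers" are illustrative names only).
        ifeq ($(TARGET),pc)
            DRIVER_DIR := emulation
        else
            DRIVER_DIR := drivers
        endif

        SRCS := $(wildcard src/main/*.c) $(wildcard src/$(DRIVER_DIR)/*.c)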

    Greetings,

    Jacob
    --
    A password should be like a toothbrush. Use it every day;
    change it regularly; and DON'T share it with friends.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From pozz@pozzugno@gmail.com to comp.arch.embedded on Mon Dec 3 23:29:45 2018
    From Newsgroup: comp.arch.embedded

    Il 03/12/2018 12:57, David Brown ha scritto:> On 03/12/18 12:13, pozz wrote:
    Il 03/12/2018 11:06, David Brown ha scritto:
    On 03/12/18 09:18, pozz wrote:
    What do you really use for embedded projects? Do you use "standard"
    makefile or do you rely on IDE functionalities?

    Nowadays every MCU manufacturers give IDE, mostly for free, usually
    based on Eclipse (Atmel Studio and Microchip are probably the most
    important exception).
    Anyway most of them use arm gcc as the compiler.

    I usually try to compile the same project for the embedded target and
    the development machine, so I can speed up development and
    debugging. I
    usually use the native IDE from the manufacturer of the target and
    Code::Blocks (with mingw) for compilation on the development machine.
    So I have two IDEs for a single project.

    I'm thinking to finally move to Makefile, however I don't know if
    it is
    a good and modern choice. Do you use better alternatives?


    I sometimes use the IDE project management to start with, or on very
    small projects. But for anything serious, I always use makefiles. I
    see it as important to separate the production build process from the
    development - I need to know that I can always pull up the source code
    for a project, do a "build", and get a bit-perfect binary image that is
    exactly the same as last time. This must work on different machines,
    preferably different OS's, and it must work over time. (My record is
    rebuilding a project that was a touch over 20 years old, and
    getting the
    same binary.)

    This means that the makefile specifies exactly which build toolchain
    (compiler, linker, library, etc.) are used - and that does not change
    during a project's lifetime, without very good reason.

    The IDE, and debugger, however, may change - there I will often use
    newer versions with more features than the original version. And
    sometimes I might use a lighter editor for a small change, rather than
    the full IDE. So IDE version and build tools version are independent.

    With well-designed makefiles, you can have different targets for
    different purposes. "make bin" for making the embedded binary, "make
    pc" for making the PC version, "make tests" for running the test
    code on
    the pc, and so on.

    Fortunately modern IDEs separate well the toolchain from the IDE itself.
    Most manufacturers let us install the toolchain as a separate setup. I
    remember some years ago the scenario was different and the compiler is
    "included" in the IDE installation.


    You can do that do some extent, yes - you can choose which toolchain to
    use. But your build process is still tied to the IDE - your choice of directories, compiler flags, and so on is all handled by the IDE. So
    you still need the IDE to control the build, and different versions of
    the IDE, or different IDEs, do not necessarily handle everything in the
    same way.

    However the problem here isn't the compiler (toolchain) that nowadays is
    usually arm-gcc. The big issue is with libraries and includes that the
    manufacturer give you to save some time in writing drivers of
    peripherals.
    I have to install the full IDE and copy the interesting headers and
    libraries in my folders.

    That's fine. Copy the headers, libraries, SDK files, whatever, into
    your project folder. Then push everything to your version control
    system. Make the source code independent of the SDK, the IDE, and other files - you have your toolchain (and you archive the zip/tarball of the gnu-arm-embedded release) and your project folder, and that is all you
    need for the build.


    Another small issue is the linker script file that works like a charm in
    the IDE when you start a new project from the wizard.
    At least for me, it's very difficult to write a linker script from the
    scratch. You need to have a deeper understanding of the C libraries
    (newlib, redlib, ...) to write a correct linker script.
    My solution is to start with IDE wizard and copy the generated linker
    script in my make-based project.


    Again, that's fine. IDE's and their wizards are great for getting
    started. They are just not great for long-term stability of the tools.


    My major reason to move from IDE compilation to Makefile is the
    test. I
    would start adding unit testing to my project. I understood a good
    solution is to link all the object files of the production code to a
    static library. In this way it will be very simple to replace
    production
    code with testing (mocking) code, simple prepending the testing oject
    files to static library of production code during linking.


    I would not bother with that. I would have different variations in the
    build handled in different build tree directories.

    Could you explain?


    You have a tree something like this:

    Source tree:

    project / src / main
                  / drivers

    Build trees:

    project / build / target
                    / debug
                    / pctest

    Each build tree might have subtrees:

    project / build / target / obj  / main
                                    / drivers
    project / build / target / deps / main
                                    / drivers
    project / build / target / lst  / main
                                    / drivers

    And so on.

    Your build trees are independent. So there is no mix of object files
    built in the "target" directory for your final target board, or the
    "debug" directory for the version with debugging code enabled, or the version in "pctest" for the code running on the PC, or whatever other
    builds you have for your project.

    Ok, I got your point, and I usually arrange everything similarly to
    your description (even if I put .o, .d and .lst in the same
    target-dependent directory). I also have to admit that all major IDEs
    nowadays arrange output files in this manner.

    Anyway testing is difficult, at least for me.

    Suppose you have a simple project with three source files: main.c,
    modh.c and modl.c (of course you have modh.h and modl.h).

    Now you want to create a unit test for the modh module, which depends
    on modl. During the test, modl should be replaced with a dummy module,
    a mock object. What is your approach?

    In project/tests I create a test_modh.c source file that should be
    linked against modh.o (the original production code) and
    project/tests/modl.o, the mock object for modl.

    One approach could be to re-compile modh.c during the test build.
    However, it's difficult to replace the main modl.h with the mock
    object's modl.h from the test directory.
    modh.c will have a simple

    #include "modl.h"

    directive, and this will point to the modl.h in the *same* directory.
    I wouldn't be able to instruct the compiler to use the modl.h from the
    tests directory.
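
    One way around that, I think, would be to keep the production headers
    out of the source directory (for example in include/), because gcc
    resolves #include "..." against the including file's own directory
    before looking at any -I path. With that assumption, a sketch of the
    recompile approach (the recipe line starts with a tab):

        # Recompile modh.c for the test so that tests/modl.h shadows the
        # production header; this only works because modl.h is assumed
        # NOT to sit next to modh.c but in a separate include/ directory.
        tests/modh.o: src/modh.c
                $(CC) -Itests -Iinclude -c $< -o $@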

    Moreover, it could be useful to test the same object code that is
    generated for production. I found a good approach: the production code
    is all compiled into a static library, libproduct.a, and the tests are
    linked against that static library.
    The following command, run in the project/tests/ folder

    gcc test_modh.o modl.o libproduct.a -o test_modh.exe

    should generate a test_modh.exe with the mock object for modl and the
    *same* modh object code as production.
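
    In Makefile terms this would be something like the following sketch
    (PROD_OBJS is a placeholder for the list of production objects, and
    recipe lines take a hard tab):

        # All production objects go into the library...
        libproduct.a: $(PROD_OBJS)
                $(AR) rcs $@ $^

        # ...and the test link names test_modh.o and the mock modl.o
        # explicitly, so the linker never pulls the real modl.o out of
        # the archive, while modh.o is taken unchanged from libproduct.a.
        test_modh.exe: test_modh.o modl.o libproduct.a
                $(CC) $^ -o $@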

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Clifford Heath@no.spam@please.net to comp.arch.embedded on Tue Dec 4 09:30:28 2018
    From Newsgroup: comp.arch.embedded

    On 4/12/18 6:36 am, Grant Edwards wrote:
    On 2018-12-03, DJ Delorie <dj@delorie.com> wrote:
    Grant Edwards <invalid@invalid.invalid> writes:
    I use Emacs, makefiles, and meld.

    +1 on those. My memory isn't good enough any more to remember all
    the byzantine steps through an IDE to re-complete all the tasks my
    projects require.

    Especially since each MCU seems to have a *different* IDE with
    *different* procedures to forget...

    And that's assuming they run on Linux in the first place ;-)

    The most important rule to remember is:

    Never, ever, use any software written or provided by the silicon
    vendor. Everytime I've failed to obey that rule, I've regretted it.

    [Difficult to apply that rule for an FPGA (except some Lattice parts).]

    Also, ARM seems to require that its licensees support CMSIS. This
    truly excellent idea seems to be terribly poorly thought-out and
    implemented. You get header files that pollute your program namespace
    with hundreds or thousands of symbols and macros with unintelligible
    names, many of which are manufacturer-specific and not even
    CMSIS-related.

    I know there's opencm3 which seems to be better, but still...

    Standard APIs like CMSIS need *very* disciplined design and rigorous management to minimise namespace pollution. Unfortunately we don't seem
    to be there, yet, unless I've missed something major.

    How do people handle this?

    Clifford Heath.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Grant Edwards@invalid@invalid.invalid to comp.arch.embedded on Mon Dec 3 23:19:28 2018
    From Newsgroup: comp.arch.embedded

    On 2018-12-03, Clifford Heath <no.spam@please.net> wrote:
    On 4/12/18 6:36 am, Grant Edwards wrote:

    Never, ever, use any software written or provided by the silicon
    vendor. Everytime I've failed to obey that rule, I've regretted it.

    [Difficult to apply that rule for an FPGA (except some Lattice parts).]

    True

    Also, ARM seems to require that its licensee support CMSIS. This truly excellent idea seems to be terribly poorly thought-out and implemented.

    You're putting that mildly. I recently developed some firmware for
    an NXP KL03 (Cortex-M0) part. It's a tiny part with something like
    8KB of flash and a couple hundred bytes of RAM. Of course NXP provides
    IDE-based "sample apps" that take up a gigabyte of disk space and
    include CMSIS (which is itself hundreds (if not thousands) of files
    that define APIs for all of the peripherals, comprising layer upon
    layer of macros calling macros calling functions calling functions
    full of other macros calling macros). Trying to build even an empty
    main() using the CMSIS libraries resulted in executable images several
    times larger than available flash.

    I finally gave up and tossed out everything except a couple of the
    lowest level include files that defined register addresses for the
    peripherals I cared about. Then I wrote my own functions to access
    peripherals and a Makefile to build the app.

    In the end, I cursed myself for forgetting the rule of "no silicon
    vendor software". It would have been faster to start with nothing and
    begin by typing register addresses from the user manual into a .h
    file.

    You get header files that pollute your program namespace with
    hundreds or thousands of symbols and macros with unintelligible
    names, many of which are manufacturer-specific not even
    CMSIS-related.

    Yep, CMSIS is spectacularly, mind-numbingly awful.

    I know there's opencm3 which seems to be better, but still...

    Standard APIs like CMSIS need *very* disciplined design and rigorous management to minimise namespace pollution. Unfortunately we don't seem
    to be there, yet, unless I've missed something major.

    How do people handle this?

    Lots of teeth-gritting and quiet swearing.
    --
    Grant Edwards grant.b.edwards Yow! Mr and Mrs PED, can I
    at borrow 26.7% of the RAYON
    gmail.com TEXTILE production of the
    INDONESIAN archipelago?
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Rob Gaddi@rgaddi@highlandtechnology.invalid to comp.arch.embedded on Mon Dec 3 15:38:06 2018
    From Newsgroup: comp.arch.embedded

    On 12/3/18 2:30 PM, Clifford Heath wrote:

    Also, ARM seems to require that its licensee support CMSIS. This truly excellent idea seems to be terribly poorly thought-out and implemented.
    You get header files that pollute your program namespace with hundreds
    or thousands of symbols and macros with unintelligible names, many of
    which are manufacturer-specific not even CMSIS-related.

    I know there's opencm3 which seems to be better, but still...

    Standard APIs like CMSIS need *very* disciplined design and rigorous management to minimise namespace pollution. Unfortunately we don't seem
    to be there, yet, unless I've missed something major.

    How do people handle this?


    Bourbon in general, though I have it on authority that a nice rum
    daiquiri is also quite effective.
    --
    Rob Gaddi, Highland Technology -- www.highlandtechnology.com
    Email address domain is currently out of order. See above to fix.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From raimond.dragomir@raimond.dragomir@gmail.com to comp.arch.embedded on Mon Dec 3 22:02:39 2018
    From Newsgroup: comp.arch.embedded

    marți, 4 decembrie 2018, 01:19:30 UTC+2, Grant Edwards a scris:
    On 2018-12-03, Clifford Heath <no.spam@please.net> wrote:
    On 4/12/18 6:36 am, Grant Edwards wrote:

    Never, ever, use any software written or provided by the silicon
    vendor. Everytime I've failed to obey that rule, I've regretted it.

    [Difficult to apply that rule for an FPGA (except some Lattice parts).]

    True

    Also, ARM seems to require that its licensee support CMSIS. This truly excellent idea seems to be terribly poorly thought-out and implemented.

    You're putting that mildly. I recently development some firmware for
    an NXP KL03 (Cortex-M0) part. It's a tiny part with something like
    8KB of flash and a coule hundred bytes of RAM. Of course NXP provides
    IDE based "sample apps" that take up a gigabyte of disk space and
    includes CMSIS (which itself is hundreds (if not thousands) of files
    which define APIs for all of the peripherals that comprise layer upon
    layer of macros calling macros calling functions calling functions
    full of other macros calling macros. Trying to build even an empty
    main() using the CMSIS libraries resulted in executable images several
    times larger than available flash.

    I finally gave up and tossed out everything except a couple of the
    lowest level include files that defined register addresses for the peripherals I cared about. Then I wrote my own functions to access peripherals and a Makefile to build the app.

    In the end, I cursed myself for forgetting the rule of "no silicon
    vendor software". It would have been faster to start with nothing and
    begin by typing register addresses from the user manual into a .h
    file.

    You get header files that pollute your program namespace with
    hundreds or thousands of symbols and macros with unintelligible
    names, many of which are manufacturer-specific not even
    CMSIS-related.

    Yep, CMSIS is spectacularly, mind-numingly awful.

    I know there's opencm3 which seems to be better, but still...

    Standard APIs like CMSIS need *very* disciplined design and rigorous management to minimise namespace pollution. Unfortunately we don't seem
    to be there, yet, unless I've missed something major.

    How do people handle this?

    Lots of teeth-gritting and quiet swearing.

    --
    Grant Edwards grant.b.edwards Yow! Mr and Mrs PED, can I
    at borrow 26.7% of the RAYON
    gmail.com TEXTILE production of the
    INDONESIAN archipelago?
    About CMSIS, it is wonderful if you use only the absolutely necessary
    files. I always extract from the gigabyte only the core_xxx.h files,
    and the single header file with the register definitions for the
    microcontroller.
    For example:
    core_cm0.h
    core_cmInstr.h
    core_cmFunc.h
    stm32f091xc.h
    That's simply the CMSIS for the STM32F091 chip in use.
    In fact, the core_xxx files are already the same for an architecture (cm0, cm3 etc.).
    You only need the .h file for your chip registers.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From David Brown@david.brown@hesbynett.no to comp.arch.embedded on Tue Dec 4 12:18:41 2018
    From Newsgroup: comp.arch.embedded

    On 04/12/18 00:19, Grant Edwards wrote:
    On 2018-12-03, Clifford Heath <no.spam@please.net> wrote:
    On 4/12/18 6:36 am, Grant Edwards wrote:

    Never, ever, use any software written or provided by the silicon
    vendor. Everytime I've failed to obey that rule, I've regretted it.

    [Difficult to apply that rule for an FPGA (except some Lattice parts).]

    True

    Also, ARM seems to require that its licensee support CMSIS. This truly
    excellent idea seems to be terribly poorly thought-out and implemented.

    You're putting that mildly. I recently development some firmware for
    an NXP KL03 (Cortex-M0) part. It's a tiny part with something like
    8KB of flash and a coule hundred bytes of RAM. Of course NXP provides
    IDE based "sample apps" that take up a gigabyte of disk space and
    includes CMSIS (which itself is hundreds (if not thousands) of files
    which define APIs for all of the peripherals that comprise layer upon
    layer of macros calling macros calling functions calling functions
    full of other macros calling macros. Trying to build even an empty
    main() using the CMSIS libraries resulted in executable images several
    times larger than available flash.

    I finally gave up and tossed out everything except a couple of the
    lowest level include files that defined register addresses for the peripherals I cared about. Then I wrote my own functions to access peripherals and a Makefile to build the app.

    In the end, I cursed myself for forgetting the rule of "no silicon
    vendor software". It would have been faster to start with nothing and
    begin by typing register addresses from the user manual into a .h
    file.

    You get header files that pollute your program namespace with
    hundreds or thousands of symbols and macros with unintelligible
    names, many of which are manufacturer-specific not even
    CMSIS-related.

    Yep, CMSIS is spectacularly, mind-numingly awful.

    I know there's opencm3 which seems to be better, but still...

    Standard APIs like CMSIS need *very* disciplined design and rigorous
    management to minimise namespace pollution. Unfortunately we don't seem
    to be there, yet, unless I've missed something major.

    How do people handle this?

    Lots of teeth-gritting and quiet swearing.


    There is a balance here - you can keep the good parts, and drop the bad
    parts. But sometimes it takes effort, and sometimes keeping a few bad
    parts is more practical.

    Manufacturer-provided headers for declaring peripherals are usually very convenient and save a lot of work. The same applies to the CMSIS
    headers for Cortex internal peripherals, assembly function wrappers, etc.

    On the other hand, the "wizard" and "SDK" generated code is often
    appalling, with severe lasagne programming (a dozen layers of function
    calls and abstractions for something that is just setting a peripheral
    hardware register value).

    I also find startup code and libraries can be terrible - they are often
    written in assembly simply because they have /always/ been written in
    assembly, and often bear the scars of having been translated from the
    original 6805 assembly code (or whatever) through 68k, PPC, ARM, etc.,
    probably by students on summer jobs.

    I can relate to your "SDK uses more code than the chip". I had occasion
    to use a very small Freescale 8-bit device a good number of years ago.
    The device had 2K or so of flash. The development tools were over 1 GB
    of disk space. I thought I'd use the configuration tools to save time
    reading the reference manual. The "wizard" generated code for reading
    the ADC turned out at 2.5 KB code space. On reading the manual, it
    turned out that all that was necessary for what I needed was to turn on
    one single bit in a peripheral register.


    Still, I would hate to have to write the peripheral definition files by
    hand - there is a lot of use there, if you avoid the generated code.




    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Grant Edwards@invalid@invalid.invalid to comp.arch.embedded on Tue Dec 4 15:46:30 2018
    From Newsgroup: comp.arch.embedded

    On 2018-12-04, David Brown <david.brown@hesbynett.no> wrote:

    I also find startup code and libraries can be terrible - they are often written in assembly simply because they have /always/ been written in assembly, and often bear the scars of having been translated from the original 6805 assembly code (or whatever) through 68k, PPC, ARM, etc., probably by students on summer jobs.

    I definitely second the "students on summer jobs" opinion. Over the
    years I've seen a lot of sample/library code from silicon vendors and
    most of it was truly awful. It was often clearly written by somebody
    who didn't have a working knowledge of either the hardware or the
    language they were using. Sometimes it just plain didn't work, but
    since the authors obviously didn't understand what the hardware was
    actually supposed to do, they had no way of knowing that.

    In my experience, trying to use anything from silicon vendors beyond
    the header files with register addresses/structures has always been a
    complete waste of time.
    --
    Grant Edwards grant.b.edwards Yow!
    at BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-
    gmail.com
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Theo Markettos@theom+news@chiark.greenend.org.uk to comp.arch.embedded on Tue Dec 4 20:22:39 2018
    From Newsgroup: comp.arch.embedded

    Grant Edwards <invalid@invalid.invalid> wrote:
    I definitely second the "students on summer jobs" opinion. Over the
    years I've seen a lot of sample/library code from silicon vendors and
    most of it was truly awful. It was often clearly written by somebody
    who didn't have a working knowledge of either the hardware or the
    language they were using. Sometimes it just plain didn't work, but
    since the authors obviously didn't understand what the hardware was
    actually supposed to do, they had no way of knowing that.

    That's been our experience too. We reported a bug (with included fix) in a particular vendor's module, and their response was not to fix the bug but to delete the module from their portfolio. Then a few years later the module reappeared - with the bug still present.

    Theo
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Phil Hobbs@pcdhSpamMeSenseless@electrooptical.net to comp.arch.embedded on Tue Dec 4 21:19:38 2018
    From Newsgroup: comp.arch.embedded

    On 12/3/18 4:05 PM, Jacob Sparre Andersen wrote:
    Phil Hobbs wrote:

    We use cmake for that--it allows unit testing on a PC, as you say, and
    also automates the process of finding libraries, e.g. for emulating
    peripherals.

    How does it automate finding emulation libraries? That sounds like a
    cool feature.

    We use GNU Makefiles, but we handle the matching up of emulation
    libraries with the real thing by hand. We then typically use different source directories for emulation libraries and actual drivers.

    Greetings,

    Jacob


    I'll ask my colleague who's doing most of the work on that.

    Cheers

    Phil Hobbs
    --
    Dr Philip C D Hobbs
    Principal Consultant
    ElectroOptical Innovations LLC / Hobbs ElectroOptics
    Optics, Electro-optics, Photonics, Analog Electronics
    Briarcliff Manor NY 10510

    http://electrooptical.net
    http://hobbs-eo.com

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Phil Hobbs@pcdhSpamMeSenseless@electrooptical.net to comp.arch.embedded on Tue Dec 4 21:25:43 2018
    From Newsgroup: comp.arch.embedded

    On 12/3/18 2:36 PM, Grant Edwards wrote:
    On 2018-12-03, DJ Delorie <dj@delorie.com> wrote:
    Grant Edwards <invalid@invalid.invalid> writes:
    I use Emacs, makefiles, and meld.

    +1 on those. My memory isn't good enough any more to remember all
    the byzantine steps through an IDE to re-complete all the tasks my
    projects require.

    Especially since each MCU seems to have a *different* IDE with
    *different* procedures to forget...

    And that's assuming they run on Linux in the first place ;-)

    The most important rule to remember is:

    Never, ever, use any software written or provided by the silicon
    vendor. Everytime I've failed to obey that rule, I've regretted it.

    I've heard rumors that Intel at one time wrote a pretty good C
    compiler for x86.

    I've used it, circa 2006-7, and for my application (highly multithreaded
    3D electromagnetic simulation on an SMP) it was amazing--it blew the
    doors off both Visual C++ and gcc under cygwin. (For sufficiently
    permissive values of 'amazing', that is, i.e. 1.5-1.8x on the same
    hardware.) ;)


    However, having used other development software from Intel, I find
    that impossible to believe. [Actually, Intel MDS-800 "blue boxes"
    weren't bad as long as you ran CP/M on them instead of ISIS.]

    And don't get me started on compilers and tools from TI, Motorola, or
    various others either...

    Some of them have put some effort into getting good Gnu GCC and
    binutils support for their processors, and that seems to produce good results. If only they had realized that's all they really needed to
    do in the _first_ place...

    In defence of Eclipse, it does do a much better job of humanizing gdb
    than the other things I've used, such as ddd.

    Cheers

    Phil Hobbs
    --
    Dr Philip C D Hobbs
    Principal Consultant
    ElectroOptical Innovations LLC / Hobbs ElectroOptics
    Optics, Electro-optics, Photonics, Analog Electronics
    Briarcliff Manor NY 10510

    http://electrooptical.net
    http://hobbs-eo.com

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Grant Edwards@invalid@invalid.invalid to comp.arch.embedded on Wed Dec 5 03:22:49 2018
    From Newsgroup: comp.arch.embedded

    On 2018-12-05, Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:

    In defence of Eclipse, it does do a much better job of humanizing gdb
    than the other things I've used, such as ddd.

    Ah, I remember ddd from decades back. It always seemed about 70%
    finished.

    Is it still a thing?

    ... there appears to be a Gentoo ebuild:

    # emerge --search ddd
    [ Results for search key : ddd ]
    Searching...

    * dev-util/ddd
    Latest version available: 3.3.12-r4
    Latest version installed: [ Not Installed ]
    Size of files: 5,554 KiB
    Homepage: https://www.gnu.org/software/ddd
    Description: Graphical front-end for command-line debuggers
    License: GPL-3 LGPL-3 FDL-1.1

    [ Applications found : 1 ]

    However, it looks like 3.3.12 was released almost 10 years ago.

    The TCL/Tk based GUI that sort of "came with" gdb for a while about
    10-15 years ago wasn't too bad (can't remember its name). But, for
    the most part I prefer the gdb command line -- though I sometimes use
    gdb-mode in emacs.

    Most of the embedded stuff I work on isn't amenable to interactive breakpoint/step/examine/resume type debugging anyway. Milliseconds
    after you hit a breakpoint, all sorts of hardware and protocols will
    start to time out, overflow, underflow, and generally get upset. Once
    you stop, you can't expect to step/resume and get useful behavior.

    Non-embedded stuff I do in Python, and don't need a debugger. ;)

    --
    Grant




    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From David Brown@david.brown@hesbynett.no to comp.arch.embedded on Wed Dec 5 09:06:24 2018
    From Newsgroup: comp.arch.embedded

    On 05/12/18 03:25, Phil Hobbs wrote:
    On 12/3/18 2:36 PM, Grant Edwards wrote:
    On 2018-12-03, DJ Delorie <dj@delorie.com> wrote:
    Grant Edwards <invalid@invalid.invalid> writes:
    I use Emacs, makefiles, and meld.

    +1 on those. My memory isn't good enough any more to remember all
    the byzantine steps through an IDE to re-complete all the tasks my
    projects require.

    Especially since each MCU seems to have a *different* IDE with
    *different* procedures to forget...

    And that's assuming they run on Linux in the first place ;-)

    The most important rule to remember is:

    Never, ever, use any software written or provided by the silicon
    vendor. Every time I've failed to obey that rule, I've regretted it.

    I've heard rumors that Intel at one time wrote a pretty good C
    compiler for x86.

    I've used it, circa 2006-7, and for my application (highly multithreaded
    3D electromagnetic simulation on an SMP) it was amazing--it blew the
    doors off both Visual C++ and gcc under cygwin. (For sufficiently
    permissive values of 'amazing', that is, i.e. 1.5-1.8x on the same
    hardware.) ;)

    Intel's C (and C++) compiler is still very much a major choice for the
    x86 platform, with good support for the latest standards and a fair
    degree of gcc compatibility (inline assembly format, attributes, etc.).
    It is generally considered to be the best choice for automatic vector
    SIMD code generation, and has support for parallelising code using
    multiple threads. But it is also well known for making code that is particularly poor on non-Intel x86 processors.



    However, having used other development software from Intel, I find
    that impossible to believe. [Actually, Intel MDS-800 "blue boxes"
    weren't bad as long as you ran CP/M on them instead of ISIS.]

    And don't get me started on compilers and tools from TI, Motorola, or
    various others either...

    Some of them have put some effort into getting good Gnu GCC and
    binutils support for their processors, and that seems to produce good
    results. If only they had realized that's all they really needed to
    do in the _first_ place...

    In defence of Eclipse, it does do a much better job of humanizing gdb
    than the other things I've used, such as ddd.


    Agreed - Eclipse + gdb is a perfectly solid debugger. It is not always perfect, but no debugger I have ever used is always reliable or works as
    you expect. Certainly it is fine for most debugging purposes.

    In the past, I have used both ddd and gvd (which later became part of
    gps, the GNAT Programming Studio) as front ends. There are plenty of
    other gdb front-ends available - those with a strong sense of irony
    might like to try using MS Visual Studio.


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From George Neuner@gneuner2@comcast.net to comp.arch.embedded on Wed Dec 5 11:37:04 2018
    From Newsgroup: comp.arch.embedded


    On Mon, 3 Dec 2018 19:36:50 +0000 (UTC), Grant Edwards <invalid@invalid.invalid> wrote:

    I've heard rumors that Intel at one time wrote a pretty good C
    compiler for x86.

    However, having used other development software from Intel, I find
    that impossible to believe. [Actually, Intel MDS-800 "blue boxes"
    weren't bad as long as you ran CP/M on them instead of ISIS.]

    Actually it was an excellent compiler: it was the absolute best for
    highly optimized code through the 80's and 90's. It was, however,
    persnickety and infamous for its barely decipherable errors and
    warnings. I think the word "difficult" sums it up.

    Intel's compiler STILL is the best on x86 for floating point and for
    auto vectorizing to use SIMD. In recent years GCC has taken the lead
    for integer code.


    For a long time Microsoft acknowledged that Intel's compiler was
    superior: it is an open secret that Windows itself through NT was
    built using Intel's tool chain, and that the OS kernel continued to be
    built using Intel up through XP. Vista was the first Windows built
    entirely on Microsoft's own tool chain.

    Through several major versions, Visual Studio included a configuration
    switch that directed it to use Intel's tools rather than Microsoft's.
    This always was possible anyway using foreign tool settings, but for a
    long time Intel's tools were supported directly.

    George
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Phil Hobbs@pcdhSpamMeSenseless@electrooptical.net to comp.arch.embedded on Wed Dec 5 15:37:44 2018
    From Newsgroup: comp.arch.embedded

    On 12/3/18 2:36 PM, Grant Edwards wrote:
    On 2018-12-03, DJ Delorie <dj@delorie.com> wrote:
    Grant Edwards <invalid@invalid.invalid> writes:
    I use Emacs, makefiles, and meld.

    +1 on those. My memory isn't good enough any more to remember all
    the byzantine steps through an IDE to re-complete all the tasks my
    projects require.

    Especially since each MCU seems to have a *different* IDE with
    *different* procedures to forget...

    And that's assuming they run on Linux in the first place ;-)

    The most important rule to remember is:

    Never, ever, use any software written or provided by the silicon
    vendor. Every time I've failed to obey that rule, I've regretted it.

    How about for FPGAs? ;)

    Cheers

    Phil Hobbs
    --
    Dr Philip C D Hobbs
    Principal Consultant
    ElectroOptical Innovations LLC / Hobbs ElectroOptics
    Optics, Electro-optics, Photonics, Analog Electronics
    Briarcliff Manor NY 10510

    http://electrooptical.net
    http://hobbs-eo.com

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Grant Edwards@invalid@invalid.invalid to comp.arch.embedded on Wed Dec 5 22:34:02 2018
    From Newsgroup: comp.arch.embedded

    On 2018-12-05, Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:

    Never, ever, use any software written or provided by the silicon
    vendor. Every time I've failed to obey that rule, I've regretted it.

    How about for FPGAs? ;)

    I spent some time working with a NIOS2 core on an Altera Cyclone-something-or-other. In the beginning, somebody got conned
    into using the Altera tools for doing software development. As
    expected, they were horrendous. It was Eclipse with a bunch of
    plugins.

    IIRC, there were Eclipse plugins that called scripts written in bash
    that called Perl scripts that called Java apps that generated TCL that
    got fed to other scripts that generated header files... and on and on
    and on. The tools required more RAM than most of our development
    machines had. And it appeared to re-generate everything from scratch
    every time you wanted to build anything.

    After fighting with that for a few months we threw it all out and
    started from scratch with the gnu toolchain, makefiles, and our own
    header files we wrote with info gleaned from the above mess.

    There was also some sort of gdb-server executable that we extracted
    from deep within the bowels of the Altera IDE. We had to write
    some sort of wrapper for that to get it to run stand-alone and talk to
    the USB byte-blaster thingy.

    Once we ditched the massive pile of Altera's garbage IDE, things went
    much smoother. [Until, as the project neared completion, it became
    obvious that the performance of the NIOS2 was nowhere near what was
    promised, and the whole thing was abandoned.]

    The hardware guys were, of course, chained to the Altera VHDL IDE
    software for the duration -- presumably for heinous sins committed in
    a previous life.
    --
    Grant Edwards grant.b.edwards Yow! Kids, don't gross me
    at off ... "Adventures with
    gmail.com MENTAL HYGIENE" can be
    carried too FAR!
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Rob Gaddi@rgaddi@highlandtechnology.invalid to comp.arch.embedded on Wed Dec 5 14:56:51 2018
    From Newsgroup: comp.arch.embedded

    On 12/5/18 2:34 PM, Grant Edwards wrote:
    On 2018-12-05, Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:

    Never, ever, use any software written or provided by the silicon
    vendor. Every time I've failed to obey that rule, I've regretted it.

    How about for FPGAs? ;)

    I spent some time working with a NIOS2 core on an Altera Cyclone-something-or-other. In the beginning, somebody got conned
    into using the Altera tools for doing software development. As
    expected, they were horrendous. It was Eclipse with a bunch of
    plugins.

    IIRC, there were Eclipse plugins that called scripts written in bash
    that called Perl scripts that called Java apps that generated TCL that
    got fed to other scripts that generated header files... and on and on
    and on. The tools required more RAM than most of our development
    machines had. And it appeared to re-generate everything from scratch every time you wanted to build anything.

    After fighting with that for a few months we threw it all out and
    started from scratch with the gnu toolchain, makefiles, and our own
    header files we wrote with info gleaned from the above mess.

    There was also some sort of gdb-server executable that we extracted
    from deep within the bowels of the Altera IDE. We had to write
    some sort of wrapper for that to get it to run stand-alone and talk to
    the USB byte-blaster thingy.

    Once we ditched the massive pile of Altera's garbage IDE, things went
    much smoother. [Until, as the project neared completion, it became
    obvious that the performance of the NIOS2 was nowhere near what was
    promised, and the whole thing was abandoned.]

    The hardware guys were, of course, chained to the Altera VHDL IDE
    software for the duration -- presumably for heinous sins committed in
    a previous life.


    Nah, you can get out of the IDE there too. You wind up having to write Makefiles that write and call Tcl scripts that communicate with a
    jtag-server executable that you extract from deep within the bowels of
    the IDE. It's deeply unpleasant, and still preferable for production
    code to using the IDE.
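
    (Purely as an illustration - not our actual scripts - the programming
    side can be boiled down to a make rule around the stock quartus_pgm
    command-line tool; the target name, cable string and .sof path below
    are invented for the example:

    # Hypothetical rule: program the part over a USB-Blaster without the GUI.
    # Assumes the Quartus JTAG daemon is already running on this machine.
    program: output/top.sof
    	quartus_pgm -c "USB-Blaster" -m jtag -o "p;output/top.sof"

    It still leans on vendor binaries, of course - just not on the IDE.)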
    --
    Rob Gaddi, Highland Technology -- www.highlandtechnology.com
    Email address domain is currently out of order. See above to fix.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Grant Edwards@invalid@invalid.invalid to comp.arch.embedded on Wed Dec 5 23:50:04 2018
    From Newsgroup: comp.arch.embedded

    On 2018-12-05, Rob Gaddi <rgaddi@highlandtechnology.invalid> wrote:
    On 12/5/18 2:34 PM, Grant Edwards wrote:

    Once we ditched the massive pile of Altera's garbage IDE, things went
    much smoother. [Until, as the project neared completion, it became
    obvious that the performance of the NIOS2 was nowhere near what was
    promised, and the whole thing was abandoned.]

    The hardware guys were, of course, chained to the Altera VHDL IDE
    software for the duration -- presumably for heinous sins committed
    in a previous life.

    Nah, you can get out of the IDE there too. You wind up having to write Makefiles that write and call Tcl scripts that communicate with a jtag-server executable that you extract from deep within the bowels of
    the IDE. It's deeply unpleasant, and still preferable for production
    code to using the IDE.

    Can you avoid using the IDE to compile the VHDL and build the various
    formats of bitstream files?
    --
    Grant Edwards grant.b.edwards Yow! My NOSE is NUMB!
    at
    gmail.com
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Rob Gaddi@rgaddi@highlandtechnology.invalid to comp.arch.embedded on Wed Dec 5 16:27:47 2018
    From Newsgroup: comp.arch.embedded

    On 12/5/18 3:50 PM, Grant Edwards wrote:
    On 2018-12-05, Rob Gaddi <rgaddi@highlandtechnology.invalid> wrote:
    On 12/5/18 2:34 PM, Grant Edwards wrote:
    The hardware guys were, of course, chained to the Altera VHDL IDE
    software for the duration -- presumably for heinous sins committed
    in a previous life.

    Nah, you can get out of the IDE there too. You wind up having to write
    Makefiles that write and call Tcl scripts that communicate with a
    jtag-server executable that you extract from deep within the bowels of
    the IDE. It's deeply unpleasant, and still preferable for production
    code to using the IDE.

    Can you avoid using the IDE to compile the VHDL and build the various
    formats of bitstream files?


    Mostly. You (practically) have to use the IDE to configure the settings
    file, the .qsf, which tells it what bitstreams to make, what the source
    files are, etc. Once that file is correct (and it's text, so it's
    VCSable), you can just run make.

    See below, one of my team's Quartus makefiles. We're doing the same in
    Xilinx Vivado these days, which was again a tedious and awful process to
    get going. I have no idea why no FPGA vendor believes that repeatable
    build control is something that matters to their customer base; left to
    my own devices we'd be doing CI on the version control server.

    ########################################################################
    # This is the makefile to build 22C230B, the FPGA for the V230 Analog
    # Input Module. It builds rbf 22C230B.rbf to be used by the 22E230
    # for programming on an EP3C5F256.
    #
    # The default target builds the necessary image. Other targets are:
    #   reg_map : Builds the register map files.
    #   clean   : Removes all build products.
    #
    # Karla Vega, Highland Technology, Inc.
    # 29-May-2014
    ########################################################################

    ########################################################################
    # Tools and binary locations
    ########################################################################

    SHELL ?= /bin/bash
    QUARTUS ?= $(QUARTUS_ROOTDIR)

    IS_CYGWIN := $(findstring CYGWIN,$(shell uname))

    ifneq "" "$(IS_CYGWIN)"
    QUARTUS_BIN := $(shell cygpath "$(QUARTUS)")/bin
    else
    QUARTUS_BIN := $(QUARTUS)/bin
    endif

    quartus_map := $(QUARTUS_BIN)/quartus_map
    quartus_fit := $(QUARTUS_BIN)/quartus_fit
    quartus_asm := $(QUARTUS_BIN)/quartus_asm
    quartus_sta := $(QUARTUS_BIN)/quartus_sta
    quartus_sh  := $(QUARTUS_BIN)/quartus_sh

    ########################################################################
    # Project configuration.
    ########################################################################

    PROJECT := 22C230
    REV := B
    DRAFT := 0
    DEVICE_FAMILY := "Cyclone III"
    DEVICE := EP3C5F256
    DEVICE_SPEEDGRADE := 8
    FINAL := 22C230$(REV)$(filter-out 0,$(DRAFT))

    # Bring in sources.mk, which is autogenerated from the Quartus project file.
    ifeq "$(findstring $(MAKECMDGOALS),clean)" ""
    include sources.mk
    endif

    ASSIGNMENT_FILES = $(PROJECT).qpf $(PROJECT).qsf
    OUTPUT_DIR := output
    CORE_DIR := src/cores
    REG_MAP_DIR := src/reg_map
    VHDL_DIR := src/vhdl

    # Composite source list
    SOURCES = $(22C230_SOURCES)

    # Destination list
    RBF_FILE := $(OUTPUT_DIR)/$(FINAL).rbf

    ########################################################################
    # Phony targets
    ########################################################################

    .PHONY: all reg_map

    all: $(ROM_FILE) $(OUTPUT_DIR)/$(PROJECT).asm.rpt $(OUTPUT_DIR)/$(PROJECT).sta.rpt

    clean:
    	rm -rf db incremental_db $(OUTPUT_DIR) *.chg sources.mk

    ########################################################################
    # Rules
    ########################################################################

    # No implicit rules, they won't do us any good.
    .SUFFIXES:

    # Quartus forces us to keep the list of all the source files in the .qsf
    # file. In the interest of avoiding redundancy, we have a Tcl script to
    # rip these out and turn them into the sources.mk file that we include
    # earlier on. Therefore, if the .qsf file changes, or if the SOPC_TARGET
    # needs rebuilding (which changes the .qip file which changes the file
    # dependency list), we'll rebuild the sources.mk file.
    #
    sources.mk: $(ASSIGNMENT_FILES)
    	$(quartus_sh) -t list_files.tcl $(PROJECT)

    # Quartus build process from the --help=makefiles option.
    STAMP := echo done >

    $(OUTPUT_DIR)/$(PROJECT).map.rpt: map.chg $(SOURCES)
    	$(quartus_map) $(MAP_ARGS) $(PROJECT)
    	$(STAMP) fit.chg

    $(OUTPUT_DIR)/$(PROJECT).fit.rpt: fit.chg $(OUTPUT_DIR)/$(PROJECT).map.rpt
    	$(quartus_fit) $(FIT_ARGS) $(PROJECT)
    	$(STAMP) asm.chg
    	$(STAMP) sta.chg

    $(OUTPUT_DIR)/$(PROJECT).asm.rpt $(RBF_FILE): asm.chg $(OUTPUT_DIR)/$(PROJECT).fit.rpt
    	$(quartus_asm) $(ASM_ARGS) $(PROJECT)

    $(OUTPUT_DIR)/$(PROJECT).sta.rpt: sta.chg $(OUTPUT_DIR)/$(PROJECT).fit.rpt
    	$(quartus_sta) $(STA_ARGS) $(PROJECT)

    $(OUTPUT_DIR)/smart.log: $(ASSIGNMENT_FILES)
    	$(quartus_sh) --determine_smart_action $(PROJECT) > $(OUTPUT_DIR)/smart.log

    ###################################################################
    # Project initialization
    ###################################################################

    map.chg:
    	$(STAMP) map.chg
    fit.chg:
    	$(STAMP) fit.chg
    sta.chg:
    	$(STAMP) sta.chg
    asm.chg:
    	$(STAMP) asm.chg

    -include local.mk



    And the referenced list_files.tcl

    # list_files.tcl
    #
    # Extracts all of the source file names from the Quartus project
    # settings file, and writes them into sources.mk so that they can
    # be pulled into the makefile.
    #
    # Rob Gaddi, Highland Technology.
    # 15-May-2013

    set source_files {VHDL_FILE VERILOG_FILE QIP_FILE}

    set projname [lindex $argv 0]
    project_open $projname

    set hOut [open {sources.mk} {WRONLY CREAT}]
    puts -nonewline $hOut "[string toupper $projname]_SOURCES := "

    foreach ftype $source_files {
        foreach_in_collection dat [get_all_global_assignments -name $ftype] {
            set fn [lindex $dat 2]
            puts $hOut "$fn \\"
        }
    }
    puts $hOut ""
    close $hOut
    project_close
    --
    Rob Gaddi, Highland Technology -- www.highlandtechnology.com
    Email address domain is currently out of order. See above to fix.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Rob Gaddi@rgaddi@highlandtechnology.invalid to comp.arch.embedded on Wed Dec 5 17:47:50 2018
    From Newsgroup: comp.arch.embedded

    On 12/5/18 4:27 PM, Rob Gaddi wrote:
    On 12/5/18 3:50 PM, Grant Edwards wrote:
    On 2018-12-05, Rob Gaddi <rgaddi@highlandtechnology.invalid> wrote:
    On 12/5/18 2:34 PM, Grant Edwards wrote:
    The hardware guys were, of course, chained to the Altera VHDL IDE
    software for the duration -- presumably for heinous sins committed
    in a previous life.

    Nah, you can get out of the IDE there too.  You wind up having to write Makefiles that write and call Tcl scripts that communicate with a
    jtag-server executable that you extract from deep within the bowels of
    the IDE.  It's deeply unpleasant, and still preferable for production
    code to using the IDE.

    Can you avoid using the IDE to compile the VHDL and build the various
    formats of bitstream files?


    Mostly.  You (practically) have to use the IDE to configure the settings file, the .qsf, which tells it what bitstreams to make, what the source files are, etc.  Once that file is correct (and it's text, so it's VCSable), you can just run make.

    See below, one of my team's Quartus makefiles.  We're doing the same in Xilinx Vivado these days, which was again a tedious and awful process to
    get going.  I have no idea why no FPGA vendor believes that repeatable build control is something that matters to their customer base; left to
    my own devices we'd be doing CI on the version control server.

    [snip]

    In related news, https://hdlmake.readthedocs.io seems to have come along
    quite a way since the last time I looked in on it. Might have to give
    it a try on my next project.
    --
    Rob Gaddi, Highland Technology -- www.highlandtechnology.com
    Email address domain is currently out of order. See above to fix.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Theo Markettos@theom+news@chiark.greenend.org.uk to comp.arch.embedded on Thu Dec 6 08:12:53 2018
    From Newsgroup: comp.arch.embedded

    Rob Gaddi <rgaddi@highlandtechnology.invalid> wrote:
    In related news, https://hdlmake.readthedocs.io seems to have come along quite a way since the last time I looked in on it. Might have to give
    it a try out on my next project.

    We have a fully Makefile-based FPGA toolchain, which is critical for
    continuous integration builds, but generally projects are begun and tweaked from the GUI - typically it's not a one-way street (so you can open the
    project files used by the Makefile build in the GUI). While in principle
    all the tools can be driven from tcl, by the time you've worked out the
    hundred tcl statements you needed you might as well have used the GUI.

    I had a play with hdlmake as we have increasing need to do Intel and Xilinx builds from the same codebase. It handles some basic stuff, like pin assignments, but anything of complexity (eg instantiating vendor IP cores)
    is going to need the vendor tools. hdlmake does avoid having to know the incantations to call the Intel/Xilinx/etc parts of the build system, replacing them with a single command, but that's not the biggest problem.

    (my current issue is Xilinx IP Integrator's idea of schematic capture from
    the 1980s, complete with a mush of overlapping wires, and I am trying to work
    out whether I can build complex SoCs entirely from tcl - in this case I
    think the GUI is so awful anything is better)

    Theo
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From David Brown@david.brown@hesbynett.no to comp.arch.embedded on Thu Dec 6 09:22:07 2018
    From Newsgroup: comp.arch.embedded

    On 05/12/18 23:34, Grant Edwards wrote:
    On 2018-12-05, Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:

    Never, ever, use any software written or provided by the silicon
    vendor. Every time I've failed to obey that rule, I've regretted it.

    How about for FPGAs? ;)

    I spent some time working with a NIOS2 core on an Altera Cyclone-something-or-other. In the beginning, somebody got conned
    into using the Altera tools for doing software development. As
    expected, they were horrendous. It was Eclipse with a bunch of
    plugins.

    IIRC, there were Eclipse plugins that called scripts written in bash
    that called Perl scripts that called Java apps that generated TCL that
    got fed to other scripts that generated header files... and on and on
    and on. The tools required more RAM than most of our development
    machines had. And it appeared to re-generate everything from scratch every time you wanted to build anything.


    If you think that is fun, just imagine doing it on Windows - with all
    the TCL and perl running under Cygwin.

    It is a long time since I used the Nios, and I only did so very briefly
    (the project was cancelled for many reasons). But I seem to remember
    there being a lot of extra building going on due to the interaction
    between the software and the hardware. On the one side, the software
    for the Nios was made into a ROM component for the FPGA design, and thus a
    software change meant at least a partial FPGA rebuild (and incremental builds were only
    in the expensive version of the tools, not the free ones). On the other
    side, a build in the FPGA side could mean changes to the automatically generated include files for the peripheral registers and addresses,
    triggering a software rebuild.

    But it certainly /was/ possible to separate software and hardware
    development. Typically you only have such tight integration for a small
    part of the software - a boot rom - that sets up memory and loads the
    real program from external flash. That program can be developed
    independently. (And again, separate makefiles are more efficient - but
    the IDE with plugins can make debugging nicer.)
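
    A minimal sketch of that split, with invented file names and toolchain
    prefix (the point being only that the FPGA image target is the one that
    drags in the Quartus tools, so day-to-day application work never forces
    an FPGA rebuild):

    # Hypothetical layout: application and boot ROM use the plain GCC
    # cross-toolchain; only fpga.rbf needs Quartus, via a separate makefile.
    CROSS     := nios2-elf-
    APP_SRCS  := main.c drivers.c
    BOOT_SRCS := boot.c

    app.elf: $(APP_SRCS)
    	$(CROSS)gcc -O2 -o $@ $^

    bootrom.hex: $(BOOT_SRCS)
    	$(CROSS)gcc -O2 -o bootrom.elf $^
    	$(CROSS)objcopy -O ihex bootrom.elf $@

    fpga.rbf: bootrom.hex
    	$(MAKE) -f quartus.mk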

    It is also worth noting that Eclipse has got far better since the early
    days of the NIOS2. It used to be a serious memory and processor hog,
    with few features to justify the weight. These days it still takes a
    fair chunk of memory, but it is a far lower fraction of the typical workstation. (My main Linux system generally has at least three or four distinct instances of Eclipse running at any one time, in different
    workspaces, for different projects.) And I find it to be the best
    choice for bigger projects in C, C++, and Python - as well as convenient
    for LaTeX and other coding. (But with external makefiles!).

    After fighting with that for a few months we threw it all out and
    started from scratch with the gnu toolchain, makefiles, and our own
    header files we wrote with info gleaned from the above mess.

    There was also some sort of gdb-server executable that we extracted
    from deep within the bowels of the Altera IDE. We had to write
    some sort of wrapper for that to get it to run stand-alone and talk to
    the USB byte-blaster thingy.

    Once we ditched the massive pile of Altera's garbage IDE, things went
    much smoother. [Until, as the project neared completion, it became
    obvious that the performance of the NIOS2 was nowhere near what was
    promised, and the whole thing was abandoned.]

    The hardware guys were, of course, chained to the Altera VHDL IDE
    software for the duration -- presumably for heinous sins committed in
    a previous life.


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Rob Gaddi@rgaddi@highlandtechnology.invalid to comp.arch.embedded on Thu Dec 6 10:17:23 2018
    From Newsgroup: comp.arch.embedded

    On 12/6/18 12:12 AM, Theo Markettos wrote:
    with a single command, but that's not the biggest problem.

    (my current issue is Xilinx IP Integrator's idea of schematic capture from the 1980s, complete with a mush of overlapping wires, and am trying to work out whether I can build complex SoCs entirely from tcl - in this case I
    think the GUI is so awful anything is better)


    I actually like the graphical interface for putting complex top-level
    blocks together (at least until VHDL-2018 comes out with interfaces),
    and you can make it write bad but sufficient Tcl that you can lock down
    for CI.

    But have you run into the fact yet that, while the synthesis engine
    supports VHDL-2008, IP Integrator doesn't? You can't even write a thin wrapper; any VHDL-2008 anywhere in your design poisons the whole thing
    such that IPI can't work with it.
    --
    Rob Gaddi, Highland Technology -- www.highlandtechnology.com
    Email address domain is currently out of order. See above to fix.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From gtwrek@gtwrek@sonic.net (gtwrek) to comp.arch.embedded on Thu Dec 6 18:35:09 2018
    From Newsgroup: comp.arch.embedded

    In article <pubp3l$plq$1@dont-email.me>,
    Rob Gaddi <rgaddi@highlandtechnology.invalid> wrote:
    On 12/6/18 12:12 AM, Theo Markettos wrote:
    with a single command, but that's not the biggest problem.

    (my current issue is Xilinx IP Integrator's idea of schematic capture from the 1980s, complete with a mush of overlapping wires, and I am trying to work out whether I can build complex SoCs entirely from tcl - in this case I
    think the GUI is so awful anything is better)


    I actually like the graphical interface for putting complex top-level
    blocks together (at least until VHDL-2018 comes out with interfaces),
    and you can make it write bad but sufficient Tcl that you can lock down
    for CI.

    But have you run into the fact yet that, while the synthesis engine
    supports VHDL-2008, IP Integrator doesn't? You can't even write a thin wrapper; any VHDL-2008 anywhere in your design poisons the whole thing
    such that IPI can't work with it.

    We have a fairly straightforward build process (makefiles, and TCL)
    for our Xilinx FPGAs using non-project mode TCL. We do nightly builds
    on all our FPGAs - the current build list is ~40 FPGAs.
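
    For anyone who hasn't seen the non-project flow, the shape of it (file
    and variable names invented here) is just one make rule per bitstream
    that hands a Tcl script to vivado in batch mode:

    # Hypothetical rule: build one bitstream from a non-project-mode Tcl
    # script (read_vhdl/read_xdc, synth_design, opt_design, place_design,
    # route_design, write_bitstream). No .xpr project for the GUI to touch.
    output/top.bit: build_top.tcl $(TOP_SOURCES)
    	vivado -mode batch -nolog -nojournal -source build_top.tcl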

    For Xilinx IP we struggle up front to use the F#@#$F IP integrator or
    other GUIs to generate an example project. Then we reverse engineer the
    RTL that's usually under the covers, and use that directly. Everything
    else is thrown out. We've typed up a "Just the RTL" document which
    we've given to Xilinx to explain why we do this.

    After the time spent on the up-front reverse engineering, things work fine - never having to open the darned Xilinx IDE again.

    This thread makes me nod my head (in a misery-loves-company sort of
    way) in that I see you software folks are basically doing the same
    thing.

    The absolute WORST part of the Xilinx flows is in their MPSoC designs
    and configuring boot-loaders and rootfs images. Here one must use their
    awful 80s style schematic capture code to configure the bootloader, and
    initial images. Yes, schematic capture to design software.

    They even crypto-sign the intermediate files (HDF) to prevent engineers from trying to create a more sane flow. Absolute insanity...

    Regards,

    Mark

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Theo Markettos@theom+news@chiark.greenend.org.uk to comp.arch.embedded on Thu Dec 6 18:39:57 2018
    From Newsgroup: comp.arch.embedded

    Rob Gaddi <rgaddi@highlandtechnology.invalid> wrote:
    I actually like the graphical interface for putting complex top-level
    blocks together (at least until VHDL-2018 comes out with interfaces),
    and you can make it write bad but sufficient Tcl that you can lock down
    for CI.

    That aspect is useful; however, the idea that inputs go on the left and
    outputs on the right is a braindead hangover from analogue schematics. Typically a module has several interfaces - eg a bridge has an AXI
    slave, its clock and reset, and an AXI master with its clock and reset.
    That's two groups of each of AXI/clock/reset. So why put the
    clock/reset inputs on the left and the associated AXI master on the right?
    Why are there always wires crossing from one side of the component to the other?

    It's fine on a small design, but it's a nightmare on a complicated design.
    My current Intel Qsys design has about 40 components (mostly a lot of
    bridges of various kinds) in 4 levels of hierarchy, which would be a
    complete mess to represent in an inputs=left, outputs=right fashion.

    But have you run into the fact yet that, while the synthesis engine
    supports VHDL-2008, IP Integrator doesn't? You can't even write a thin wrapper; any VHDL-2008 anywhere in your design poisons the whole thing
    such that IPI can't work with it.

    I'm mostly composing generated IP, so I avoid at least this problem...

    Theo
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Theo Markettos@theom+news@chiark.greenend.org.uk to comp.arch.embedded on Thu Dec 6 18:46:41 2018
    From Newsgroup: comp.arch.embedded

    Grant Edwards <invalid@invalid.invalid> wrote:
    I spent some time working with a NIOS2 core on an Altera Cyclone-something-or-other. In the beginning, somebody got conned
    into using the Altera tools for doing software development. As
    expected, they were horrendous. It was Eclipse with a bunch of
    plugins.

    It wasn't just any Eclipse, it was a fork of Eclipse from 2005. Eclipse
    itself got a lot better, Altera's didn't.

    I inherited a teaching lab which used Altera Eclipse on NIOS2, but I'd find
    I'd always have to revert to the command line to work out what was actually going on. When I rewrote the lab (and we moved away from NIOS to RISC-V), I junked the IDE and went with terminals and Makefile-based development - on
    the basis that it's something that students should be exposed to at some
    point in their careers, and it makes debugging their code a lot more sane
    from our point of view. They still drive Quartus via the GUI (because
    students start not knowing what an FPGA is, and it's easier for them to understand what's happening via the GUI) but Modelsim they mostly drive
    through pre-supplied scripts, given Modelsim's non-intuitive GUI.

    Theo
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From David Brown@david.brown@hesbynett.no to comp.arch.embedded on Thu Dec 6 21:51:42 2018
    From Newsgroup: comp.arch.embedded

    On 06/12/2018 19:46, Theo Markettos wrote:
    Grant Edwards <invalid@invalid.invalid> wrote:
    I spent some time working with a NIOS2 core on an Altera
    Cyclone-something-or-other. In the beginning, somebody got conned
    into using the Altera tools for doing software development. As
    expected, they were horrendous. It was Eclipse with a bunch of
    plugins.

    It wasn't just any Eclipse, it was a fork of Eclipse from 2005. Eclipse itself got a lot better, Altera's didn't.

    Yes, manufacturers' IDEs used to be done that way. They'd take a fork
    of Eclipse and modify it to fit their uses. And that meant you always
    had an old version of Eclipse, and often one that didn't work with other useful plugins (such as for version control systems). It also often
    meant that you were stuck on Windows.

    These days, they are invariably organised as plugins for standard
    Eclipse. That means that updates are much more regular - each release
    of the tools usually builds on a relatively new version of Eclipse.


    I inherited a teaching lab which used Altera Eclipse on NIOS2, but I'd find I'd always have to revert to the command line to work out what was actually going on. When I rewrote the lab (and we moved away from NIOS to RISC-V), I junked the IDE and went with terminals and Makefile-based development - on the basis that it's something that students should be exposed to at some point in their careers, and it makes debugging their code a lot more sane from our point of view. They still drive Quartus via the GUI (because students start not knowing what an FPGA is, and it's easier for them to understand what's happening via the GUI) but Modelsim they mostly drive through pre-supplied scripts, given Modelsim's non-intuitive GUI.

    Theo


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Dave Nadler@drn@nadler.com to comp.arch.embedded on Thu Dec 6 13:47:02 2018
    From Newsgroup: comp.arch.embedded

    On Thursday, December 6, 2018 at 3:51:46 PM UTC-5, David Brown wrote:
    These days, they are invariably organized as plugins for standard Eclipse.

    Except Microchip ;-(
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From David Brown@david.brown@hesbynett.no to comp.arch.embedded on Thu Dec 6 22:48:47 2018
    From Newsgroup: comp.arch.embedded

    On 06/12/2018 22:47, Dave Nadler wrote:
    On Thursday, December 6, 2018 at 3:51:46 PM UTC-5, David Brown wrote:
    These days, they are invariably organized as plugins for standard Eclipse.

    Except Microchip ;-(


    Yes - they have NetBeans (with plugins) for PIC, and I presume they
    still have Atmel's MSVS-based Atmel Studio.

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Dave Nadler@drn@nadler.com to comp.arch.embedded on Thu Dec 6 14:01:00 2018
    From Newsgroup: comp.arch.embedded

    On Thursday, December 6, 2018 at 4:48:50 PM UTC-5, David Brown wrote:
    On 06/12/2018 22:47, Dave Nadler wrote:
    On Thursday, December 6, 2018 at 3:51:46 PM UTC-5, David Brown wrote:
    These days, they are invariably organized as plugins for standard Eclipse.

    Except Microchip ;-(

    Yes - they have NetBeans (with plugins) for PIC, and I presume they
    still have Atmel's MSVS-based Visual Studio.

    To ensure chaos, do Microsemi (now owned by Microchip) tools use Eclipse?
    https://www.investors.com/news/technology/microchip-stock-microsemi-inventory/
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Tauno Voipio@tauno.voipio@notused.fi.invalid to comp.arch.embedded on Fri Dec 7 14:54:37 2018
    From Newsgroup: comp.arch.embedded

    On 6.12.18 23:47, Dave Nadler wrote:
    On Thursday, December 6, 2018 at 3:51:46 PM UTC-5, David Brown wrote:
    These days, they are invariably organized as plugins for standard Eclipse.

    Except Microchip ;-(


    Right - after buying Atmel, there's Atmel Studio, which
    should go the way of the dinosaurs. It forces the whole
    Microsoft IDE environment, and it is a PITA for us non-
    Windows users.

    Eclipse is actually pretty flexible, if it is used with
    standard GNU makefiles, written by the programmer and not
    the IDE.
    --

    -TV

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From queequeg@queequeg@trust.no1 (Queequeg) to comp.arch.embedded on Fri Dec 7 13:45:59 2018
    From Newsgroup: comp.arch.embedded

    pozz <pozzugno@gmail.com> wrote:

    I'm thinking to finally move to Makefile, however I don't know if it is
    a good and modern choice. Do you use better alternatives?

    If you aren't afraid of Python, take a look at scons. I usually use scons
    to do the compilation and a makefile to do everything else.
    --
    https://www.youtube.com/watch?v=9lSzL1DqQn0
    --- Synchronet 3.20a-Linux NewsLink 1.114