What do you really use for embedded projects? Do you use "standard" makefiles or do you rely on IDE functionality?
Nowadays every MCU manufacturer provides an IDE, mostly for free, usually based on Eclipse (Atmel Studio and Microchip are probably the most notable exceptions).
Anyway, most of them use arm-gcc as the compiler.
I usually try to compile the same project for both the embedded target and the development machine, so I can speed up development and debugging. I usually use the native IDE from the manufacturer of the target and Code::Blocks (with MinGW) for compilation on the development machine.
So I have two IDEs for a single project.
I'm thinking of finally moving to Makefiles, but I don't know whether that is a good and modern choice. Do you use better alternatives?
My main reason for moving from IDE compilation to Makefiles is testing. I would like to start adding unit tests to my project. I understand a good solution is to link all the object files of the production code into a static library. In this way it is very simple to replace production code with test (mock) code, simply by prepending the test object files to the static library of production code during linking.
I think these kinds of things can be managed with a Makefile instead of IDE compilation.
What do you think?
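To make the idea concrete, here is a minimal GNU Make sketch of the link-seam idea just described. The file names (uart.o, mock_uart.o, test_main.o, libprod.a) are purely illustrative; the point is that the linker only pulls members out of an archive for symbols that are still undefined, so an object file listed before the library wins.

# Minimal sketch - names are illustrative; recipe lines must start with a tab.
PROD_OBJS := uart.o adc.o app.o

# All production code goes into one archive.
libprod.a: $(PROD_OBJS)
	$(AR) rcs $@ $^

# Normal firmware link: main.o pulls the real uart.o/adc.o from the archive.
firmware.elf: main.o libprod.a
	$(CC) main.o libprod.a -o $@

# Test link: mock_uart.o is listed before the archive, so its symbols are
# already defined and the real uart.o is never pulled in.
run_tests: test_main.o mock_uart.o libprod.a
	$(CC) test_main.o mock_uart.o libprod.a -o $@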
On 03/12/18 09:18, pozz wrote:
What do you really use for embedded projects? Do you use "standard" makefiles or do you rely on IDE functionality?
Nowadays every MCU manufacturer provides an IDE, mostly for free, usually based on Eclipse (Atmel Studio and Microchip are probably the most notable exceptions).
Anyway, most of them use arm-gcc as the compiler.
I usually try to compile the same project for both the embedded target and the development machine, so I can speed up development and debugging. I usually use the native IDE from the manufacturer of the target and Code::Blocks (with MinGW) for compilation on the development machine.
So I have two IDEs for a single project.
I'm thinking of finally moving to Makefiles, but I don't know whether that is a good and modern choice. Do you use better alternatives?
I sometimes use the IDE project management to start with, or on very
small projects. But for anything serious, I always use makefiles. I
see it as important to separate the production build process from the development - I need to know that I can always pull up the source code
for a project, do a "build", and get a bit-perfect binary image that is exactly the same as last time. This must work on different machines, preferably different OS's, and it must work over time. (My record is rebuilding a project that was a touch over 20 years old, and getting the
same binary.)
This means that the makefile specifies exactly which build toolchain (compiler, linker, library, etc.) is used - and that does not change
during a project's lifetime, without very good reason.
The IDE, and debugger, however, may change - there I will often use
newer versions with more features than the original version. And
sometimes I might use a lighter editor for a small change, rather than
the full IDE. So IDE version and build tools version are independent.
With well-designed makefiles, you can have different targets for
different purposes. "make bin" for making the embedded binary, "make
pc" for making the PC version, "make tests" for running the test code on
the pc, and so on.
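As a rough sketch (the recursive-make style, directory names and compiler names are illustrative, not a recommendation), such targets might look like:

# Illustrative only: one makefile, several entry points.
.PHONY: bin pc tests

bin:        # cross-build for the embedded target
	$(MAKE) all BUILD=target CC=arm-none-eabi-gcc

pc:         # native build for the development machine
	$(MAKE) all BUILD=pc CC=gcc

tests: pc   # build the PC version, then run the unit tests
	./build/pc/run_tests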
My main reason for moving from IDE compilation to Makefiles is testing. I would like to start adding unit tests to my project. I understand a good solution is to link all the object files of the production code into a static library. In this way it is very simple to replace production code with test (mock) code, simply by prepending the test object files to the static library of production code during linking.
I would not bother with that. I would have different variations in the
build handled in different build tree directories.
I think these kinds of things can be managed with a Makefile instead of IDE compilation.
What do you think?
It can /all/ be managed from make.
Also, a well-composed makefile is more efficient than an IDE project
manager, IME. When you use Eclipse to do a build, it goes through each
file to calculate the dependencies - so that you re-compile all the
files that might be affected by the last changes, but not more than
that. But it does this dependency calculation anew each time. With
make, you can arrange to generate dependency files using gcc, and these dependency files get updated only when needed. This can save
significant time in a build when you have a lot of files.
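For reference, a sketch of the usual gcc idiom for this (the src/ and build/ layout is illustrative): let the compiler write a .d fragment next to each object as a side effect of compilation, and include those fragments.

SRCS := $(wildcard src/*.c)
OBJS := $(SRCS:src/%.c=build/%.o)
DEPS := $(OBJS:.o=.d)

# -MMD writes build/foo.d while compiling; -MP adds dummy targets so a
# deleted header does not break the next build.
build/%.o: src/%.c
	@mkdir -p $(dir $@)
	$(CC) $(CFLAGS) -MMD -MP -c $< -o $@

# The .d files are only rewritten when the corresponding source is rebuilt.
-include $(DEPS)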
Il 03/12/2018 11:06, David Brown ha scritto:
On 03/12/18 09:18, pozz wrote:
What do you really use for embedded projects? Do you use "standard" makefiles or do you rely on IDE functionality?
Nowadays every MCU manufacturer provides an IDE, mostly for free, usually based on Eclipse (Atmel Studio and Microchip are probably the most notable exceptions).
Anyway, most of them use arm-gcc as the compiler.
I usually try to compile the same project for both the embedded target and the development machine, so I can speed up development and debugging. I usually use the native IDE from the manufacturer of the target and Code::Blocks (with MinGW) for compilation on the development machine.
So I have two IDEs for a single project.
I'm thinking of finally moving to Makefiles, but I don't know whether that is a good and modern choice. Do you use better alternatives?
I sometimes use the IDE project management to start with, or on very
small projects. But for anything serious, I always use makefiles. I
see it as important to separate the production build process from the
development - I need to know that I can always pull up the source code
for a project, do a "build", and get a bit-perfect binary image that is
exactly the same as last time. This must work on different machines,
preferably different OS's, and it must work over time. (My record is
rebuilding a project that was a touch over 20 years old, and getting the
same binary.)
This means that the makefile specifies exactly which build toolchain
(compiler, linker, library, etc.) is used - and that does not change
during a project's lifetime, without very good reason.
The IDE, and debugger, however, may change - there I will often use
newer versions with more features than the original version. And
sometimes I might use a lighter editor for a small change, rather than
the full IDE. So IDE version and build tools version are independent.
With well-designed makefiles, you can have different targets for
different purposes. "make bin" for making the embedded binary, "make
pc" for making the PC version, "make tests" for running the test code on
the pc, and so on.
Fortunately, modern IDEs separate the toolchain well from the IDE itself. Most manufacturers let us install the toolchain as a separate setup. I remember that some years ago the scenario was different and the compiler was "included" in the IDE installation.
However, the problem here isn't the compiler (toolchain), which nowadays is usually arm-gcc. The big issue is the libraries and includes that the manufacturer gives you to save some time writing peripheral drivers.
I have to install the full IDE and copy the interesting headers and libraries into my own folders.
Another small issue is the linker script file, which works like a charm in the IDE when you start a new project from the wizard.
At least for me, it's very difficult to write a linker script from scratch. You need a deep understanding of the C libraries (newlib, redlib, ...) to write a correct linker script.
My solution is to start with the IDE wizard and copy the generated linker script into my make-based project.
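For example, a minimal sketch of how the copied script and a pinned toolchain end up in the makefile (the paths, script name and the newlib-nano specs option are illustrative and depend on the chosen C library):

# Illustrative: pin one specific toolchain release rather than whatever is on PATH.
TOOLDIR  := /opt/gcc-arm-none-eabi-7-2018-q2   # archived toolchain release
CC       := $(TOOLDIR)/bin/arm-none-eabi-gcc
LDSCRIPT := ld/from_ide_wizard.ld              # copied from the IDE-generated project

firmware.elf: $(OBJS)
	$(CC) $(CPUFLAGS) $(OBJS) -T $(LDSCRIPT) --specs=nano.specs \
	      -Wl,-Map=firmware.map -Wl,--gc-sections -o $@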
My main reason for moving from IDE compilation to Makefiles is testing. I would like to start adding unit tests to my project. I understand a good solution is to link all the object files of the production code into a static library. In this way it is very simple to replace production code with test (mock) code, simply by prepending the test object files to the static library of production code during linking.
I would not bother with that. I would have different variations in the
build handled in different build tree directories.
Could you explain?
I think these kinds of things can be managed with a Makefile instead of IDE compilation.
What do you think?
It can /all/ be managed from make.
Also, a well-composed makefile is more efficient than an IDE project
manager, IME. When you use Eclipse to do a build, it goes through each
file to calculate the dependencies - so that you re-compile all the
files that might be affected by the last changes, but not more than
that. But it does this dependency calculation anew each time. With
make, you can arrange to generate dependency files using gcc, and these
dependency files get updated only when needed. This can save
significant time in a build when you have a lot of files.
Yes, that's for sure!
What do you really use for embedded projects? Do you use "standard" makefiles or do you rely on IDE functionality?
Nowadays every MCU manufacturer provides an IDE, mostly for free, usually based on Eclipse (Atmel Studio and Microchip are probably the most notable exceptions).
Anyway, most of them use arm-gcc as the compiler.
I usually try to compile the same project for both the embedded target and the development machine, so I can speed up development and debugging. I usually use the native IDE from the manufacturer of the target and Code::Blocks (with MinGW) for compilation on the development machine.
So I have two IDEs for a single project.
I'm thinking of finally moving to Makefiles, but I don't know whether that is a good and modern choice. Do you use better alternatives?
My main reason for moving from IDE compilation to Makefiles is testing. I would like to start adding unit tests to my project. I understand a good solution is to link all the object files of the production code into a static library. In this way it is very simple to replace production code with test (mock) code, simply by prepending the test object files to the static library of production code during linking.
I think these kinds of things can be managed with a Makefile instead of IDE compilation.
What do you think?
I sometimes use the IDE project management to start with, or on very
small projects. But for anything serious, I always use makefiles. I
see it as important to separate the production build process from the development - I need to know that I can always pull up the source code
for a project, do a "build", and get a bit-perfect binary image that is exactly the same as last time.
This must work on different machines,
preferably different OS's, and it must work over time.
It's impossible to overemphasize how important that is. Somebody should
be able to check out the source tree and a few tools and then type a
single command to build production firmware. And you need to be able
to _automate_ that process.
What do you really use for embedded projects? Do you use "standard" makefiles or do you rely on IDE functionality?
Nowadays every MCU manufacturer provides an IDE, mostly for free, usually based on Eclipse (Atmel Studio and Microchip are probably the most notable exceptions).
Anyway, most of them use arm-gcc as the compiler.
I usually try to compile the same project for both the embedded target and the development machine, so I can speed up development and debugging. I usually use the native IDE from the manufacturer of the target and Code::Blocks (with MinGW) for compilation on the development machine.
So I have two IDEs for a single project.
I'm thinking of finally moving to Makefiles, but I don't know whether that is a good and modern choice. Do you use better alternatives?
My main reason for moving from IDE compilation to Makefiles is testing. I would like to start adding unit tests to my project. I understand a good solution is to link all the object files of the production code into a static library. In this way it is very simple to replace production code with test (mock) code, simply by prepending the test object files to the static library of production code during linking.
I think these kinds of things can be managed with a Makefile instead of IDE compilation.
What do you think?
One approach is to put the tools into a VM or a container (eg Docker), so that when you want to build you pull the container and you get an identical build environment to the last time anyone built it.
Also, your continuous integration system can run builds and tests in the same environment as you're developing on.
Second that!
I use Emacs, makefiles, and meld.
Grant Edwards <invalid@invalid.invalid> writes:
I use Emacs, makefiles, and meld.
+1 on those. My memory isn't good enough any more to remember all
the byzantine steps through an IDE to re-complete all the tasks my
projects require.
Especially since each MCU seems to have a *different* IDE with
*different* procedures to forget...
And that's assuming they run on Linux in the first place ;-)
Grant Edwards <invalid@invalid.invalid> wrote:
It's impossible to overemphasize how important that is. Somebody should
be able to check out the source tree and a few tools and then type a
single command to build production firmware. And you need to be able
to _automate_ that process.
One approach is to put the tools into a VM or a container (eg Docker), so that when you want to build you pull the container and you get an identical build environment to the last time anyone built it.
Also, your continuous integration system can run builds and tests in
the same environment as you're developing on.
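As a sketch of that approach (the image name is a placeholder, and it assumes the image already contains make and the cross-compiler), the containerised build can itself be a make target:

# Illustrative only: run the ordinary build inside a pinned toolchain image.
TOOLCHAIN_IMAGE ?= registry.example.com/fw-toolchain:2018.12

.PHONY: docker-build
docker-build:
	docker run --rm -v $(CURDIR):/work -w /work $(TOOLCHAIN_IMAGE) make all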
Unfortunately vendors have a habit of shipping IDEs for Windows only, which makes this harder.
It's not so much of a problem for the actual compiler - especially if that's GCC under the hood - but ancillary tools (eg configuration tools for peripherals, flash image builders, etc) are sometimes not designed to be scripted.
(AutoIt is my worst enemy here, but it has been the only way to get the job done in some cases)
Decoupling your build from the vagaries of the IDE, even if you can trust that you'll always build on a fixed platform, is still a good thing - many IDEs still don't play nicely with version control, for example.
Theo
I've tried IDEs. I've worked with others who use IDEs and watched
them work, and compared it to how I work. It looks to me like IDEs
are a tremendous waste of time.
We use cmake for that--it allows unit testing on a PC, as you say, and
also automates the process of finding libraries, e.g. for emulating peripherals.
Il 03/12/2018 11:06, David Brown ha scritto:
On 03/12/18 09:18, pozz wrote:
What do you really use for embedded projects? Do you use "standard" makefiles or do you rely on IDE functionality?
Nowadays every MCU manufacturer provides an IDE, mostly for free, usually based on Eclipse (Atmel Studio and Microchip are probably the most notable exceptions).
Anyway, most of them use arm-gcc as the compiler.
I usually try to compile the same project for both the embedded target and the development machine, so I can speed up development and debugging. I usually use the native IDE from the manufacturer of the target and Code::Blocks (with MinGW) for compilation on the development machine.
So I have two IDEs for a single project.
I'm thinking of finally moving to Makefiles, but I don't know whether that is a good and modern choice. Do you use better alternatives?
I sometimes use the IDE project management to start with, or on very small projects. But for anything serious, I always use makefiles. I see it as important to separate the production build process from the development - I need to know that I can always pull up the source code for a project, do a "build", and get a bit-perfect binary image that is exactly the same as last time. This must work on different machines, preferably different OS's, and it must work over time. (My record is rebuilding a project that was a touch over 20 years old, and getting the same binary.)
This means that the makefile specifies exactly which build toolchain (compiler, linker, library, etc.) is used - and that does not change during a project's lifetime, without very good reason.
The IDE, and debugger, however, may change - there I will often use newer versions with more features than the original version. And sometimes I might use a lighter editor for a small change, rather than the full IDE. So IDE version and build tools version are independent.
With well-designed makefiles, you can have different targets for different purposes. "make bin" for making the embedded binary, "make pc" for making the PC version, "make tests" for running the test code on the pc, and so on.
Fortunately, modern IDEs separate the toolchain well from the IDE itself. Most manufacturers let us install the toolchain as a separate setup. I remember that some years ago the scenario was different and the compiler was "included" in the IDE installation.
You can do that to some extent, yes - you can choose which toolchain to use. But your build process is still tied to the IDE - your choice of directories, compiler flags, and so on is all handled by the IDE. So you still need the IDE to control the build, and different versions of the IDE, or different IDEs, do not necessarily handle everything in the same way.
However, the problem here isn't the compiler (toolchain), which nowadays is usually arm-gcc. The big issue is the libraries and includes that the manufacturer gives you to save some time writing peripheral drivers.
I have to install the full IDE and copy the interesting headers and libraries into my own folders.
That's fine. Copy the headers, libraries, SDK files, whatever, into your project folder. Then push everything to your version control system. Make the source code independent of the SDK, the IDE, and other files - you have your toolchain (and you archive the zip/tarball of the gnu-arm-embedded release) and your project folder, and that is all you need for the build.
Another small issue is the linker script file, which works like a charm in the IDE when you start a new project from the wizard.
At least for me, it's very difficult to write a linker script from scratch. You need a deep understanding of the C libraries (newlib, redlib, ...) to write a correct linker script.
My solution is to start with the IDE wizard and copy the generated linker script into my make-based project.
Again, that's fine. IDEs and their wizards are great for getting started. They are just not great for long-term stability of the tools.
My main reason for moving from IDE compilation to Makefiles is testing. I would like to start adding unit tests to my project. I understand a good solution is to link all the object files of the production code into a static library. In this way it is very simple to replace production code with test (mock) code, simply by prepending the test object files to the static library of production code during linking.
I would not bother with that. I would have different variations in the
build handled in different build tree directories.
Could you explain?
You have a tree something like this:
Source tree:
project / src / main
drivers
Build trees:
project / build / target
debug
pctest
Each build tree might have subtrees :
project / build / target / obj / main
drivers
project / build / target / deps / main
drivers
project / build / target / lst / main
drivers
And so on.
Your build trees are independent. So there is no mix of object files
built in the "target" directory for your final target board, or the
"debug" directory for the version with debugging code enabled, or the version in "pctest" for the code running on the PC, or whatever other
builds you have for your project.

Ok, I got your point, and I usually arrange everything similarly to your description (even if I put the .o, .d and .lst files in the same target-dependent directory). I also have to admit that all major IDEs nowadays arrange their build output in separate per-configuration directories.
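For concreteness, a minimal GNU Make sketch of the per-variant build trees described above (the directory names follow David's layout; everything else is illustrative):

# Illustrative: pass BUILD=target|debug|pctest to select a build tree.
BUILD  ?= target
OBJDIR := project/build/$(BUILD)/obj
DEPDIR := project/build/$(BUILD)/deps
LSTDIR := project/build/$(BUILD)/lst

SRCS := $(wildcard project/src/main/*.c) $(wildcard project/src/drivers/*.c)
OBJS := $(patsubst project/src/%.c,$(OBJDIR)/%.o,$(SRCS))

# Objects, dependency files and listings all land in the tree for this BUILD,
# so a target build never mixes with a debug or pctest build.
$(OBJDIR)/%.o: project/src/%.c
	@mkdir -p $(dir $@) $(dir $(DEPDIR)/$*.d) $(dir $(LSTDIR)/$*.lst)
	$(CC) $(CFLAGS) -MMD -MF $(DEPDIR)/$*.d -Wa,-adhlns=$(LSTDIR)/$*.lst -c $< -o $@

-include $(wildcard $(DEPDIR)/*/*.d)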
On 2018-12-03, DJ Delorie <dj@delorie.com> wrote:
Grant Edwards <invalid@invalid.invalid> writes:
I use Emacs, makefiles, and meld.
+1 on those. My memory isn't good enough any more to remember all
the byzantine steps through an IDE to re-complete all the tasks my
projects require.
Especially since each MCU seems to have a *different* IDE with
*different* procedures to forget...
And that's assuming they run on Linux in the first place ;-)
The most important rule to remember is:
Never, ever, use any software written or provided by the silicon
vendor. Every time I've failed to obey that rule, I've regretted it.
On 4/12/18 6:36 am, Grant Edwards wrote:
Never, ever, use any software written or provided by the silicon
vendor. Every time I've failed to obey that rule, I've regretted it.
[Difficult to apply that rule for an FPGA (except some Lattice parts).]
Also, ARM seems to require that its licensee support CMSIS. This truly excellent idea seems to be terribly poorly thought-out and implemented.
You get header files that pollute your program namespace with
hundreds or thousands of symbols and macros with unintelligible
names, many of which are manufacturer-specific not even
CMSIS-related.
I know there's opencm3 which seems to be better, but still...
Standard APIs like CMSIS need *very* disciplined design and rigorous management to minimise namespace pollution. Unfortunately we don't seem
to be there, yet, unless I've missed something major.
How do people handle this?
On 2018-12-03, Clifford Heath <no.spam@please.net> wrote:
On 4/12/18 6:36 am, Grant Edwards wrote:
Never, ever, use any software written or provided by the silicon
vendor. Every time I've failed to obey that rule, I've regretted it.
[Difficult to apply that rule for an FPGA (except some Lattice parts).]
True
Also, ARM seems to require that its licensee support CMSIS. This truly excellent idea seems to be terribly poorly thought-out and implemented.
You're putting that mildly. I recently developed some firmware for an NXP KL03 (Cortex-M0) part. It's a tiny part with something like 8KB of flash and a couple hundred bytes of RAM. Of course NXP provides IDE-based "sample apps" that take up a gigabyte of disk space and include CMSIS, which is itself hundreds (if not thousands) of files defining APIs for all of the peripherals, comprising layer upon layer of macros calling macros calling functions calling functions full of other macros calling macros. Trying to build even an empty main() using the CMSIS libraries resulted in executable images several times larger than the available flash.
I finally gave up and tossed out everything except a couple of the
lowest level include files that defined register addresses for the peripherals I cared about. Then I wrote my own functions to access peripherals and a Makefile to build the app.
In the end, I cursed myself for forgetting the rule of "no silicon
vendor software". It would have been faster to start with nothing and
begin by typing register addresses from the user manual into a .h
file.
You get header files that pollute your program namespace with
hundreds or thousands of symbols and macros with unintelligible
names, many of which are manufacturer-specific not even
CMSIS-related.
Yep, CMSIS is spectacularly, mind-numbingly awful.
I know there's opencm3 which seems to be better, but still...
Standard APIs like CMSIS need *very* disciplined design and rigorous management to minimise namespace pollution. Unfortunately we don't seem
to be there, yet, unless I've missed something major.
How do people handle this?
Lots of teeth-gritting and quiet swearing.
About CMSIS: it is wonderful if you use only the absolutely necessary files. I always extract only the core_xxx.h files from the gigabyte package.
On 2018-12-03, Clifford Heath <no.spam@please.net> wrote:
On 4/12/18 6:36 am, Grant Edwards wrote:
Never, ever, use any software written or provided by the silicon
vendor. Every time I've failed to obey that rule, I've regretted it.
[Difficult to apply that rule for an FPGA (except some Lattice parts).]
True
Also, ARM seems to require that its licensee support CMSIS. This truly
excellent idea seems to be terribly poorly thought-out and implemented.
You're putting that mildly. I recently developed some firmware for an NXP KL03 (Cortex-M0) part. It's a tiny part with something like 8KB of flash and a couple hundred bytes of RAM. Of course NXP provides IDE-based "sample apps" that take up a gigabyte of disk space and include CMSIS, which is itself hundreds (if not thousands) of files defining APIs for all of the peripherals, comprising layer upon layer of macros calling macros calling functions calling functions full of other macros calling macros. Trying to build even an empty main() using the CMSIS libraries resulted in executable images several times larger than the available flash.
I finally gave up and tossed out everything except a couple of the
lowest level include files that defined register addresses for the peripherals I cared about. Then I wrote my own functions to access peripherals and a Makefile to build the app.
In the end, I cursed myself for forgetting the rule of "no silicon
vendor software". It would have been faster to start with nothing and
begin by typing register addresses from the user manual into a .h
file.
You get header files that pollute your program namespace with
hundreds or thousands of symbols and macros with unintelligible
names, many of which are manufacturer-specific not even
CMSIS-related.
Yep, CMSIS is spectacularly, mind-numbingly awful.
I know there's opencm3 which seems to be better, but still...
Standard APIs like CMSIS need *very* disciplined design and rigorous
management to minimise namespace pollution. Unfortunately we don't seem
to be there, yet, unless I've missed something major.
How do people handle this?
Lots of teeth-gritting and quiet swearing.
I also find startup code and libraries can be terrible - they are often written in assembly simply because they have /always/ been written in assembly, and often bear the scars of having been translated from the original 6805 assembly code (or whatever) through 68k, PPC, ARM, etc., probably by students on summer jobs.
I definitely second the "students on summer jobs" opinion. Over the
years I've seen a lot of sample/library code from silicon vendors and
most of it was truly awful. It was often clearly written by somebody
who didn't have a working knowledge of either the hardware or the
language they were using. Sometimes it just plain didn't work, but
since the authors obviously didn't understand what the hardware was
actually supposed to do, they had no way of knowing that.
Phil Hobbs wrote:
We use cmake for that--it allows unit testing on a PC, as you say, and
also automates the process of finding libraries, e.g. for emulating
peripherals.
How does it automate finding emulation libraries? That sounds like a
cool feature.
We use GNU Makefiles, but we handle the matching up of emulation
libraries with the real thing by hand. We then typically use different source directories for emulation libraries and actual drivers.
Greetings,
Jacob
On 2018-12-03, DJ Delorie <dj@delorie.com> wrote:
Grant Edwards <invalid@invalid.invalid> writes:
I use Emacs, makefiles, and meld.
+1 on those. My memory isn't good enough any more to remember all
the byzantine steps through an IDE to re-complete all the tasks my
projects require.
Especially since each MCU seems to have a *different* IDE with
*different* procedures to forget...
And that's assuming they run on Linux in the first place ;-)
The most important rule to remember is:
Never, ever, use any software written or provided by the silicon
vendor. Every time I've failed to obey that rule, I've regretted it.
I've heard rumors that Intel at one time wrote a pretty good C
compiler for x86.
However, having used other development software from Intel, I find
that impossible to believe. [Actually, Intel MDS-800 "blue boxes" weren't bad as long as you ran CP/M on them instead of ISIS.]
And don't get me started on compilers and tools from TI, Motorola, or
various others either...
Some of them have put some effort into getting good Gnu GCC and
binutils support for their processors, and that seems to produce good results. If only they had realized that's all they really needed to
do in the _first_ place...
In defence of Eclipse, it does do a much better job of humanizing gdb
than the other things I've used, such as ddd.
On 12/3/18 2:36 PM, Grant Edwards wrote:
On 2018-12-03, DJ Delorie <dj@delorie.com> wrote:
Grant Edwards <invalid@invalid.invalid> writes:
I use Emacs, makefiles, and meld.
+1 on those. My memory isn't good enough any more to remember all
the byzantine steps through an IDE to re-complete all the tasks my
projects require.
Especially since each MCU seems to have a *different* IDE with
*different* procedures to forget...
And that's assuming they run on Linux in the first place ;-)
The most important rule to remember is:
Never, ever, use any software written or provided by the silicon
vendor. Every time I've failed to obey that rule, I've regretted it.
I've heard rumors that Intel at one time wrote a pretty good C
compiler for x86.
I've used it, circa 2006-7, and for my application (a highly multithreaded 3D electromagnetic simulation on an SMP) it was amazing--it blew the doors off both Visual C++ and gcc under cygwin. (For sufficiently permissive values of 'amazing', that is: 1.5-1.8x on the same hardware.) ;)
However, having used other development software from Intel, I find
that impossible to believe. [Actually, Intel MDS-800 "blue boxes" weren't bad as long as you ran CP/M on them instead of ISIS.]
And don't get me started on compilers and tools from TI, Motorola, or
various others either...
Some of them have put some effort into getting good Gnu GCC and
binutils support for their processors, and that seems to produce good
results. If only they had realized that's all they really needed to
do in the _first_ place...
In defence of Eclipse, it does do a much better job of humanizing gdb
than the other things I've used, such as ddd.
I've heard rumors that Intel at one time wrote a pretty good C
compiler for x86.
However, having used other development software from Intel, I find
that impossible to believe. [Actually, Intel MDS-800 "blue boxes" weren't bad as long as you ran CP/M on them instead of ISIS.]
On 2018-12-03, DJ Delorie <dj@delorie.com> wrote:
Grant Edwards <invalid@invalid.invalid> writes:
I use Emacs, makefiles, and meld.
+1 on those. My memory isn't good enough any more to remember all
the byzantine steps through an IDE to re-complete all the tasks my
projects require.
Especially since each MCU seems to have a *different* IDE with
*different* procedures to forget...
And that's assuming they run on Linux in the first place ;-)
The most important rule to remember is:
Never, ever, use any software written or provided by the silicon
vendor. Every time I've failed to obey that rule, I've regretted it.
Never, ever, use any software written or provided by the silicon
vendor. Every time I've failed to obey that rule, I've regretted it.
How about for FPGAs? ;)
On 2018-12-05, Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:
Never, ever, use any software written or provided by the silicon
vendor. Every time I've failed to obey that rule, I've regretted it.
How about for FPGAs? ;)
I spent some time working with a NIOS2 core on an Altera Cyclone-something-or-other. In the beginning, somebody got conned
into using the Altera tools for doing software development. As
expected, they were horrendous. It was Eclipse with a bunch of
plugins.
IIRC, there were Eclipse plugins that called scripts written in bash
that called Perl scripts that called Java apps that generated TCL that
got fed to other scripts that generated header files... and on and on
and on. The tools required more RAM than most of our development machines had. And it appeared to re-generate everything from scratch every time you wanted to build anything.
After fighting with that for a few months we threw it all out and
started from scratch with the gnu toolchain, makefiles, and our own
header files we wrote with info gleaned from the above mess.
There was also some sort of gdb-server executable that we extracted
from deep within the bowels of the Altera IDE. We had to write
some sort of wrapper for that to get it to run stand-alone and talk to
the USB byte-blaster thingy.
Once we ditched the massive pile of Altera's garbage IDE, things went
much smoother. [Until, as the project neared completion, it became
obvious that the performance of the NIOS2 was nowhere near what was
promised, and the whole thing was abandoned.]
The hardware guys were, of course, chained to the Altera VHDL IDE
software for the duration -- presumably for heinous sins committed in
a previous life.
On 12/5/18 2:34 PM, Grant Edwards wrote:
Once we ditched the massive pile of Altera's garbage IDE, things went
much smoother. [Until, as the project neared completion, it became
obvious that the performance of the NIOS2 was nowhere near what was
promised, and the whole thing was abandoned.]
The hardware guys were, of course, chained to the Altera VHDL IDE
software for the duration -- presumably for heinous sins committed
in a previous life.
Nah, you can get out of the IDE there too. You wind up having to write Makefiles that write and call Tcl scripts that communicate with a jtag-server executable that you extract from deep within the bowels of
the IDE. It's deeply unpleasant, and still preferable for production
code to using the IDE.
On 2018-12-05, Rob Gaddi <rgaddi@highlandtechnology.invalid> wrote:
On 12/5/18 2:34 PM, Grant Edwards wrote:
The hardware guys were, of course, chained to the Altera VHDL IDE
software for the duration -- presumably for heinous sins committed
in a previous life.
Nah, you can get out of the IDE there too. You wind up having to write
Makefiles that write and call Tcl scripts that communicate with a
jtag-server executable that you extract from deep within the bowels of
the IDE. It's deeply unpleasant, and still preferable for production
code to using the IDE.
Can you avoid using the IDE to compile the VHDL and build the various
formats of bitstream files?
On 12/5/18 3:50 PM, Grant Edwards wrote:
On 2018-12-05, Rob Gaddi <rgaddi@highlandtechnology.invalid> wrote:
On 12/5/18 2:34 PM, Grant Edwards wrote:
The hardware guys were, of course, chained to the Altera VHDL IDE
software for the duration -- presumably for heinous sins committed
in a previous life.
Nah, you can get out of the IDE there too. You wind up having to write Makefiles that write and call Tcl scripts that communicate with a
jtag-server executable that you extract from deep within the bowels of
the IDE. It's deeply unpleasant, and still preferable for production
code to using the IDE.
Can you avoid using the IDE to compile the VHDL and build the various
formats of bitstream files?
Mostly. You (practically) have to use the IDE to configure the settings file, the .qsf, which tells it what bitstreams to make, what the source files are, etc. Once that file is correct (and it's text, so it's VCSable), you can just run make.
See below, one of my team's Quartus makefiles. We're doing the same in Xilinx Vivado these days, which was again a tedious and awful process to
get going. I have no idea why no FPGA vendor believes that repeatable build control is something that matters to their customer base; left to
my own devices we'd be doing CI on the version control server.
[snip]
In related news, https://hdlmake.readthedocs.io seems to have come along quite a way since the last time I looked in on it. Might have to give
it a try out on my next project.
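For anyone who has not scripted Quartus before, a rough illustration (this is not the snipped makefile above; the project/revision name and file list are placeholders, and the output path varies between Quartus versions) of driving the command-line tools from make:

# Illustrative: once the .qsf/.qpf exist, the flow is four command-line steps.
PROJECT := my_fpga        # placeholder Quartus project/revision name

output_files/$(PROJECT).sof: $(wildcard src/*.vhd) $(PROJECT).qsf
	quartus_map $(PROJECT)
	quartus_fit $(PROJECT)
	quartus_asm $(PROJECT)
	quartus_sta $(PROJECT)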
On 2018-12-05, Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:
Never, ever, use any software written or provided by the silicon
vendor. Every time I've failed to obey that rule, I've regretted it.
How about for FPGAs? ;)
I spent some time working with a NIOS2 core on an Altera Cyclone-something-or-other. In the beginning, somebody got conned
into using the Altera tools for doing software development. As
expected, they were horrendous. It was Eclipse with a bunch of
plugins.
IIRC, there were Eclipse plugins that called scripts written in bash
that called Perl scripts that called Java apps that generated TCL that
got fed to other scripts that generated header files... and on and on
and on. The tools required more RAM than most of our development machines had. And it appeared to re-generate everything from scratch every time you wanted to build anything.
After fighting with that for a few months we threw it all out and
started from scratch with the gnu toolchain, makefiles, and our own
header files we wrote with info gleaned from the above mess.
There was also some sort of gdb-server executable that we extracted
from deep within the bowels of the Altera IDE. We had to write
some sort of wrapper for that to get it to run stand-alone and talk to
the USB byte-blaster thingy.
Once we ditched the massive pile of Altera's garbage IDE, things went
much smoother. [Until, as the project neared completion, it became
obvious that the performance of the NIOS2 was nowhere near what was
promised, and the whole thing was abandoned.]
The hardware guys were, of course, chained to the Altera VHDL IDE
software for the duration -- presumably for heinous sins committed in
a previous life.
with a single command, but that's not the biggest problem.
(my current issue is Xilinx IP Integrator's idea of schematic capture from the 1980s, complete with a mush of overlapping wires, and I am trying to work out whether I can build complex SoCs entirely from tcl - in this case I
think the GUI is so awful anything is better)
On 12/6/18 12:12 AM, Theo Markettos wrote:
with a single command, but that's not the biggest problem.
(my current issue is Xilinx IP Integrator's idea of schematic capture from the 1980s, complete with a mush of overlapping wires, and I am trying to work out whether I can build complex SoCs entirely from tcl - in this case I
think the GUI is so awful anything is better)
I actually like the graphical interface for putting complex top-level
blocks together (at least until VHDL-2018 comes out with interfaces),
and you can make it write bad but sufficient Tcl that you can lock down
for CI.
But have you run into the fact yet that, while the synthesis engine supports VHDL-2008, IP Integrator doesn't? You can't even write a thin wrapper; any VHDL-2008 anywhere in your design poisons the whole thing such that IPI can't work with it.
Grant Edwards <invalid@invalid.invalid> wrote:
I spent some time working with a NIOS2 core on an Altera
Cyclone-something-or-other. In the beginning, somebody got conned
into using the Altera tools for doing software development. As
expected, they were horrendous. It was Eclipse with a bunch of
plugins.
It wasn't just any Eclipse, it was a fork of Eclipse from 2005. Eclipse itself got a lot better, Altera's didn't.
I inherited a teaching lab which used Altera Eclipse on NIOS2, but I'd find I'd always have to revert to the command line to work out what was actually going on. When I rewrote the lab (and we moved away from NIOS to RISC-V), I junked the IDE and went with terminals and Makefile-based development - on the basis that it's something that students should be exposed to at some point in their careers, and it makes debugging their code a lot more sane from our point of view. They still drive Quartus via the GUI (because students start not knowing what an FPGA is, and it's easier for them to understand what's happening via the GUI) but Modelsim they mostly drive through pre-supplied scripts, given Modelsim's non-intuitive GUI.
Theo
These days, they are invariably organized as plugins for standard Eclipse.
On Thursday, December 6, 2018 at 3:51:46 PM UTC-5, David Brown wrote:
These days, they are invariably organized as plugins for standard Eclipse.
Except Microchip ;-(
On 06/12/2018 22:47, Dave Nadler wrote:
On Thursday, December 6, 2018 at 3:51:46 PM UTC-5, David Brown wrote:
These days, they are invariably organized as plugins for standard Eclipse.
Except Microchip ;-(
Yes - they have NetBeans (with plugins) for PIC, and I presume they
still have Atmel's MSVS-based Atmel Studio.