Awesome article.
I didn’t know about the syntax with two sets of dependencies. How is that different from just this?
I didn’t know either about the trick with the parentheses to run several commands in one shell process, which is sometimes necessary. I thought you had to use ; and \ at the end of lines, which is annoying to write and read.
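For anyone following along: make runs each recipe line in its own shell, so state such as a cd is lost between lines unless you chain the commands yourself. A minimal sketch of the difference, with made-up target names:

unpack-broken:
	cd build
	tar xf ../src.tar    # runs in a fresh shell, so the cd above is gone

unpack:
	@(cd build && tar xf ../src.tar)    # one line, one shell process

Grouping the chained commands in parentheses also gives you a subshell, which keeps things like cd and environment changes contained.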
Also, I think it’s worth stressing that make won’t rebuild an existing target file if it’s more recent than all of its dependencies. It’s very good at this, and it saves a lot of time (compared to Gulp, for example, which insists on rebuilding everything every time, which gets very tedious, very quickly).

In fact, there is a solid case for using make instead of Grunt or Gulp for JavaScript builds. It’s fast, powerful, and extensively documented, and it only requires that the tools you use (Sass processor, minifier) ship an executable (which they all SHOULD, really); you don’t also need a Gulp plugin that might not exist or might not work properly.
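As a concrete illustration of the rebuild-skipping point, here is a minimal front-end build sketch; sass is assumed to be on PATH, and minify is a stand-in for whatever minifier you prefer:

css/site.css: scss/site.scss
	sass scss/site.scss css/site.css

js/app.min.js: js/app.js
	minify js/app.js > js/app.min.js

Run make twice in a row and the second run does nothing, because both targets are already newer than their dependencies.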
You know... I'm not sure, looking at it. Indeed, it does work perfectly fine as you've written it. I've just always handled this case like this; I'm going to have to think about what, if any, benefit the fancier syntax nets you.

I suppose there's a case to be made for documentation: there's a rule with a dependency on $(OBJ), so it makes sense that we should be able to find that variable as a target. It's less clear at a glance that your rule results in the proper target.

Exactly, that's pretty much the only reason I do it like this :)
This is a really good point. You're right: I also think make is still a strong tool stacked up against more recent choices. It's very unix-y; it does what it does and nothing else, and does that one thing very well. It's definitely more flexible than those other tools.

I read about static pattern rules, and I think I'm seeing a difference, but I'll have to run some tests.
web.mit.edu/gnu/doc/html/make_4.ht...
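For reference, the difference described on that page, sketched with assumed names: an ordinary pattern rule applies to any matching file make wants to build, while a static pattern rule restricts the same pattern to an explicit list of targets.

OBJ = foo.o bar.o

# Ordinary pattern rule: any .o that make needs can be built this way.
%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

# Static pattern rule: the same pattern, but only for the files in $(OBJ).
$(OBJ): %.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<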
Maybe you could have two lists of %.o files that you want to process with two different rules, so you prepend each rule with the corresponding list.

And also, as you said, it helps with documentation/readability. I once made a big Makefile that used ImageMagick to generate 12 derivatives from 3 intermediate formats for two types of source images. That’s 18 rules that you can only tell apart by their patterns; prepending each one with the variable containing its list surely makes it more readable.
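A sketch of that "two lists, two rules" idea, with hypothetical lists and flags:

FAST_OBJ  = render.o math.o
DEBUG_OBJ = logging.o config.o

$(FAST_OBJ): %.o: %.c
	$(CC) -O2 -c -o $@ $<

$(DEBUG_OBJ): %.o: %.c
	$(CC) -g -O0 -c -o $@ $<

The pattern is identical in both rules; the list in front is what tells you (and make) which files each rule covers.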
Ah, cool - thanks for the link!
I've still got so much to learn about make. That's a complex use case... my biggest issue with makefiles is readability, so anything that helps you out is worthwhile to me.
Nice intro article, thank you.
I rewrote the core Solaris build system (6000+ Makefiles; I led a team of 4 for 3 years to deliver it), so my make knowledge is skewed towards the make which comes with Solaris Studio, but there are lots of similarities with the GNU version.
First comment: you define your C++ compiler as
I'd have written this differently:
For C code you'd use CC and CFLAGS; you'd also specify LDFLAGS for the linker, and so on.
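Those names aren't arbitrary: CC, CFLAGS, CXX, CXXFLAGS, LDFLAGS and LDLIBS are the variables GNU make's built-in rules already consult, so defining them is often all you need. A sketch with placeholder flags:

CC       = gcc
CFLAGS   = -Wall -O2
CXX      = g++
CXXFLAGS = -Wall -O2 -std=c++17
LDFLAGS  = -L/opt/vendor/lib
LDLIBS   = -lm

# With these set, 'make foo' from foo.c roughly runs:
#   $(CC) $(CFLAGS) $(LDFLAGS) foo.c $(LDLIBS) -o foo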
When it comes to your bin/boot target, I'd have written that differently too, to make the subdir dependency explicit:

bin:
	@mkdir -p bin

bin/boot: bin
	@(curlcmd....)
	@chmod 0555 $@

I did a lot of gnarly things with Solaris' make pattern matching rules (for which I sometimes think I still need therapy). Case in point: Solaris has a closed subrepo for delivering binary blobs to partner companies. In order to remove a lot of extra work, I rewrote a few specific install rules to put those blobs in the right place in one hit:
Line noise making things faster, ftw.
The last comment I'll make is that the % character only matches the same pattern in the target and the dependency; it's a rather simplistic regex:

$(BUILDDIR)/%.o: $(SRCDIR)/%.c

We had to work around this all the time, and that limitation alone added several months to the project while we figured out the most efficient way of doing so.
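To make that limitation concrete (BUILDDIR and SRCDIR are assumed variables, and this is GNU-make flavoured):

# The stem matched by % in the target is substituted verbatim into the
# prerequisite, so this rule can only pair build/foo.o with src/foo.c:
$(BUILDDIR)/%.o: $(SRCDIR)/%.c
	$(CC) $(CFLAGS) -c -o $@ $<

# If the two trees don't mirror each other, you need a workaround such as
# vpath, one static pattern rule per list, or generated rules.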
Ah, thanks so much for the tips! That's one issue with such a flexible tool - a bunch of ways to do something can mean it's hard to find the way to do it best. I can see at a glance why both your suggestions are superior but also see why I got where I did and stopped thinking. I'll keep this in mind in the future.
That's a terrifying recipe. I love it.
I did not know that about % - thanks for pointing out the distinction!

Great article!
I use make in really crappy ways - very unsophisticated. Each makefile is more of a dumping ground for useful scripts to run on a project (build, run, test, clean... stuff like that). This is a bit more advanced.

Could you talk me through two things?

help:
	@echo "Usage: make {deps|help}" 1>&2 && false

What's going on with the redirect and the false at the end? (I know what the @ does.)

And why do you need to add .PHONY at the top for the tasks with no output? I don't do this, it works OK... am I quietly breaking stuff?

Thanks!
Thanks so much for pointing this out - I completely forgot to cover it!
I'll mention it here in case anyone doesn't know about @: it suppresses printing that line to stdout when make runs.

The redirect just sends the echo to stderr instead of stdout, which in practice doesn't really make a difference when invoked at the command line. Returning false signals a failed recipe: if any line in a recipe returns false, the target stops building and make considers the target failed. Here it's just a way to ensure that make exits immediately after displaying the help line, and since no target was built, a non-zero exit code is appropriate.

None of it is strictly necessary; it'll work fine without it. One way it can be used is to make help your default rule. That prevents anything from happening by default and prompts the user to choose what they want to do.

Why do you need to add .PHONY at the top for the tasks with no output?

You don't need to, and you're not breaking anything. It can prevent name collisions with actual targets if you ever run into that problem, and when make evaluates these rules the implicit rule search is skipped. There's also a whole use case around recursive sub-invocations of make, but I've never run into that sort of thing. Basically it skips some useless work, which is a performance bump, but chances are you're not running into much of that!
So you can probably continue to omit them without worrying too much, but I don't think it's a terrible habit either. It also serves as documentation, to some minimal degree.
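A small sketch putting both suggestions together; the target names are illustrative and .DEFAULT_GOAL is a GNU make feature:

.DEFAULT_GOAL := help

# None of these rules produce a file with the same name, so mark them phony.
.PHONY: help deps clean

help:
	@echo "Usage: make {deps|clean|help}" 1>&2 && false

With this in place, a bare make prints the usage line and exits non-zero instead of silently running whatever rule happens to come first.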
Thanks Ben :D
Thanks for this!! I'll be honest I know enough about makefiles to compile a project that's using them, but haven't ever written more than the most basic one myself. And I know they are WAY more powerful than that!
Bookmarked and will read later tonight hopefully!
The problem is that as you get to more and more powerful recipes you get further and further away from anything readable ;)
I have the impression that the more 'powerful' my recipes get, the less I'm thinking in target: dependency terms.

The real power of make is that it converts requirements into target files/dirs. "PHONY" targets are to be shunned as much as possible if you want to make use of this power, since they leave nothing to be required, so they're really at the end of the chain.

I don't know if I agree they should be shunned so much as used sparingly. I think they do help organize sets of subtasks, especially as makefiles grow with many related recipes. I'd much prefer a little extra verbosity to keep my makefile organized and readable.
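The two positions can coexist: keep the phony names as thin entry points and let everything they depend on be real files, so the timestamp machinery still does the work. A sketch with made-up names:

.PHONY: all clean

all: bin/app            # phony entry point; the real dependency chain is files

bin/app: main.o util.o
	$(CC) -o $@ main.o util.o

clean:
	rm -f bin/app main.o util.o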
You're absolutely not alone in that.
But make isn't intended to organize tasks, is it? It's intended to build stuff out of other stuff, honouring the dependencies:

"GNU Make is a tool which controls the generation of executables and other non-source files of a program from the program's source files."
These PHONY targets mostly contain bash scripts as recipes. Would it be any worse to create actual bash scripts out of them?
I've lost hours and hours trying to get my code changes into a flash image, thinking that make image would do it. The makefile authors assumed that new developers would be smart enough to understand that not all make targets have explicit dependencies (that would slow the build down). Surprise: I wasn't.

That's what I meant: I prefer the targets to be actual targets, because then the set of rules I have to remember (and forward to new devs, and document, ...) becomes much smaller. Less room for human error. And after all, that's what causes the biggest cost for the company.
Thank you for the great article!
I love seeing people rediscover tools which work really well.
If you want to step up some more, you can go to full autotools: it adds release creation, library detection and clean multiplatform resource installation to Makefiles: draketo.de/light/english/free-soft...
I’m working (slowly) on building a tool to make setup of autotools as convenient as modern language environments: bitbucket.org/ArneBab/conf/
Part of that is a Makefile with nice help-output: bitbucket.org/ArneBab/conf/src/314...
aaaargggghghhhh please Cthulhu no!
autotools' library and feature detection is definitely not clean, it's difficult to maintain and hack around the writer's assumptions, and there are much better systems around like CMake and pkg-config. [For the record, I hate CMake, but it's easier to beat a project into shape with it than autotools.]
Where does my bias come from? I've got nearly 30 years of experience at this cross-platform feature detection caper if you count automake, and before that xmkmf with Imakefiles. Every time I have to hack on an aclocal+friends feature check I come across comments like this (intltool 0.50.2):

# This macro actually does too much. Some checks are only needed if
# your package does certain things. But this isn't really a big deal.

# Fake the existence of programs that GNU maintainers use. -*- Autoconf -*-

# If the user did not use the arguments to specify the items to instantiate,
# then the envvar interface is used. Set only those that are not.
# We use the long form for the default assignment because of an extremely
# bizarre bug on SunOS 4.1.3.
if $ac_need_defaults; then
  test "${CONFIG_FILES+set}" = set || CONFIG_FILES=$config_files
fi
SunOS 4.1.3 - an OS that even in 2014 (when that version of intltool was released) had been obsoleted many years prior. Nobody, however, had bothered to remove that comment or check.
To me, autotools are the promotion of bitrot - and a maintenance nightmare.
Use something better.
/me takes curmudgeon hat off for a bit.
Ah, nice! Thanks for the links, Autotools is next on my list!
If you want something which goes even further, here’s a setup to build a complete book from emacs org-mode via LaTeX using autotools, with references from bibtex, and with index and glossary:
You can even edit with the build environment via ./edit.sh
Get the full repository via hg clone bitbucket.org/ArneBab/1w6/
Nice article, thanks :) We use this to auto-document our makefiles (see marmelab.com/blog/2016/02/29/auto-...); I thought you might find it interesting:
The help command shows a list of all commands, with the associated help text extracted from the ## comments.

Aha! Very cool, thanks for sharing. Much better than hardcoding.
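The gist of the technique, sketched from memory rather than copied from the post (the marmelab version differs in details such as sorting and colours): each task carries a trailing ## comment, and the help target greps them back out.

deps:  ## Install project dependencies
	npm install

test:  ## Run the test suite
	npm test

help:  ## Show this help
	@grep -E '^[a-zA-Z_-]+:.*## ' $(MAKEFILE_LIST) | awk -F ':.*## ' '{printf "%-15s %s\n", $$1, $$2}'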
Now this is what I have always wanted to know. I know what I'm reading tonight 😄
Please let me know if I just make it more confusing!
Nobody ever sat me down and showed me how this tool worked, I kinda had to figure it out piecemeal, so here's hoping this helps someone else.
Who is teaching this stuff anyway? That's why this article is so rare and useful. I've got to the part with the := syntax and I'm trying to visualise what it means, but I think I'm going to have to try it out. #inspired #unsupportedInlineTags
The docs! I didn't add a link to the top level. Here's the link to the part specifically about the two variable types.
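For anyone else puzzling over it, the distinction in one small example (GNU make): = defines a recursively expanded variable whose right-hand side is re-evaluated every time it's used, while := evaluates the right-hand side once, at the point of definition.

FOO  = $(BAR)    # expanded when used, so it sees the later assignment to BAR
BAZ := $(BAR)    # expanded right here, while BAR is still empty
BAR  = hello

show:
	@echo "FOO=$(FOO)"    # prints FOO=hello
	@echo "BAZ=$(BAZ)"    # prints BAZ= (empty)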
That's what I used in the 90s. I like the idea of dependency resolution: if you need multiple steps like .cpp -> .o and .o -> .exe, you just tell make "I want the .exe" and make figures out "OK, for this I need foo.o and bar.o, so I need to compile foo.cpp and bar.cpp, and then I can call the linker", etc.
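That chain, sketched as rules (the file names are made up, and on a Unixy system you would normally drop the .exe suffix):

OBJ = foo.o bar.o

app: $(OBJ)
	$(CXX) -o $@ $(OBJ)

%.o: %.cpp
	$(CXX) -c -o $@ $<

Asking for make app compiles foo.cpp and bar.cpp into objects first, then links them, and on later runs it only redoes the steps whose inputs have changed.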
I agree, it's a tool that's extraordinarily well suited to its domain.
It is really nice to see a good "back to basics UNIX hacking tools" article here!
Thanks a lot for the detailed information. I am learning golang and trying to write Makefiles there. This is helpful.