One of the annoying things about Linux is how hard it is to build binaries that work on all reasonably recent distributions of the operating system. Dynamic linking is favored on Linux for a number of reasons, but it puts the burden on the user of making sure that all dependencies are met, and this can quickly lead to what is popularly called "dependency hell".
Yes, package managers such as apt-get and yum are supposed to solve this problem. However, they require the user to have root access, and they place an extra burden on the developer of making sure the software is available in the package managers used by the various distributions. It's hard enough to convince the remaining non-web/mobile application developers to do a Linux port at all, since there are far more Windows users than Linux users, and the barrier is a lot higher if developers also have to worry about multiple distributions of Linux.
The solution is static linking (strictly speaking, static linking isn't required; adding $ORIGIN to the rpath, discussed below, is an alternative). Statically link all the dependencies your program uses (including libstdc++ if you're using C++), with the exception of libc and the X11 and OpenGL libraries; a sketch of such a link line follows. Why not statically link libc as well? You may be able to do that if your application doesn't dynamically load any library at all. But most applications use dlopen() to load some system library (even if you don't call it yourself, some library you are using, such as librt, does), and if you've statically linked libc, dlopen() will fail unless the system library exactly matches the version of glibc that you linked in. So, don't statically link libc.
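As a rough illustration, a link line might look something like this (a minimal sketch: the mygame name is made up, and a real static SDL2 link typically needs additional system libraries):

    # Hypothetical link line: libgcc, libstdc++, and SDL2 are linked
    # statically; libc, X11, and OpenGL stay dynamic.
    g++ -o mygame main.o \
        -static-libgcc -static-libstdc++ \
        -Wl,-Bstatic -lSDL2 -Wl,-Bdynamic \
        -lX11 -lGL -ldl -lpthread -lm

The -Wl,-Bstatic/-Wl,-Bdynamic pair tells the linker to take the libraries between them from static archives, while everything after it (and libc itself) is resolved dynamically.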
If you compile your code on a modern Linux distro, your application may require a recent version of libc and fail when run on an older distribution that hasn't been updated (you can check what version of libc your binary needs with objdump -p ./myBinary, as shown below, and DistroWatch can tell you what version of glibc various distributions use). So, it's best to compile the application on the oldest distribution you want to support. If you're building something that depends on X, I recommend Debian Etch in a virtual machine. It uses glibc 2.3.6 and Xorg 1.1.1. I ran into trouble when trying to use SDL2 on older versions of Debian because they used XFree86 instead of Xorg. If you're building a command-line application, you may be able to get away with an old Linux distribution in a chroot jail instead of a full-fledged virtual machine. Finally, modern 64-bit Linux distributions don't include 32-bit glibc by default. So, if you want to support 32-bit users, build a 32-bit binary, but don't expect it to run on 64-bit systems out of the box; release a separate 64-bit version so that your 64-bit users don't have to install 32-bit libraries on their machines.
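Concretely, a check like this (using the myBinary name from above) will show the glibc versions a binary pulls in:

    # Show the version references recorded in the binary:
    objdump -p ./myBinary | grep -A 10 'Version References'

    # Or list the distinct GLIBC symbol versions it needs, highest last:
    objdump -T ./myBinary | grep -o 'GLIBC_[0-9.]*' | sort -Vu

If the highest version listed is newer than the glibc on your target distribution, the binary won't run there.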
EDIT: Addressing a reddit comment.
- What if I'm using LGPL libraries and don't want to open source my code?
You don't have to statically link libraries if you don't want to. What you want to avoid is making the end user search for and install the libraries your application needs. Drop the library files your application needs into the same directory as the binary and include $ORIGIN in the rpath when linking. Now your binary will look for the libraries in the directory where it resides, rather than only in default paths such as /usr/lib; see the example below.
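For instance (a minimal sketch; myapp and libfoo are made-up names):

    # Embed $ORIGIN in the rpath so the dynamic loader searches the
    # binary's own directory before the default library paths.
    gcc -o myapp main.o -L. -lfoo -Wl,-rpath,'$ORIGIN'

    # Confirm that the rpath made it into the binary:
    objdump -p ./myapp | grep -i 'RPATH\|RUNPATH'

Note the single quotes around $ORIGIN: they keep the shell from expanding it, so the literal string reaches the linker (in a Makefile, write $$ORIGIN).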
- Why not just release the source in a tarball?
It's wonderful to be able to see/modify the source code of your application. But, in most cases, users don't want to go to the trouble of compiling your code. People like it when your application "just works", and you'll likely get more users this way.
- What about security issues in outdated libraries?
This is a judgment call. You probably don't want to statically link SSL libraries, so that you benefit when the system's copies are updated. But I'd recommend statically linking a library such as SDL2 rather than making the user install it. Ideally, we'd like our software to be long-lasting, so you also have to ask yourself whether the library you're dynamically linking against will religiously maintain ABI compatibility.
A few links on this topic:
- Dynamic linking is harmful
- CDE: Automatically create portable Linux applications
- Portable binaries for Linux games
By far the most reliable way I've found to run old binaries on Linux is ... to run a Windows binary under Wine.
We have the portable all-free-software binary handler, yay! It just has Win32 in the middle ...
Sad but true