Package Management is an often requested addition to the LFS Book. A package manager tracks the installation of files, making it easy to remove and upgrade packages. In addition to binary and library files, a package manager will handle the installation of configuration files. Before you begin to wonder, NO, this section will not discuss or recommend any particular package manager. What it does provide is a roundup of the more popular techniques and how they work. The perfect package manager for you may be among these techniques, or it may be a combination of two or more of them. This section also briefly mentions issues that may arise when upgrading packages.
Some reasons why no package manager is mentioned in LFS or BLFS include:
Dealing with package management takes the focus away from the goals of these books—teaching how a Linux system is built.
There are multiple solutions for package management, each having its strengths and drawbacks. Including one that satisfies all audiences is difficult.
There are some hints written on the topic of package management. Visit the Hints Project and see if one of them fits your needs.
A Package Manager makes it easy to upgrade to newer versions when they are released. Generally the instructions in the LFS and BLFS books can be used to upgrade to the newer versions. Here are some points that you should be aware of when upgrading packages, especially on a running system.
If the Linux kernel needs to be upgraded (for example, from 5.10.17 to 5.10.18 or 5.11.1), nothing else needs to be rebuilt. The system will keep working fine thanks to the well-defined interface between the kernel and userspace. Specifically, the Linux API headers need not be (and should not be; see the next item) upgraded along with the kernel. You will need to reboot your system to use the upgraded kernel.
If the Linux API headers or Glibc need to be upgraded to a newer version (e.g. from glibc-2.31 to glibc-2.32), it is safer to rebuild LFS. Though you may be able to rebuild all the packages in their dependency order, we do not recommend it.
If a package containing a shared library is updated, and the name of the library changes, then any packages dynamically linked to the library need to be recompiled in order to link against the newer library. (Note that there is no correlation between the package version and the name of the library.) For example, consider a package foo-1.2.3 that installs a shared library named libfoo.so.1. If you upgrade the package to a newer version foo-1.2.4 that installs a shared library named libfoo.so.2, then any packages dynamically linked to libfoo.so.1 need to be recompiled to link against libfoo.so.2 in order to use the new library version. You should not remove the old libraries until all the dependent packages have been recompiled.
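As a rough check for which installed programs still reference the old library name, a scan with ldd can help. This is only a minimal sketch; the directories scanned and the libfoo.so.1 name are placeholders to adjust for your system:
for f in /usr/bin/* /usr/sbin/*; do
    ldd "$f" 2>/dev/null | grep -q 'libfoo\.so\.1' && echo "$f"
done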
              
If a package containing a shared library is updated, and the name of the library doesn't change, but the version number of the library file decreases (for example, the library is still named libfoo.so.1, but the library file changes from libfoo.so.1.25 to libfoo.so.1.24), you should remove the library file from the previously installed version (libfoo.so.1.25 in this case). Otherwise, an ldconfig run (invoked by you on the command line, or by the installation of some other package) will reset the libfoo.so.1 symlink to point to the old library file, because it appears to be the “newer” version (its version number is larger). This situation may happen if you have to downgrade a package, or if a package suddenly changes the versioning scheme of its library files.
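A minimal sketch of that cleanup, assuming the stale file is /usr/lib/libfoo.so.1.25 (a hypothetical path):
rm -v /usr/lib/libfoo.so.1.25
ldconfig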
              
If a package containing a shared library is updated, and the name of the library doesn't change, but a severe issue (especially a security vulnerability) is fixed, all running programs linked to the shared library should be restarted. The following command, run as root after the update, will list which processes are using the old versions of those libraries (replace libfoo with the name of the library):
              
grep -l  -e 'libfoo.*deleted' /proc/*/maps |
   tr -cd 0-9\\n | xargs -r ps u
If OpenSSH is being used to access the system and it is linked to the updated library, you need to restart the sshd service, then log out, log back in, and rerun that command to confirm that nothing is still using the deleted libraries.
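For example, on a system using the sysvinit-style LFS bootscripts the restart might look like the following; the script path is an assumption (a systemd-based system would use systemctl restart sshd instead):
/etc/rc.d/init.d/sshd restart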
If a binary or a shared library is overwritten, the processes using the code or data in that binary or library may crash. The correct way to update a binary or a shared library without crashing the processes using it is to remove the old file first, then install the new version in its place. The install command provided by Coreutils already does this, and most packages use it to install binaries and libraries, so you won't usually be troubled by this issue. However, the installation process of some packages (notably Mozilla JS in BLFS) simply overwrites the file if it exists and causes a crash, so it is safer to save your work and close unneeded running processes before updating a package.
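As a small illustration (libbar.so.1 is a hypothetical file name), removing the old file before copying the new one into place avoids rewriting a file that running processes may have mapped; install does the equivalent automatically:
# remove first, then copy the new file into place
rm -f /usr/lib/libbar.so.1
cp -v libbar.so.1 /usr/lib/
# or simply let install handle it
install -v -m755 libbar.so.1 /usr/lib/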
The following are some common package management techniques. Before making a decision on a package manager, do some research on the various techniques, particularly the drawbacks of the particular scheme.
Yes, this is a package management technique. Some folks do not find the need for a package manager because they know the packages intimately and know what files are installed by each package. Some users also do not need any package management because they plan on rebuilding the entire system when a package is changed.
This is a simplistic package management technique that does not need any extra package to manage the installations. Each package is installed in a separate directory. For example, package foo-1.1 is installed in /usr/pkg/foo-1.1 and a symlink is made from /usr/pkg/foo to /usr/pkg/foo-1.1. When a new version foo-1.2 is installed, it goes into /usr/pkg/foo-1.2 and the previous symlink is replaced by a symlink to the new version.
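A minimal sketch of this scheme, for a hypothetical package foo-1.1 with a standard configure script, might look like this:
./configure --prefix=/usr/pkg/foo-1.1
make
make install
ln -sfvn /usr/pkg/foo-1.1 /usr/pkg/foo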
          
Environment variables such as PATH, LD_LIBRARY_PATH, MANPATH, INFOPATH and CPPFLAGS need to be expanded to include /usr/pkg/foo. For more than a few packages, this scheme becomes unmanageable.
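For instance, a shell profile might be extended along these lines (a sketch; the subdirectory names assume a conventional layout beneath the symlinked prefix):
export PATH=/usr/pkg/foo/bin:$PATH
export LD_LIBRARY_PATH=/usr/pkg/foo/lib:$LD_LIBRARY_PATH
export MANPATH=/usr/pkg/foo/share/man:$MANPATH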
          
This is a variation of the previous package management technique. Each package is installed as in the previous scheme, but instead of making one symlink to the package directory, each file is symlinked into the /usr hierarchy. This removes the need to expand the environment variables. Though the symlinks can be created by the user, many package managers have been written to automate the process. A few of the popular ones include Stow, Epkg, Graft, and Depot.
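With GNU Stow, for example, the symlinking step might look like the following (a sketch assuming the package was installed under /usr/pkg; check the Stow documentation before relying on these options):
stow --dir=/usr/pkg --target=/usr foo-1.1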
          
The installation needs to be faked, so that the package thinks it is installed in /usr even though in reality it is installed in the /usr/pkg hierarchy. Installing in this manner is not usually a trivial task. For example, consider that you are installing a package libfoo-1.1. The following instructions may not install the package properly:
          
./configure --prefix=/usr/pkg/libfoo/1.1
make
make install
The installation will work, but the dependent packages may not link to libfoo as you would expect. If you compile a package that links against libfoo, you may notice that it is linked to /usr/pkg/libfoo/1.1/lib/libfoo.so.1 instead of /usr/lib/libfoo.so.1. The correct approach is to use the DESTDIR strategy to fake the installation of the package. This approach works as follows:
          
./configure --prefix=/usr
make
make DESTDIR=/usr/pkg/libfoo/1.1 install
Most packages support this approach, but there are some which do not. For the non-compliant packages, you may either need to manually install the package, or you may find that it is easier to install some problematic packages into /opt.
          
In this technique, a file is timestamped before the installation of the package. After the installation, a simple use of the find command with the appropriate options can generate a log of all the files installed after the timestamp file was created. A package manager written with this approach is install-log.
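A bare-bones version of this idea might look like the following sketch; the timestamp location and the log file name are hypothetical:
touch /tmp/timestamp
make install
find /usr /etc -newer /tmp/timestamp -not -type d > /var/log/foo-1.1.files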
Though this scheme has the advantage of being simple, it has two drawbacks. If, during installation, the files are installed with any timestamp other than the current time, those files will not be tracked by the package manager. Also, this scheme can only be used when one package is installed at a time. The logs are not reliable if two packages are being installed on two different consoles.
In this approach, the commands that the installation scripts perform are recorded. There are two techniques that one can use:
The LD_PRELOAD environment variable can be set to point to a library to be preloaded before installation. During installation, this library tracks the packages that are being installed by attaching itself to various executables such as cp, install, and mv, and tracking the system calls that modify the filesystem. For this approach to work, all the executables need to be dynamically linked without the suid or sgid bit. Preloading the library may cause some unwanted side-effects during installation. Therefore, it is advised that one performs some tests to ensure that the package manager does not break anything and logs all the appropriate files.
          
The second technique is to use strace, which logs all system calls made during the execution of the installation scripts.
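For instance, strace might be invoked like this to capture the file-related system calls made during the installation (a sketch; the log location is arbitrary):
strace -f -o /tmp/libfoo-install.log -e trace=file make install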
In this scheme, the package installation is faked into a separate tree as described in the symlink style package management. After the installation, a package archive is created using the installed files. This archive can then be used to install the package on the local machine or even on other machines.
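Continuing the hypothetical DESTDIR example above, a rudimentary binary package might be created and later unpacked like this (a sketch; the archive path is arbitrary):
make DESTDIR=/usr/pkg/libfoo/1.1 install
tar -cJf /var/pkg/libfoo-1.1.tar.xz -C /usr/pkg/libfoo/1.1 .
# later, on this machine or another one, unpack relative to / as root
tar -xpf /var/pkg/libfoo-1.1.tar.xz -C /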
This approach is used by most of the package managers found in the commercial distributions. Examples of package managers that follow this approach are RPM (which, incidentally, is required by the Linux Standard Base Specification), pkg-utils, Debian's apt, and Gentoo's Portage system. A hint describing how to adopt this style of package management for LFS systems is located at https://www.linuxfromscratch.org/hints/downloads/files/fakeroot.txt.
Creation of package files that include dependency information is complex and is beyond the scope of LFS.
Slackware uses a tar-based system for package archives. This system purposely does not handle package dependencies as more complex package managers do. For details of Slackware package management, see http://www.slackbook.org/html/package-management.html.
This scheme, unique to LFS, was devised by Matthias Benkmann, and is available from the Hints Project. In this scheme, each package is installed as a separate user into the standard locations. Files belonging to a package are easily identified by checking the user ID. The features and shortcomings of this approach are too complex to describe in this section. For the details please see the hint at https://www.linuxfromscratch.org/hints/downloads/files/more_control_and_pkg_man.txt.
One of the advantages of an LFS system is that there are no files that depend on the position of files on a disk system. Cloning an LFS build to another computer with the same architecture as the base system is as simple as using tar on the LFS partition that contains the root directory (about 250MB uncompressed for a base LFS build), copying that file via network transfer or CD-ROM to the new system, and expanding it. From that point, a few configuration files will have to be changed. Configuration files that may need to be updated include: /etc/hosts, /etc/fstab, /etc/passwd, /etc/group, /etc/shadow, and /etc/ld.so.conf.
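The archive-and-restore step might look something like this sketch, run as root; the /mnt/lfs mount point is a placeholder for wherever the LFS partition (the original build or the new system's empty partition) happens to be mounted:
# on the system holding the finished LFS build:
tar -cJf lfs-root.tar.xz -C /mnt/lfs .
# on the new system, with its empty root partition mounted at /mnt/lfs:
tar -xpf lfs-root.tar.xz -C /mnt/lfs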
        
A custom kernel may need to be built for the new system depending on differences in system hardware and the original kernel configuration.
Note: There have been some reports of issues when copying between similar but not identical architectures. For instance, the instruction set for an Intel system is not identical to that of an AMD processor, and later versions of some processors may have instructions that are unavailable in earlier versions.
Finally, the new system has to be made bootable via Section 10.4, “Using GRUB to Set Up the Boot Process”.