<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE sect1 PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN"
"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
<!ENTITY % general-entities SYSTEM "../../general.ent">
%general-entities;
]>
<sect1 id="unpacking">
<?dbhtml filename="notes-on-building.html"?>
<sect1info>
<date>$Date$</date>
</sect1info>
<title>Notes on Building Software</title>
<para>Those people who have built an LFS system may be aware
of the general principles of downloading and unpacking software. Some
of that information is repeated here for those new to building
their own software.</para>
<para>Each set of installation instructions contains a URL from which you
can download the package. The patches, however, are stored on the LFS
servers and are available via HTTP. These are referenced as needed in the
installation instructions.</para>
<para>While you can keep the source files anywhere you like, we assume that
you have unpacked the package and changed into the directory created by the
unpacking process (the 'build' directory). We also assume you have
uncompressed any required patches and they are in the directory immediately
above the 'build' directory.</para>
<para>We can not emphasize strongly enough that you should start from a
<emphasis>clean source tree</emphasis> each time. This means that if
you have had an error during configuration or compilation, it's usually
best to delete the source tree and
re-unpack it <emphasis>before</emphasis> trying again. This obviously
doesn't apply if you're an advanced user used to hacking
<filename>Makefile</filename>s and C code, but if in doubt, start from a
clean tree.</para>
<sect2>
<title>Building Software as an Unprivileged (non-root) User</title>
<para>The golden rule of Unix System Administration is to use your
superpowers only when necessary. Hence, BLFS recommends that you
build software as an unprivileged user and only become the
<systemitem class='username'>root</systemitem> user when installing the
software. This philosophy is followed in all the packages in this book.
Unless otherwise specified, all instructions should be executed as an
unprivileged user. The book will advise you on instructions that need
<systemitem class='username'>root</systemitem> privileges.</para>
</sect2>
<sect2>
<title>Unpacking the Software</title>
<para>If a file is in <filename class='extension'>.tar</filename> format
and compressed, it is unpacked by running one of the following
commands:</para>
<screen><userinput>tar -xvf filename.tar.gz
tar -xvf filename.tgz
tar -xvf filename.tar.Z
tar -xvf filename.tar.bz2</userinput></screen>
<note>
<para>You may omit using the <option>v</option> parameter in the commands
shown above and below if you wish to suppress the verbose listing of all
the files in the archive as they are extracted. This can help speed up the
extraction as well as make any errors produced during the extraction
more obvious to you.</para>
</note>
<para>You can also use a slightly different method:</para>
<screen><userinput>bzcat filename.tar.bz2 | tar -xv</userinput></screen>
<para>Finally, you sometimes need to be able to unpack patches which are
generally not in <filename class='extension'>.tar</filename> format. The
best way to do this is to copy the patch file to the parent of the 'build'
directory and then run one of the following commands depending on whether
the file is a <filename class='extension'>.gz</filename> or <filename
class='extension'>.bz2</filename> file:</para>
<screen><userinput>gunzip -v patchname.gz
bunzip2 -v patchname.bz2</userinput></screen>
</sect2>
<sect2>
<title>Verifying File Integrity Using 'md5sum'</title>
<para>Generally, to verify that the downloaded file is genuine and complete,
many package maintainers also distribute md5sums of the files. To verify the
md5sum of the downloaded files, download both the file and the
corresponding md5sum file to the same directory (preferably from different
on-line locations), and (assuming <filename>file.md5sum</filename> is the
md5sum file downloaded) run the following command:</para>
<screen><userinput>md5sum -c file.md5sum</userinput></screen>
<para>If there are any errors, they will be reported. Note that the BLFS
book includes md5sums for all the source files also. To use the BLFS
supplied md5sums, you can create a <filename>file.md5sum</filename> (place
the md5sum data and the exact name of the downloaded file on the same
line of a file, separated by white space) and run the command shown above.
Alternately, simply run the command shown below and compare the output
to the md5sum data shown in the BLFS book.</para>
<screen><userinput>md5sum <replaceable>&lt;name_of_downloaded_file&gt;</replaceable></userinput></screen>
</sect2>
<sect2>
<title>Creating Log Files During Installation</title>
<para>For larger packages, it is convenient to create log files instead of
staring at the screen hoping to catch a particular error or warning. Log
files are also useful for debugging and keeping records. The following
command allows you to create an installation log. Replace
<replaceable>&lt;command&gt;</replaceable> with the command you intend to execute.</para>
<screen><userinput>( <replaceable>&lt;command&gt;</replaceable> 2&gt;&amp;1 | tee compile.log &amp;&amp; exit $PIPESTATUS )</userinput></screen>
<para><option>2&gt;&amp;1</option> redirects error messages to the same
location as standard output. The <command>tee</command> command allows
viewing of the output while logging the results to a file. The parentheses
around the command run the entire command in a subshell and finally the
<command>exit $PIPESTATUS</command> command ensures the result of the
<replaceable>&lt;command&gt;</replaceable> is returned as the result and not the
result of the <command>tee</command> command.</para>
</sect2>
<sect2 id="parallel-builds" xreflabel="Using Multiple Processors">
<title>Using Multiple Processors</title>
<para>For many modern systems with multiple processors (or cores) the
compilation time for a package can be reduced by performing a "parallel
make" by either setting an environment variable or telling the make program
how many processors are available. For instance, a Core2Duo can support two
simultaneous processes with: </para>
<screen><userinput>export MAKEFLAGS='-j2'</userinput></screen>
<para>or just building with:</para>
<screen><userinput>make -j2</userinput></screen>
<para>Generally the number of processes should not exceed the number of
cores supported by the CPU. To list the processors on your
system, issue: <userinput>grep processor /proc/cpuinfo</userinput>.
</para>
<para>In some cases, using multiple processors may result in a 'race'
condition where the success of the build depends on the order of the
commands run by the <command>make</command> program. For instance, if an
executable needs File A and File B, attempting to link the program before
one of the dependent components is available will result in a failure.
This condition usually arises because the upstream developer has not
properly designated all the prerequisites needed to accomplish a step in the
Makefile.</para>
<para>If this occurs, the best way to proceed is to drop back to a
single processor build. Adding '-j1' to a make command will override
the similar setting in the MAKEFLAGS environment variable.</para>
<note><para>When running the package tests or the install portion of the
package build process, we do not recommend using an option greater than
'-j1' unless specified otherwise. The installation procedures or checks
have not been validated using parallel procedures and may fail with issues
that are difficult to debug.</para></note>
</sect2>
<sect2 id="automating-builds" xreflabel="Automated Building Procedures">
<title>Automated Building Procedures</title>
<para>There are times when automating the building of a package can come in
handy. Everyone has their own reasons for wanting to automate building,
and everyone goes about it in their own way. Creating
<filename>Makefile</filename>s, <application>Bash</application> scripts,
<application>Perl</application> scripts or simply a list of commands used
to cut and paste are just some of the methods you can use to automate
building BLFS packages. Detailing how and providing examples of the many
ways you can automate the building of packages is beyond the scope of this
section. This section will expose you to using file redirection and the
<command>yes</command> command to help provide ideas on how to automate
your builds.</para>
<bridgehead renderas="sect3">File Redirection to Automate Input</bridgehead>
<para>You will find times throughout your BLFS journey when you will come
across a package that has a command prompting you for information. This
information might be configuration details, a directory path, or a response
to a license agreement. This can present a challenge to automate the
building of that package. Occasionally, you will be prompted for different
information in a series of questions. One method to automate this type of
scenario requires putting the desired responses in a file and using
redirection so that the program uses the data in the file as the answers to
the questions.</para>
<para>Building the <application>CUPS</application> package is a good
example of how redirecting a file as input to prompts can help you automate
the build. If you run the test suite, you are asked to respond to a series
of questions regarding the type of test to run and if you have any
auxiliary programs the test can use. You can create a file with your
responses, one response per line, and use a command similar to the
one shown below to automate running the test suite:</para>
<screen><userinput>make check &lt; ../cups-1.1.23-testsuite_parms</userinput></screen>
<para>This effectively makes the test suite use the responses in the file
as the input to the questions. Occasionally you may end up doing a bit of
trial and error determining the exact format of your input file for some
things, but once figured out and documented you can use this to automate
building the package.</para>
<bridgehead renderas="sect3">Using <command>yes</command> to Automate
Input</bridgehead>
<para>Sometimes you will only need to provide one response, or provide the
same response to many prompts. For these instances, the
<command>yes</command> command works really well. The
<command>yes</command> command can be used to provide a response (the same
one) to one or more instances of questions. It can be used to simulate
pressing just the <keycap>Enter</keycap> key, entering the
<keycap>Y</keycap> key or entering a string of text. Perhaps the easiest
way to show its use is in an example.</para>
<para>First, create a short <application>Bash</application> script by
entering the following commands:</para>
<screen><userinput>cat &gt; blfs-yes-test1 &lt;&lt; "EOF"
<literal>#!/bin/bash
echo -n -e "\n\nPlease type something (or nothing) and press Enter ---> "
read A_STRING
if test "$A_STRING" = ""; then A_STRING="Just the Enter key was pressed"
else A_STRING="You entered '$A_STRING'"
fi
echo -e "\n\n$A_STRING\n\n"</literal>
EOF
chmod 755 blfs-yes-test1</userinput></screen>
<para>Now run the script by issuing <command>./blfs-yes-test1</command> from
the command line. It will wait for a response, which can be anything (or
nothing) followed by the <keycap>Enter</keycap> key. After entering
something, the result will be echoed to the screen. Now use the
<command>yes</command> command to automate the entering of a
response:</para>
<screen><userinput>yes | ./blfs-yes-test1</userinput></screen>
<para>Notice that piping <command>yes</command> by itself to the script
results in <keycap>y</keycap> being passed to the script. Now try it with a
string of text:</para>
<screen><userinput>yes 'This is some text' | ./blfs-yes-test1</userinput></screen>
<para>The exact string was used as the response to the script. Finally,
try it using an empty (null) string:</para>
<screen><userinput>yes '' | ./blfs-yes-test1</userinput></screen>
<para>Notice this results in passing just the press of the
<keycap>Enter</keycap> key to the script. This is useful for times when the
default answer to the prompt is sufficient. This syntax is used in the
<xref linkend="net-tools-automate-example"/> instructions to accept all the
defaults to the many prompts during the configuration step. You may now
remove the test script, if desired.</para>
<bridgehead renderas="sect3">File Redirection to Automate Output</bridgehead>
<para>In order to automate the building of some packages, especially those
that require you to read a license agreement one page at a time, requires
using a method that avoids having to press a key to display each page.
Redirecting the output to a file can be used in these instances to assist
with the automation. The previous section on this page touched on creating
log files of the build output. The redirection method shown there used the
<command>tee</command> command to redirect output to a file while also
displaying the output to the screen. Here, the output will only be sent to
a file.</para>
<para>Again, the easiest way to demonstrate the technique is to show an
example. First, issue the command:</para>
<screen><userinput>ls -l /usr/bin | more</userinput></screen>
<para>Of course, you'll be required to view the output one page at a time
because the <command>more</command> filter was used. Now try the same
command, but this time redirect the output to a file. The special file
<filename>/dev/null</filename> can be used instead of the filename shown,
but you will have no log file to examine:</para>
<screen><userinput>ls -l /usr/bin | more &gt; redirect_test.log 2&gt;&amp;1</userinput></screen>
<para>Notice that this time the command immediately returned to the shell
prompt without having to page through the output. You may now remove the
log file.</para>
<para>The last example will use the <command>yes</command> command in
combination with output redirection to bypass having to page through the
output and then provide a <keycap>y</keycap> to a prompt. This technique
could be used in instances when otherwise you would have to page through
the output of a file (such as a license agreement) and then answer the
question of <quote>do you accept the above?</quote>. For this example,
another short <application>Bash</application> script is required:</para>
<screen><userinput>cat &gt; blfs-yes-test2 &lt;&lt; "EOF"
<literal>#!/bin/bash
ls -l /usr/bin | more
echo -n -e "\n\nDid you enjoy reading this? (y,n) "
read A_STRING
if test "$A_STRING" = "y"; then A_STRING="You entered the 'y' key"
else A_STRING="You did NOT enter the 'y' key"
fi
echo -e "\n\n$A_STRING\n\n"</literal>
EOF
chmod 755 blfs-yes-test2</userinput></screen>
<para>This script can be used to simulate a program that requires you to
read a license agreement, then respond appropriately to accept the
agreement before the program will install anything. First, run the script
without any automation techniques by issuing
<command>./blfs-yes-test2</command>.</para>
<para>Now issue the following command which uses two automation techniques,
making it suitable for use in an automated build script:</para>
<screen><userinput>yes | ./blfs-yes-test2 &gt; blfs-yes-test2.log 2&gt;&amp;1</userinput></screen>
<para>If desired, issue <command>tail blfs-yes-test2.log</command> to see
the end of the paged output, and confirmation that <keycap>y</keycap> was
passed through to the script. Once satisfied that it works as it should,
you may remove the script and log file.</para>
<para>Finally, keep in mind that there are many ways to automate and/or
script the build commands. There is not a single <quote>correct</quote> way
to do it. Your imagination is the only limit.</para>
</sect2>
<sect2>
<title>Dependencies</title>
<para>For each package described, BLFS lists the known dependencies.
These are listed under several headings, whose meaning is as follows:</para>
<itemizedlist>
<listitem>
<para><emphasis>Required</emphasis> means that the target package
cannot be correctly built without the dependency having first been
installed.</para>
</listitem>
<listitem>
<para><emphasis>Recommended</emphasis> means that BLFS strongly
suggests this package is installed first for a clean and trouble-free
build, that won't have issues either during the build process, or at
run-time. The instructions in the book assume these packages are
installed. Some changes or workarounds may be required if these
packages are not installed.</para>
</listitem>
<listitem>
<para><emphasis>Optional</emphasis> means that this package might be
installed for added functionality. Often BLFS will describe the
dependency to explain the added functionality that will result.</para>
</listitem>
</itemizedlist>
</sect2>
<sect2 id="package_updates">
<title>Using the Most Current Package Sources</title>
<para>On occasion you may run into a situation in the book when a package
will not build or work properly. Though the Editors attempt to ensure
that every package in the book builds and works properly, sometimes a
package has been overlooked or was not tested with this particular version
of BLFS.</para>
<para>If you discover that a package will not build or work properly, you
should see if there is a more current version of the package. Typically
this means you go to the maintainer's web site and download the most current
tarball and attempt to build the package. If you cannot determine the
maintainer's web site by looking at the download URLs, use Google and query
the package's name. For example, in the Google search bar type:
'package_name download' (omit the quotes) or something similar. Sometimes
typing: 'package_name home page' will help you find the
maintainer's web site.</para>
</sect2>
<sect2 id="stripping">
<title>Stripping One More Time</title>
<para>
In LFS, stripping of debugging symbols was discussed a couple of
times. When building BLFS packages, there are generally no special
instructions that discuss stripping again. It is not a good idea to
strip an executable or a library while it is in use, so exiting any
windowing environment first is advisable. Then you can do:
</para>
<screen><userinput>find /{,usr/}{bin,lib,sbin} \
-type f \( -name \*.so* -a ! -name \*dbg \) \
-exec strip --strip-unneeded {} \;</userinput></screen>
<para>
If you install programs in other directories such as <filename
class="directory">/opt</filename> or <filename
class="directory">/usr/local</filename>, you may want to strip the files
there too.
</para>
<para>
For more information on stripping, see <ulink
url="http://www.technovelty.org/linux/stripping-shared-libraries.html"/>.
</para>
</sect2>
<!--
<sect2 id="libtool">
<title>Libtool files</title>
<para>
One of the side effects of packages that use Autotools, including
libtool, is that they create many files with an .la extension. These
files are not needed in an LFS environment. If there are conflicts with
pkgconfig entries, they can actually prevent successful builds. You
may want to consider removing these files periodically:
</para>
<screen><userinput>find /lib /usr/lib -not -path "*Image*" -a -name \*.la -delete</userinput></screen>
<para>
The above command removes all .la files with the exception of those that
have <quote>Image</quote> or <quote>openldap</quote> as a part of the
path. These .la files are used by the ImageMagick and openldap programs,
respectively. There may be other exceptions by packages not in BLFS.
</para>
</sect2>
-->
<sect2 id="buildsystems">
<title>Working with different build systems</title>
<para>
There are now three different build systems in common use for
converting C or C++ source code into compiled programs or
libraries, and their details (particularly, finding out about available
options and their default values) differ. It may be easiest to understand
the issues caused by some choices (typically slow execution, or
unexpected use or omission of optimizations) by starting with
the CFLAGS and CXXFLAGS environment variables. There are also some
programs which use rust.
</para>
<para>
Most LFS and BLFS builders are probably aware of the basics of CFLAGS
and CXXFLAGS for altering how a program is compiled. Typically, some
form of optimization is used by upstream developers (-O2 or -O3),
sometimes with the creation of debug symbols (-g), as defaults.
</para>
<para>
If there are contradictory flags (e.g. multiple different -O values),
the <emphasis>last</emphasis> value will be used. Sometimes this means
that flags specified in environment variables will be placed before
the values hardcoded in the Makefile, and therefore ignored. For example,
where a user specifies '-O2' and that is followed by '-O3', the build will
use '-O3'.
</para>
<para>
There are various other things which can be passed in CFLAGS or
CXXFLAGS, such as forcing compilation for a specific microarchitecture
(e.g. -march=amdfam10, -march=native) or specifying a specific standard
for C or C++ (-std=c++17 for example). But one thing which has now come
to light is that programmers might include debug assertions in their
code, expecting them to be disabled in releases by using -DNDEBUG.
Specifically, if <xref linkend="mesa"/> is built with these assertions
enabled, some activities such as loading levels of games can take
extremely long times, even on high-end video cards.
</para>
<bridgehead renderas="sect3" id="autotools-info">Autotools with Make</bridgehead>
<para>
This combination is often described as 'CMMI' (configure, make, make
install) and is used here to also cover the few packages which have a
configure script that is not generated by autotools.
</para>
<para>
Sometimes running <command>./configure --help</command> will produce
useful options about switches which might be used. At other times,
after looking at the output from configure you may need to look
at the details of the script to find out what it was actually searching
for.
</para>
<para>
Many configure scripts will pick up any CFLAGS or CXXFLAGS from the
environment, but CMMI packages vary about how these will be mixed with
any flags which would otherwise be used (<emphasis>variously</emphasis>:
ignored, used to replace the programmer's suggestion, used before the
programmer's suggestion, or used after the programmer's suggestion).
</para>
<para>
In most CMMI packages, running 'make' will list each command and run
it, interspersed with any warnings. But some packages try to be 'silent'
and only show which file they are compiling or linking instead of showing
the command line. If you need to inspect the command, either because of
an error, or just to see what options and flags are being used, adding
'V=1' to the make invocation may help.
</para>
<bridgehead renderas="sect3" id="cmake-info">CMake</bridgehead>
<para>
CMake works in a very different way, and it has two backends which can
be used on BLFS: 'make' and 'ninja'. The default backend is make, but
ninja can be faster on large packages with multiple processors. To
use ninja, specify '-G Ninja' in the cmake command. However, there are
some packages which create fatal errors in their ninja files but build
successfully using the default of Unix Makefiles.
</para>
<para>
The hardest part of using CMake is knowing what options you might wish
to specify. The only way to get a list of what the package knows about
is to run <command>cmake -LAH</command> and look at the output for that
default configuration.
</para>
<para>
Perhaps the most-important thing about CMake is that it has a variety
of CMAKE_BUILD_TYPE values, and these affect the flags. The default
is that this is not set and no flags are generated. Any CFLAGS or
CXXFLAGS in the environment will be used. If the programmer has coded
any debug assertions, those will be enabled unless -DNDEBUG is used.
The following CMAKE_BUILD_TYPE values will generate the flags shown,
and these will come <emphasis>after</emphasis> any flags in the
environment and therefore take precedence.
</para>
<itemizedlist>
<listitem>
<para>Debug : '-g'</para>
</listitem>
<listitem>
<para>Release : '-O3 -DNDEBUG'</para>
</listitem>
<listitem>
<para>RelWithDebInfo : '-O2 -g -DNDEBUG'</para>
</listitem>
<listitem>
<para>MinSizeRel : '-Os -DNDEBUG'</para>
</listitem>
</itemizedlist>
<para>
CMake tries to produce quiet builds. To see the details of the commands
which are being run, use 'make VERBOSE=1' or 'ninja -v'.
</para>
<bridgehead renderas="sect3" id="meson-info">Meson</bridgehead>
<para>
Meson has some similarities to CMake, but many differences. To get
details of the defines that you may wish to change you can look at
<filename>meson_options.txt</filename> which is usually in the
top-level directory.
</para>
<para>
If you have already configured the package by running
<command>meson</command> and now wish to change one or more settings,
you can either remove the build directory, recreate it, and use the
altered options, or within the build directory run <command>meson
configure</command>, e.g. to set an option:
</para>
<screen><userinput>meson configure -D&lt;some_option&gt;=true</userinput></screen>
<para>
If you do that, the file <filename>meson-private/cmd_line.txt</filename>
will show the <emphasis>last</emphasis> commands which were used.
</para>
<para>
Meson provides the following buildtype values, and the flags they enable
come <emphasis>after</emphasis> any flags supplied in the environment and
therefore take precedence.
</para>
<itemizedlist>
<listitem>
<para>plain : no added flags. This is for distributors to supply their
own CLFAGS, CXXFLAGS and LDFLAGS. There is no obvious reason to use
this in BLFS.</para>
</listitem>
<listitem>
<para>debug : '-g'</para>
</listitem>
<listitem>
<para>debugoptimized : '-O2 -g' - this is the default if nothing is
specified, it leaves assertions enabled.</para>
</listitem>
<listitem>
<para>release : '-O3 -DNDEBUG' (but occasionally a package will force
-O2 here)</para>
</listitem>
</itemizedlist>
<para>
Although the 'release' buildtype is described as enabling -DNDEBUG, and all
CMake Release builds pass that, it has so far only been observed (in
verbose builds) for <xref linkend="mesa"/>. That suggests that it might
only be used when there are debug assertions present.
</para>
<para>
The -DNDEBUG flag can also be provided by passing
<command>-Db_ndebug=true</command>.
</para>
<para>
To see the details of the commands which are being run in a package using
meson, use 'ninja -v'.
</para>
<bridgehead renderas="sect3" id="rust-info">Rustc and Cargo</bridgehead>
<para>
Most released rustc programs are provided as crates (source tarballs)
which will query a server to check current versions of dependencies
and then download them as necessary. These packages are built using
<command>cargo build --release</command>. In theory, you can manipulate
RUSTFLAGS to change the optimize-level (the default is 3, like -O3, e.g.
<literal>-Copt-level=3</literal>) or to force it to build for the
machine it is being compiled on, using
<literal>-Ctarget-cpu=native</literal>, but in practice this seems to
make no significant difference.
</para>
<para>
If you find an interesting rustc program which is only provided as
unpackaged source, you should at least specify
<literal>RUSTFLAGS=-Copt-level=2</literal> otherwise it will do an
unoptimized compile with debug info and run <emphasis>much</emphasis>
slower.
</para>
<para>
The rust developers seem to assume that everyone will compile on a
machine dedicated to producing builds, so by default all CPUs are used.
This can often be worked around, either by exporting
CARGO_BUILD_JOBS=&lt;N&gt; or passing --jobs &lt;N&gt; to cargo. For
compiling rustc itself, specifying --jobs &lt;N&gt; on invocations of
x.py (together with the <envar>CARGO_BUILD_JOBS</envar> environment
variable, which looks like a "belt and braces" approach but seems to be
necessary) mostly works. The exception is running the tests when building
rustc; some of them will nevertheless use all online CPUs, at least as of
rustc-1.42.0.
</para>
</sect2>
<sect2 id="optimizations">
<title>Optimizing the build</title>
<para>
Many people will prefer to optimize compiles as they see fit, by providing
CFLAGS or CXXFLAGS. For an introduction to the options available with gcc
and g++ see <ulink
url="https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html"/> and <ulink
url="https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html"/>
and <command>info gcc</command>.
</para>
<para>
Some packages default to '-O2 -g', others to '-O3 -g', and if CFLAGS or
CXXFLAGS are supplied they might be added to the package's defaults,
replace the package's defaults, or even be ignored. There are details
on some desktop packages which were mostly current in April 2019 at
<ulink url="https://www.linuxfromscratch.org/~ken/tuning/"/> - in
particular, README.txt, tuning-1-packages-and-notes.txt, and
tuning-notes-2B.txt. The particular thing to remember is that if you
want to try some of the more interesting flags you may need to force
verbose builds to confirm what is being used.
</para>
<para>
Clearly, if you are optimizing your own program you can spend time to
profile it and perhaps recode some of it if it is too slow. But for
building a whole system that approach is impractical. In general,
-O3 usually produces faster programs than -O2. Specifying
-march=native is also beneficial, but means that you cannot move the
binaries to an incompatible machine - this can also apply to newer
machines, not just to older machines. For example, programs compiled for
'amdfam10' run on old Phenoms, Kaveris, and Ryzens, but programs
compiled for a Kaveri will not run on a Ryzen because certain op-codes
are not present. Similarly, if you build for a Haswell not everything
will run on a SandyBridge.
</para>
<para>
There are also various other options which some people claim are
beneficial. At worst, you get to recompile and test, and then
discover that in your usage the options do not provide a benefit.
</para>
<para>
If building Perl or Python modules, or Qt packages which use qmake,
in general the CFLAGS and CXXFLAGS used are those which were used by
those 'parent' packages.
</para>
</sect2>
<sect2 id="hardening">
<title>Options for hardening the build</title>
<para>
Even on desktop systems, there are still a lot of exploitable
vulnerabilities. For many of these, the attack comes via javascript
in a browser. Often, a series of vulnerabilities are used to gain
access to data (or sometimes to pwn, i.e. own, the machine and
install rootkits). Most commercial distros will apply various
hardening measures.
</para>
<para>
For hardening options which are reasonably cheap, there is some
discussion in the 'tuning' link above (occasionally, one or more
of these options might be inappropriate for a package). These
options are -D_FORTIFY_SOURCE=2, -fstack-protector-strong, and
(for C++) -D_GLIBCXX_ASSERTIONS. On modern machines these should
only have a little impact on how fast things run, and often they
will not be noticeable.
</para>
<para>
In the past, there was Hardened LFS where gcc (a much older version)
was forced to use hardening (with options to turn some of it off on a
per-package basis). What is being covered here is different - first you
have to make sure that the package is indeed using your added flags and
not over-riding them.
</para>
<para>
The main distros use much more, such as RELRO (Relocation Read Only)
and perhaps -fstack-clash-protection. You may also encounter the
so-called 'userspace retpoline' (-mindirect-branch=thunk etc.) which
is the equivalent of the spectre mitigations applied to the linux
kernel in late 2018. The kernel mitigations caused a lot of complaints
about lost performance; if you have a production server you might wish
to consider testing that, along with the other available options, to
see if performance is still sufficient.
</para>
<para>
Whilst gcc has many hardening options, clang/LLVM's strengths lie
elsewhere. Some options which gcc provides are said to be less effective
in clang/LLVM.
</para>
</sect2>
</sect1>