Methods to Optimize Code Size

This section provides guidance on how to achieve smaller object files and executables when using the optimizing features of Intel compilers.

There are two compiler options that are designed to prioritize code size over performance:

Os

Favors size over speed

This option enables optimizations that do not increase code size; it produces smaller code size than option O2.

Option Os disables some optimizations that may increase code size for a small speed benefit.

O1

Minimizes code size

Compared to option Os, option O1 disables even more optimizations that are generally known to increase code size. Specifying option O1 implies option Os.

As an intermediate step in reducing code size, you can replace option O3 with option O2 before specifying option O1.

Option O1 may improve performance for applications with very large code size, many branches, and execution time not dominated by code within loops.

For more information about compiler options mentioned in this topic, see their full descriptions in the Compiler Reference.

The rest of this topic briefly discusses other methods that may help you further improve code size even when compared to the default behaviors of options Os and O1.


Disable or Decrease the Amount of Inlining

Inlining replaces a call to a function with the body of the function. This lets the compiler optimize the code for the inlined function in the context of its caller, usually yielding more specialized and better performing code. This also removes the overhead of calling the function at run-time.

However, replacing a call to a function with the body of that function usually increases code size, sometimes substantially. To eliminate this code size increase, at the cost of the potential performance improvement, you can disable inlining.

Options to specify:

Linux*: fno-inline
Windows*: Ob0

Advantages:

Disabling or reducing this optimization can reduce code size.

Disadvantages:

Performance is likely to be sacrificed by disabling or reducing inlining, especially for applications with many small functions.

Strip Symbols from Your Binaries

You can specify a compiler option to omit debugging and symbol information from the executable without sacrificing its operability.

Options to specify:

Linux*: Wl,--strip-all
Windows*: None

Advantages:

This method noticeably reduces the size of the binary.

Disadvantages:

It may be very difficult to debug a stripped application.

Dynamically Link Intel-Provided Libraries

By default, some of the Intel support and performance libraries are linked statically into an executable. As a result, the library code is duplicated in every executable you build.

Linking these libraries dynamically instead avoids this duplication.

Options to specify:

Linux*: shared-intel
Windows*: MD

Note

Option MD affects all libraries, not only the Intel-provided ones.

Advantages:

Performance of the resulting executable is normally not significantly affected.

With this option, library code that would otherwise be statically linked into every executable no longer contributes to each executable's code size. The code is shared among all executables that use it and remains available independently of them.

Disadvantages:

The libraries on which the resulting executable depends must be redistributed with the executable for it to work properly.

When libraries are linked statically, only library content that is actually used is linked into the executable. Dynamic libraries, on the other hand, contain all the library content. Therefore, it may not be beneficial to use this option if you only need to build and/or distribute a single executable.

The executable itself may be much smaller when linked dynamically, compared to a statically linked executable. However, the total size of the executable plus shared libraries or DLLs may be much larger than the size of the statically linked executable.

Exclude Unused Code and Data from the Executable

Programs often contain dead code or data that is never used during execution. Even if no expensive whole-program interprocedural analysis is performed at compile time to identify dead code, there are compiler options you can specify to eliminate unused functions and data at link time.

This method is often referred to as function-level or data-level linking.

Options to specify:

Linux*: -fdata-sections -ffunction-sections -Wl,--gc-sections
Windows*: /Gw /Gy /link /OPT:REF

These options must all be specified together. The --gc-sections option (Linux*) and the /OPT:REF option (Windows*) are passed to the linker.

Advantages:

Only code that is actually referenced remains in the executable; dead functions and data are stripped out.

The options passed to the linker also allow it to reorder sections, enabling further optimization.

Disadvantages:

The object files may become slightly larger because each function or data item is placed in a separate section. This overhead is eliminated at link time.

This method requires linker support to strip unused sections.

This method can slightly increase linking time.

Disable Recognition and Expansion of Intrinsic Functions

When the compiler recognizes an intrinsic function, it can expand it inline or assume and link in a faster library implementation. By default, inline expansion of intrinsic functions is enabled.

In some cases, disabling this behavior may noticeably reduce the size of the produced object file or binary.

Options to specify:

Linux*: fno-builtin
Windows*: Oi-

Advantages:

Both the size of the object files and the size of library codes brought into an executable can be reduced.

Disadvantages:

This method can prevent various performance optimizations, and the slower standard library implementations will be used instead.

The size of the final executable can increase when the library code pulled in statically for an otherwise inlined intrinsic is large.


Optimize Exception Handling Data

For DPC++, enabling and disabling of exception handling is supported for host compilation.

If a program requires support for exception handling, the compiler creates a special section containing DWARF directives that are used by the Linux* run-time to unwind and catch an exception.

This information is located in the .eh_frame section and may be shrunk using the compiler options listed below.

Options to specify:

Linux*: fno-exceptions, fno-asynchronous-unwind-tables
Windows*: None

Advantages:

These options may shrink the size of the object or binary file by up to 15%, though the amount of the reduction depends on the target platform.

These options control whether unwind information is precise at an instruction boundary or at a call boundary. For example, option fno-asynchronous-unwind-tables can be used for programs that may only throw or catch exceptions.

Disadvantages:

Both options may change the program's behavior.

Do not use option fno-exceptions for programs that require standard C++ handling for objects of classes with destructors.

Do not use option fno-asynchronous-unwind-tables for functions compiled with option -fexceptions that contain calls to other functions that might throw exceptions or for C++ functions that declare objects with destructors.

Read the compiler option descriptions, which explain the defaults and behavior for each target platform.

Disable Loop Unrolling

Unrolling a loop increases the size of the loop proportionally to the unroll factor.

Disabling (or limiting) this optimization may help reduce code size at the expense of performance.

Options to specify:

Linux*: unroll=0
Windows* (C++ only): Qunroll:0

Advantages:

Code size is reduced.

Disadvantages:

Performance of otherwise unrolled loops may noticeably degrade because this limits other possible loop optimizations.

Additional information:

This option is already the default if you specify option Os or option O1.

Disable Automatic Vectorization

The compiler looks for opportunities to use SIMD (SSE/AVX) instructions to improve application performance. This optimization is called automatic vectorization.

In most cases, this optimization involves transformation of loops and increases code size, in some cases significantly.

Disabling this optimization may help reduce code size at the expense of performance.

Options to specify:

Linux*: no-vec
Windows*: Qvec-

Advantages:

Code size is reduced, and compile time improves significantly.

Disadvantages:

Performance of otherwise vectorized loops may suffer significantly. If you care about the performance of your application, you should use this option selectively to suppress vectorization on everything except performance-critical parts.

Additional information:

Depending on code characteristics, this option can sometimes increase binary size.

Avoid References to Compiler-Specific Libraries

While compiler-specific libraries are intended to improve the performance of your application, they increase the size of your binaries.

The compiler options listed below may reduce code size by avoiding such references.

Options to specify:

Linux*: ffreestanding
Windows* (C++ only): Qfreestanding

Advantages:

The compiler will not assume the presence of compiler-specific libraries. It will generate only calls that appear in the source code.

Disadvantages:

This method may sacrifice performance if the library code would have been used in hotspots. Also, because the compiler cannot assume the presence of any libraries, some optimizations are suppressed.


Use Interprocedural Optimization

Using interprocedural optimization (IPO) may reduce code size because it enables dead code elimination and suppresses generation of code for functions always inlined or proven never to be called during execution.

Options to specify:

Linux*: ipo
Windows*: Qipo

Advantages:

Depending on the code characteristics, this optimization can reduce executable size and improve performance.

Disadvantages:

Depending on the code or application, binary size can also increase.

Note

This method is not recommended if you plan to ship object files as part of a final product.