Programming Tradeoffs in Floating-point Applications

In general, the programming objectives for floating-point applications fall into three categories: accuracy, reproducibility (including portability), and performance.

Depending on the goal of your application, you will need to make tradeoffs among these objectives. For example, if you are developing a 3D graphics engine, performance may be the most important factor, with reproducibility and accuracy as secondary concerns.

The Intel® Compiler provides compiler options, such as the -fp-model (Linux* and Mac OS* X operating systems) or /fp (Windows* operating system) option, that allow you to tune your application to these objectives. The compiler optimizes and generates code differently depending on the option you specify. Take the following code as an example:

REAL(4) :: t0, t1, t2
...
t0 = t1 + t2 + 4.0 + 0.1
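
To experiment with these options yourself, you can wrap the snippet in a complete program such as the sketch below. The file name, the initial values of t1 and t2, and the use of the ifort driver on Linux* are illustrative assumptions, not part of the original example.

! fp_model_demo.f90 -- hypothetical file name; the values of t1 and t2
! are illustrative assumptions only.
PROGRAM fp_model_demo
  IMPLICIT NONE
  REAL(4) :: t0, t1, t2
  t1 = 1.5
  t2 = 2.5
  t0 = t1 + t2 + 4.0 + 0.1
  PRINT *, 't0 =', t0
END PROGRAM fp_model_demo
! Possible compile lines on Linux* (assuming the ifort driver), one per model:
!   ifort -fp-model extended fp_model_demo.f90
!   ifort -fp-model source   fp_model_demo.f90
!   ifort -fp-model fast     fp_model_demo.f90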

If you specify the -fp-model extended (Linux* and Mac OS* X) or /fp:extended (Windows*) option in favor of accuracy, the compiler generates the following assembly code:

fld       DWORD PTR _t1
fadd      DWORD PTR _t2
fadd      DWORD PTR _Cnst4.0
fadd      DWORD PTR _Cnst0.1
fstp      DWORD PTR _t0

The above code maximizes accuracy because the intermediate results are kept on the x87 floating-point stack, which provides the highest mantissa precision available on the target platform. However, performance may suffer from the overhead of managing the x87 register stack, and the results may not be reproducible on platforms that do not provide an equivalent extended-precision type.
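
The following sketch isolates the effect of intermediate precision. Because standard Fortran has no portable name for the 80-bit x87 format, it uses REAL(8) as a rough stand-in for a wider intermediate type, and it lengthens the chain of additions so that the rounding error, which is too small to see reliably in a four-term sum, becomes visible. Compiling with -fp-model source (/fp:source) keeps the loop evaluated as written.

PROGRAM intermediate_precision_sketch
  IMPLICIT NONE
  INTEGER :: i
  REAL(4) :: narrow
  REAL(8) :: wide
  narrow = 0.0
  wide   = 0.0_8
  DO i = 1, 10000
     narrow = narrow + 0.1             ! rounded to REAL(4) after every addition
     wide   = wide + REAL(0.1, 8)      ! same REAL(4) constant, wider accumulator
  END DO
  ! The exact sum is 10000 times the REAL(4) value of 0.1, slightly above 1000.0;
  ! the narrow accumulator usually drifts visibly away from it.
  PRINT *, 'REAL(4) intermediates:', narrow
  PRINT *, 'REAL(8) intermediates:', wide
END PROGRAM intermediate_precision_sketch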

If you specify the -fp-model source (Linux* and Mac OS* X) or /fp:source (Windows*) option in favor of reproducibility and portability, the compiler generates the following assembly code:

movss     xmm0, DWORD PTR _t1
addss     xmm0, DWORD PTR _t2
addss     xmm0, DWORD PTR _Cnst4.0
addss     xmm0, DWORD PTR _Cnst0.1
movss     DWORD PTR _t0, xmm0

The above code maximizes reproducibility and portability by preserving the original order of the computation and by using the well-defined IEEE single-precision type for all computations. It is not as accurate as the previous implementation because the intermediate rounding error is greater than it is with extended precision, and it is not the fastest implementation because it forgoes the opportunity to precompute 4.0 + 0.1.
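
The reason a value-safe model forgoes the precomputation is that reordering additions is not, in general, value-safe in finite precision. The sketch below uses deliberately exaggerated magnitudes, an illustrative assumption unrelated to the values in the example above, to make the effect visible in REAL(4).

PROGRAM reassociation_sketch
  IMPLICIT NONE
  REAL(4) :: big, small
  big   = 1.0e8    ! exactly representable in REAL(4)
  small = 3.0
  ! Left-to-right order: each 3.0 is rounded away against 1.0e8.
  PRINT *, '(big + small) + small =', (big + small) + small
  ! Reassociated order: the two small terms combine to 6.0, and the sum rounds up.
  PRINT *, 'big + (small + small) =', big + (small + small)
END PROGRAM reassociation_sketch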

If you specify the -fp-model fast (Linux* and Mac OS* X) or /fp:fast (Windows*) option in favor of performance, the compiler generates the following assembly code:

movss     xmm0, DWORD PTR _Cnst4.1
addss     xmm0, DWORD PTR _t1
addss     xmm0, DWORD PTR _t2
movss     DWORD PTR _t0, xmm0

The above code maximizes performance by using Intel® SSE instructions and by precomputing 4.0 + 0.1 at compile time. It is not as accurate as the first implementation, again because of the greater intermediate rounding error. Unlike the second implementation, it does not guarantee reproducible results, because precomputing 4.0 + 0.1 requires reordering the additions, and you cannot expect all compilers, on all platforms, at all optimization levels, to reorder them in the same way.
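
If reproducibility is your primary concern, one practical way to compare two builds of the same source (for example, one compiled with /fp:fast and one with /fp:source) is to compare the raw bit pattern of the result rather than a rounded decimal printout, which can hide last-bit differences. The sketch below shows one way to do this; the values of t1 and t2 are illustrative assumptions.

PROGRAM bit_pattern_sketch
  IMPLICIT NONE
  REAL(4) :: t0, t1, t2
  t1 = 0.1
  t2 = 0.2
  t0 = t1 + t2 + 4.0 + 0.1
  ! TRANSFER reinterprets the REAL(4) bits as a default integer for hex output.
  PRINT '(A, F12.7, A, Z8.8)', 't0 = ', t0, '   bits = 0x', TRANSFER(t0, 0)
END PROGRAM bit_pattern_sketch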

For most other applications, the considerations are more complicated. Select compiler options by carefully weighing your programming objectives and the tradeoffs you are willing to make among them.