fimf-precision, Qimf-precision

Defines the accuracy for math library functions.

IDE Equivalent

None

Architectures

IA-32, Intel® 64 architectures

Syntax

Linux and Mac OS X:

-fimf-precision[=value[:funclist]]

Windows:

/Qimf-precision[:value[:funclist]]

Arguments

value

Is one of the following values denoting the desired accuracy:

high

This is equivalent to max-error = 0.6.

medium

This is equivalent to max-error = 4; this is the default setting if the option is specified and value is omitted.

low

This is equivalent to accuracy-bits = 11 for single-precision functions; accuracy-bits = 26 for double-precision functions.

In the above explanations, max-error means option -fimf-max-error (Linux* OS and Mac OS* X) or /Qimf-max-error (Windows* OS); accuracy-bits means option -fimf-accuracy-bits (Linux* OS and Mac OS* X) or /Qimf-accuracy-bits (Windows* OS).

funclist

Is an optional list of one or more math library functions to which the setting should be applied. If you specify more than one function, they must be separated with commas. For example, -fimf-precision=low:sin,sqrtf (Linux* OS and Mac OS* X) or /Qimf-precision:low:sin,sqrtf (Windows* OS) applies low accuracy only to sin and sqrtf.

Default

OFF

The compiler uses default heuristics when calling math library functions.

Description

This option defines the accuracy (precision) for math library functions.

This option can be used to improve run-time performance if reduced accuracy is sufficient for the application, or it can be used to increase the accuracy of math library functions.

In general, using a lower precision can improve run-time performance and using a higher precision may reduce run-time performance.

If option -fimf-precision (Linux* OS and Mac OS* X) or /Qimf-precision (Windows* OS), option -fimf-max-error (Linux* OS and Mac OS* X) or /Qimf-max-error (Windows* OS), or option -fimf-accuracy-bits (Linux* OS and Mac OS* X) or /Qimf-accuracy-bits (Windows* OS) is specified, the default value for max-error is determined by that option. If more than one of these options is specified, the default value for max-error is determined by the last one specified on the command line.

If none of these options are specified, the default value for max-error is determined by the setting specified for option -[no-]fast-transcendentals (Linux OS and Mac OS X) or /Qfast-transcendentals[-] (Windows OS). If that option also has not been specified, the default value is determined by the setting of option -fp-model (Linux OS and Mac OS X) or /fp (Windows OS).

Note iconNote

Many routines in libraries LIBM (Math Library) and SVML (Short Vector Math Library) are more highly optimized for Intel® microprocessors than for non-Intel microprocessors.

Alternate Options

None


Copyright © 1996-2011, Intel Corporation. All rights reserved.