A normalized number is a number whose exponent (including bias) and most significant mantissa bit (the implicit leading bit) are both non-zero. For such numbers, all the bits of the mantissa contribute to the precision of the representation.
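As a brief illustration of that layout (a sketch assuming the IEEE 754 single-precision format of 1 sign bit, 8 exponent bits, and 23 stored mantissa bits), the fields of a float can be inspected directly:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        float f = 1.0f;
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);             /* reinterpret the float's bits */

        uint32_t exponent = (bits >> 23) & 0xFFu;   /* 8-bit biased exponent */
        uint32_t mantissa = bits & 0x7FFFFFu;       /* 23 stored mantissa bits */

        /* For a normalized number the biased exponent is non-zero, so the
           leading mantissa bit is implicitly 1 and is not stored. */
        printf("biased exponent = %u, stored mantissa = 0x%06X\n",
               (unsigned)exponent, (unsigned)mantissa);
        return 0;
    }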
The smallest normalized single-precision floating-point number greater than zero is about 1.1754943E-38. Smaller numbers are possible, but those numbers must be represented with a zero exponent and a mantissa whose leading bit(s) are zero, which leads to a loss of precision. These numbers are called denormalized numbers or denormals (newer specifications refer to these as subnormal numbers).
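In C, this smallest normalized value is available as FLT_MIN in <float.h>; the following sketch (assuming an IEEE 754 single-precision float) shows that halving it produces a subnormal value:

    #include <stdio.h>
    #include <float.h>
    #include <math.h>

    int main(void)
    {
        float smallest_normal = FLT_MIN;          /* about 1.1754943E-38 */
        float below_normal    = FLT_MIN / 2.0f;   /* below the normalized range */

        /* fpclassify() reports FP_NORMAL for FLT_MIN and FP_SUBNORMAL
           (a denormal) for the halved value. */
        printf("FLT_MIN     : %e -> %s\n", (double)smallest_normal,
               fpclassify(smallest_normal) == FP_NORMAL ? "normal" : "other");
        printf("FLT_MIN / 2 : %e -> %s\n", (double)below_normal,
               fpclassify(below_normal) == FP_SUBNORMAL ? "subnormal" : "other");
        return 0;
    }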
Handling denormals consumes hardware and/or operating system resources, and this handling can cost hundreds of clock cycles. As a result, computations involving denormals take much longer than computations on normalized numbers on processors based on IA-32 and Intel® 64 architectures.
There are several ways to avoid denormals and increase the performance of your application, such as scaling values into the normalized range, using a higher-precision data type with a wider range, or flushing denormals to zero (see the sketch below).
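As one sketch of the flush-to-zero approach (assuming a compiler and target that provide the SSE/SSE3 intrinsic headers xmmintrin.h and pmmintrin.h, and that floating-point arithmetic uses the SSE unit), the flush-to-zero (FTZ) and denormals-are-zero (DAZ) modes can be enabled at run time:

    #include <stdio.h>
    #include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE, _MM_FLUSH_ZERO_ON */
    #include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE, _MM_DENORMALS_ZERO_ON */

    int main(void)
    {
        /* Flush-to-zero: denormal results are replaced with zero. */
        _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);

        /* Denormals-are-zero: denormal inputs are treated as zero. */
        _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);

        volatile float tiny = 1.1754943E-38f;   /* smallest normalized float */
        volatile float result = tiny * 0.5f;    /* denormal result, flushed to 0.0 */

        printf("%e\n", (double)result);         /* prints 0.000000e+00 with FTZ on */
        return 0;
    }

With FTZ and DAZ enabled, denormal results and inputs are treated as zero, trading a small amount of accuracy near zero for more predictable performance.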
For more information, see the Intel® 64 and IA-32 Architectures Software Developer's Manual, Volume 1: Basic Architecture, and the Institute of Electrical and Electronics Engineers, Inc. (IEEE) web site for the current floating-point standards and recommendations.