Function Annotations and the SIMD Directive for Vectorization

This topic presents C++ language features that help the compiler vectorize code. The SIMD vectorization feature is available for both Intel® microprocessors and non-Intel microprocessors. Vectorization may call library routines that can result in additional performance gains on Intel microprocessors compared to non-Intel microprocessors. Vectorization can also be affected by certain options, such as /arch or /Qx (Windows) or -m or -x (Linux and Mac OS X).

The __declspec(align(n)) declaration enables you to overcome hardware alignment constraints. The restrict qualifier and the auto-vectorization hints address issues caused by lexical scope, data dependences, and ambiguous aliasing. The SIMD feature's pragma, #pragma simd, allows you to enforce vectorization of loops.
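
For illustration, the following sketch (not part of the original topic; the array and function names are hypothetical) combines these features: arrays aligned with __declspec(align(16)), restrict-qualified pointer parameters, and a loop whose vectorization is enforced with #pragma simd. Note that __declspec(align(n)) is the Windows spelling; on Linux the equivalent is __attribute__((aligned(n))), and in C++ the __restrict spelling avoids the need for a separate restrict option.

  __declspec(align(16)) float src[1024];   // 16-byte aligned storage
  __declspec(align(16)) float dst[1024];

  void scale(float *__restrict out, const float *__restrict in,
             float s, int n)
  {
      // __restrict asserts that out and in do not alias, and
      // #pragma simd enforces vectorization of the loop.
      #pragma simd
      for (int i = 0; i < n; ++i)
          out[i] = s * in[i];
  }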

The __attribute__((vector)) and __attribute__((vector(clauses))) declarations can be used to vectorize user-defined functions and loops. For SIMD usage, the vector function is called from a loop that is being vectorized, and the compiler implements the function with vector operations so that it executes as part of the vectorized loop.
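
As a hedged sketch of this usage model (the function name and loop are hypothetical), the code below marks vadd as a vector function and calls it from a loop that is being vectorized; __attribute__((vector)) is the Linux spelling, and __declspec(vector) is the Windows equivalent.

  __attribute__((vector))
  float vadd(float x, float y)
  {
      return x + y;   // scalar body; the compiler also generates a SIMD version
  }

  void add_arrays(float *c, const float *a, const float *b, int n)
  {
      // The vector function is invoked from a vectorized loop, so a group of
      // consecutive iterations maps onto one call of the SIMD version.
      #pragma simd
      for (int i = 0; i < n; ++i)
          c[i] = vadd(a[i], b[i]);
  }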

Map operations written with the C/C++ extensions for array notations provide general data-parallel semantics in which you do not express the implementation strategy. Using array notations, you can write the same operation regardless of the size of the problem and let the implementation choose the right construct, combining SIMD, loops, and tasking, to implement the operation. With these semantics, you can also choose more elaborate programming and express a single-dimensional operation at two levels, using both task constructs and array operations to force a preferred parallel and vector execution.
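
A minimal sketch of such a map operation, assuming the array-section syntax a[start:length] from the C/C++ extensions for array notations and reusing the hypothetical vadd vector function from the previous sketch:

  void map_add(float *c, const float *a, const float *b, int n)
  {
      // One statement expresses the whole operation; the implementation is
      // free to realize it with SIMD instructions, loops, or tasking.
      c[0:n] = vadd(a[0:n], b[0:n]);
  }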

In the usage model of the vector declaration, the code generated for the function takes a small section of the array by value, whose size is controlled by the vectorlength or vectorlengthfor clause, and exploits SIMD parallelism within the function, whereas task parallelism is implemented at the call site.
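
The sketch below, under the same assumptions as the previous examples, illustrates that usage model: the vectorlengthfor and scalar clauses shape the SIMD code generated for the function, while parallelism across iterations is expressed at the call site. The _Cilk_for keyword used here is an assumption of this example, not a requirement of the feature.

  __attribute__((vector(vectorlengthfor(float), scalar(a))))
  float saxpy_elem(float a, float x, float y)
  {
      return a * x + y;   // a is a scalar (uniform) value across the SIMD lanes
  }

  void saxpy(float a, const float *x, float *y, int n)
  {
      _Cilk_for (int i = 0; i < n; ++i)      // task parallelism at the call site
          y[i] = saxpy_elem(a, x[i], y[i]);  // SIMD parallelism inside the function
  }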

The following table summarizes the language features that help vectorize code. A short sketch combining the auto-vectorization hints follows the table.

Feature

Description

__declspec(align(n))

Directs the compiler to align the variable to an n-byte boundary. The address of the variable satisfies address mod n = 0.

__declspec(align(n,off))

Directs the compiler to align the variable to an n-byte boundary with offset off within each n-byte boundary. The address of the variable satisfies address mod n = off.

__attribute__((vector))

Combines with the map operation at the call site to provide the data parallel semantics. When multiple instances of the vector declaration are invoked in a parallel context, the execution order among them is not sequenced.

__attribute__((vector(clauses)))

Combines with the map operation at the call site to provide the data parallel semantics with the following values for clauses:

  • processor clause: processor(cpuid)

  • vector length clause: vectorlength(n)

  • vector length for clause: vectorlengthfor(datatype)

  • linear clause: linear(param1:step1 [, param2:step2]…)

  • scalar clause: scalar(param1 [, param2]…)

  • mask clause: [no]mask

When multiple instances of the vector declaration are invoked in a parallel context, the execution order among them is not sequenced.

restrict

Permits the disambiguator flexibility in alias assumptions, which enables more vectorization.

__assume_aligned(a,n)

Instructs the compiler to assume that array a is aligned on an n-byte boundary; used in cases where the compiler has failed to obtain alignment information.

Auto-vectorization Hints

#pragma ivdep

Instructs the compiler to ignore assumed vector dependencies.

#pragma vector
{aligned|unaligned|always|temporal|nontemporal}

Specifies how to vectorize the loop and indicates that efficiency heuristics should be ignored. Using the assert keyword with the vector {always} pragma generates an error-level assertion message saying that the compiler efficiency heuristics indicate that the loop cannot be vectorized. Use #pragma ivdep to ignore any assumed dependences.

#pragma novector

Specifies that the loop should never be vectorized.

User-mandated pragma

#pragma simd

Enforces vectorization of loops.
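
As a brief, hedged illustration of the hints summarized above (the function and loop bodies are hypothetical), the sketch below combines __assume_aligned, #pragma ivdep, #pragma vector aligned, and #pragma novector in one routine:

  void apply_hints(float *a, const float *b, int n, int k)
  {
      __assume_aligned(a, 16);        // promise: a is 16-byte aligned
      __assume_aligned(b, 16);        // promise: b is 16-byte aligned

      #pragma ivdep                   // ignore the assumed dependence through a[i + k]
      for (int i = 0; i < n; ++i)
          a[i] = a[i + k] * 2.0f;

      #pragma vector aligned          // all memory references here are aligned
      for (int i = 0; i < n; ++i)
          a[i] += b[i];

      #pragma novector                // deliberately keep this strided loop scalar
      for (int i = 0; i < n; i += 100)
          a[i] = 0.0f;
  }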
