Automatic vectorization is supported on Intel® 64 architectures (for C++, DPC++, and Fortran). The information below will guide you in setting up the auto-vectorizer.
Where does the vectorization speedup come from? Consider the following sample code fragment, where a, b and c are integer arrays:
Sample Code Fragment
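A minimal sketch of such a fragment (the array names a, b, and c come from the text; the loop bound n is an assumption):

```cpp
// Element-wise addition of integer arrays. Each iteration is
// independent, so with vectorization enabled the compiler can load
// four packed ints per SIMD register and add them in one instruction.
void add(const int *a, const int *b, int *c, int n) {
  for (int i = 0; i < n; i++)
    c[i] = a[i] + b[i];
}
```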
If vectorization is not enabled, that is, you compile using the O1 option or the -no-vec (Linux*) or /Qvec- (Windows*) option, the compiler processes each iteration separately, leaving most of the space in the SIMD registers unused even though each register could hold three additional integers. If vectorization is enabled (compile using O2 or higher), the compiler may use that unused register space to perform four additions in a single instruction. The compiler looks for vectorization opportunities whenever you compile at default optimization (O2) or higher.
Vectorization is enabled at default optimization levels for both Intel® microprocessors and non-Intel microprocessors. Vectorization may call library routines that can result in greater performance gains on Intel® microprocessors than on non-Intel microprocessors.
To get details about the type of loop transformations and optimizations that took place, use the [Q]opt-report-phase option by itself or along with the [Q]opt-report option.
How significant is the performance enhancement? To evaluate performance enhancement yourself, run vec_samples:
Open an Intel® oneAPI DPC++/C++ Compiler command-line window.
On Windows*: Under the Start menu item for your Intel product, select an icon under Intel oneAPI 2021 > Intel oneAPI Command Prompt for oneAPI Compilers.
On Linux*: Source an environment script such as vars.sh in the <installdir> directory and use the argument appropriate for the architecture.
Navigate to the <installdir>\Samples\<locale>\C++\ (for C++) or <installdir>\Samples\<locale>\DPC++\ (for DPC++) directory. On Windows, unzip the sample project vec_samples.zip to a writable directory. This small application multiplies a vector by a matrix using the following loop:
Example: Vector Matrix Multiplication
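A sketch of the vec_samples kernel (identifiers and dimensions are illustrative, not the sample's exact code):

```cpp
// Multiply a rows-by-cols matrix a (stored row-major in a flat array)
// by vector x, storing the result in b. The inner loop over j is the
// vectorization candidate: contiguous loads from a and x.
void matvec(int rows, int cols, const double *a,
            const double *x, double *b) {
  for (int i = 0; i < rows; i++) {
    b[i] = 0.0;
    for (int j = 0; j < cols; j++)
      b[i] += a[i * cols + j] * x[j];
  }
}
```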
Build and run the application, first without enabling auto-vectorization. The default O2 optimization enables vectorization, so you need to disable it with a separate option. Note the time taken for the application to run.
Example: Building and Running an Application without Auto-vectorization
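Illustrative command lines for this step; the compiler driver (icx) and source file names (Multiply.c, Driver.c) are assumptions based on the vec_samples project layout:

```shell
# Linux: disable vectorization at O2 with -no-vec
icx -O2 -no-vec Multiply.c Driver.c -o NoVectMult
./NoVectMult

# Windows: disable vectorization at O2 with /Qvec-
icx /O2 /Qvec- Multiply.c Driver.c /FeNoVectMult.exe
NoVectMult.exe
```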
Now build and run the application, this time with auto-vectorization. Note the time taken for the application to run.
Example: Building and Running an Application with Auto-vectorization
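Illustrative command lines, under the same file-name assumptions as before; O2 enables auto-vectorization by default, so no extra option is needed:

```shell
# Linux
icx -O2 Multiply.c Driver.c -o VectMult
./VectMult

# Windows
icx /O2 Multiply.c Driver.c /FeVectMult.exe
VectMult.exe
```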
When you compare the timing of the two runs, you may see that the vectorized version runs faster. The run time of the non-vectorized version is only slightly better than what you would obtain by compiling with the O1 option.
The following do not always prevent vectorization, but frequently either prevent it or cause the compiler to decide that vectorization would not be worthwhile.
Non-contiguous memory access: Four consecutive integers or floating-point values, or two consecutive doubles, may be loaded directly from memory in a single SSE instruction. But if the four integers are not adjacent, they must be loaded separately using multiple instructions, which is considerably less efficient. The most common examples of non-contiguous memory access are loops with non-unit stride or with indirect addressing, as in the examples below. The compiler rarely vectorizes such loops, unless the amount of computational work is large compared to the overhead from non-contiguous memory access.
Example: Non-contiguous Memory Access
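Sketches of the two patterns described above, non-unit stride and indirect addressing (function and variable names are assumptions):

```cpp
// Non-unit stride: only every other element of a and b is touched,
// so the loads and stores are not contiguous in memory.
void stride2(const float *a, float *b, int n) {
  for (int i = 0; i < n; i += 2)
    b[i] = 2.0f * a[i];
}

// Indirect addressing: the elements of a are gathered through the
// index array idx, so their addresses are not known to be adjacent.
void gather(const float *a, const int *idx, float *b, int n) {
  for (int i = 0; i < n; i++)
    b[i] = a[idx[i]];
}
```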
The typical message from the vectorization report is: vectorization possible but seems inefficient, although indirect addressing may also result in the following report: Existence of vector dependence.
Data dependencies: Vectorization entails changes in the order of operations within a loop, since each SIMD instruction operates on several data elements at once. Vectorization is only possible if this change of order does not change the results of the calculation.
The simplest case is when data elements that are written (stored to) do not appear in any other iteration of the individual loop. In this case, all the iterations of the original loop are independent of each other, and can be executed in any order, without changing the result. The loop may be safely executed using any parallel method, including vectorization. All the examples considered so far fall into this category.
When a variable is written in one iteration and read in a subsequent iteration, there is a “read-after-write” dependency, also known as a flow dependency, as in this example:
Example: Flow Dependency
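A sketch of such a flow dependency (constructed to match the surrounding description; the array name A is from the text):

```cpp
// Read-after-write dependency: iteration j reads the value that
// iteration j-1 just stored, so A[j] ends up equal to j.
void flow(int *A, int n) {
  A[0] = 0;
  for (int j = 1; j < n; j++)
    A[j] = A[j - 1] + 1;  // depends on the previous iteration's store
}
```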
So the value of j gets propagated to all A[j]. This cannot safely be vectorized: if the first two iterations are executed simultaneously by a SIMD instruction, the value of A[1] is used by the second iteration before it has been calculated by the first iteration.
When a variable is read in one iteration and written in a subsequent iteration, this is a write-after-read dependency, also known as an anti-dependency, as in the following example:
Example: Write-after-read Dependency
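A sketch of such an anti-dependency (constructed to match the description; the array name A is from the text):

```cpp
// Write-after-read dependency: iteration j reads A[j], which the
// later iteration j+1 overwrites. Vector loads read the original
// values before any lane stores, so vectorization preserves the result.
void shift_up(int *A, int n) {
  for (int j = 1; j < n; j++)
    A[j - 1] = A[j] + 1;
}
```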
This write-after-read dependency is not safe for general parallel execution, since the iteration with the write may execute before the iteration with the read. However, for vectorization, no iteration with a higher value of j can complete before an iteration with a lower value of j, and so vectorization is safe (that is, it gives the same result as non-vectorized code) in this case. The following example, however, may not be safe, since vectorization might cause some elements of A to be overwritten by the first SIMD instruction before being used for the second SIMD instruction.
Example: Unsafe Vectorization
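An illustrative unsafe case, constructed for this sketch rather than taken from the manual: the store in the first statement targets elements that the second statement still needs, so a SIMD store of A[j-1..j+2] issued before the SIMD load of A[j..j+3] would change the result.

```cpp
// Serially, B[j] reads A[j] before iteration j+1 overwrites it.
// A statement-at-a-time SIMD version would execute all four stores
// to A before the four loads of A, clobbering needed values.
void unsafe(int *A, int *B, int n) {
  for (int j = 1; j < n; j++) {
    A[j - 1] = 2 * j;   // store may clobber a later lane's A[j]
    B[j] = A[j] + 1;    // serially reads the not-yet-overwritten A[j]
  }
}
```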
Read-after-read situations are not really dependencies, and do not prevent vectorization or parallel execution. If a variable is unwritten, it does not matter how often it is read.
Write-after-write, or ‘output’, dependencies, where the same variable is written to in more than one iteration, are in general unsafe for parallel execution, including vectorization.
One important exception is a loop that apparently contains all of the above types of dependency:
Example: Dependency Exception
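A sketch of such a reduction (the variable name sum is from the text; the dot-product form is an assumption):

```cpp
// sum is both read and written in every iteration, but the compiler
// recognizes the reduction idiom and can vectorize it using partial
// sums that are combined after the loop.
double dot(const double *A, const double *B, int n) {
  double sum = 0.0;
  for (int j = 0; j < n; j++)
    sum += A[j] * B[j];
  return sum;
}
```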
Although sum is both read and written in every iteration, the compiler recognizes such reduction idioms, and is able to vectorize them safely. The loop in the first example was another example of a reduction, with a loop-invariant array element in place of a scalar.
These types of dependencies between loop iterations are sometimes known as loop-carried dependencies.
The above examples are of proven dependencies. The compiler cannot safely vectorize a loop if there is even a potential dependency. Consider the following example:
Example: Potential Dependency
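A sketch of such a loop (pointer names a, b, and c are from the text; the body is an assumption consistent with the discussion that follows):

```cpp
// With plain pointer parameters, the compiler cannot prove that c
// never aliases a or b, so the dependency is only potential.
void combine(double *a, double *b, double *c, int n) {
  for (int i = 0; i < n; i++)
    c[i] = a[i] * b[i];  // c[i] could overlap some a[k] or b[k]
}
```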
In the above example, the compiler needs to determine whether, for some iteration i, c[i] might refer to the same memory location as a[i] or b[i] for a different iteration. Such memory locations are sometimes said to be aliased. For example, if a[i] pointed to the same memory location as c[i-1], there would be a read-after-write dependency as in the earlier example. If the compiler cannot exclude this possibility, it will not vectorize the loop unless you provide the compiler with hints.
Sometimes the compiler has insufficient information to decide to vectorize a loop. There are several ways to provide additional information to the compiler:
#pragma ivdep: may be used to tell the compiler that it may safely ignore any potential data dependencies. (The compiler will not ignore proven dependencies). Use of this pragma when there are dependencies may lead to incorrect results.
There are cases where the compiler cannot tell by a static dependency analysis that it is safe to vectorize. Consider the following loop:
Loop Example
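A sketch of the copy function described here (the pointer names cp_a and cp_b are from the text; the element type and loop bound are assumptions):

```cpp
// Because cp_a and cp_b are unqualified pointers, the compiler must
// assume their regions may overlap.
void copy(char *cp_a, char *cp_b, int n) {
  for (int i = 0; i < n; i++)
    cp_a[i] = cp_b[i];
}
```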
Without more information, a vectorizing compiler must conservatively assume that the memory regions accessed by the pointer variables cp_a and cp_b may (partially) overlap, which gives rise to potential data dependencies that prohibit straightforward conversion of this loop into SIMD instructions. At this point, the compiler may decide to keep the loop serial or, as done by the Intel® oneAPI DPC++/C++ Compiler, generate a run-time test for overlap, where the loop in the true-branch can be converted into SIMD instructions:
Example: True-branch Loop
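An illustrative source-level expansion of the run-time test the compiler may generate (the exact test the compiler emits is internal; this is a sketch):

```cpp
void copy_versioned(char *cp_a, char *cp_b, int n) {
  if (cp_a + n <= cp_b || cp_b + n <= cp_a) {
    // True branch: the regions are proven disjoint at run time,
    // so this loop may be converted into SIMD instructions.
    for (int i = 0; i < n; i++)
      cp_a[i] = cp_b[i];
  } else {
    // Fall-back: possible overlap, keep the original serial order.
    for (int i = 0; i < n; i++)
      cp_a[i] = cp_b[i];
  }
}
```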
Run-time data-dependency testing provides a generally effective way to exploit implicit parallelism in C or C++ code at the expense of a slight increase in code size and testing overhead. If the function copy is only used in specific ways, however, you can assist the vectorizing compiler as follows:
Example: Ignoring Data Dependencies with #pragma ivdep
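A sketch of the copy function annotated with #pragma ivdep, asserting that the potential cp_a/cp_b dependence can be ignored (compilers that do not recognize the pragma simply ignore it):

```cpp
void copy(char *cp_a, char *cp_b, int n) {
// Tell the compiler to disregard potential (unproven) dependencies.
#pragma ivdep
  for (int i = 0; i < n; i++)
    cp_a[i] = cp_b[i];
}
```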
You can also use the restrict keyword in the declarations of cp_a and cp_b, as shown below, to inform the compiler that each pointer variable provides exclusive access to a certain memory region. The restrict qualifier in the argument list lets the compiler know that there are no other aliases to the memory to which the pointers point; in other words, each pointer provides the only means of accessing that memory in the scope in which the pointers live. Even if the code is vectorized without the restrict keyword, the compiler must check for aliasing at run time; with the restrict keyword, no run-time check is needed.
Example: Restrict Keyword
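A sketch of the restrict-qualified version. Standard C spells the qualifier restrict (C99); most C++ compilers accept the __restrict spelling used here:

```cpp
// The caller promises that the regions reached through cp_a and cp_b
// do not overlap, so the compiler may vectorize without a run-time check.
void copy(char *__restrict cp_a, char *__restrict cp_b, int n) {
  for (int i = 0; i < n; i++)
    cp_a[i] = cp_b[i];
}
```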
This method is convenient when the exclusive-access property holds for pointer variables that are used in a large portion of code with many loops, because it avoids the need to annotate each vectorizable loop individually. Note, however, that both the loop-specific #pragma ivdep hint and the pointer-specific restrict hint must be used with care, because incorrect usage may change the semantics of the original program.
Another example is the following loop, which may not be vectorized because of a potential aliasing problem between the pointers a, b, and c:
Example: Potential Unsupported Loop Structure
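A sketch of such a loop (pointer names a, b, and c are from the text; the function name and bound are assumptions):

```cpp
// a, b, and c are unqualified pointers, so the compiler must assume
// the store to c[i] could alias a later a[k] or b[k].
void accumulate(float *a, float *b, float *c, int n) {
  for (int i = 0; i < n; i++)
    c[i] += a[i] + b[i];
}
```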
If the restrict keyword is added to the parameters, the compiler trusts that you will not access the memory in question through any other pointer, and vectorizes the code properly:
Example: Using Pointers with the Restrict Keyword
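The same loop with restrict-qualified parameters, again using the __restrict spelling for portability across C++ compilers:

```cpp
// With no possible aliasing among a, b, and c, the compiler is free
// to vectorize the loop.
void accumulate(float *__restrict a, float *__restrict b,
                float *__restrict c, int n) {
  for (int i = 0; i < n; i++)
    c[i] += a[i] + b[i];
}
```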
The downside of using restrict is that not all compilers support the keyword, so your source code may lose portability.