Programming with Auto-parallelization

The auto-parallelization feature implements some concepts of OpenMP*, such as the worksharing construct (with the PARALLEL FOR directive). This section provides details on auto-parallelization.
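For illustration, the following is a minimal sketch (in C, with hypothetical function and array names) of the kind of simple, countable loop the auto-parallelizer can treat like an OpenMP* worksharing construct; the compile options shown in the comment are the ones discussed later in this topic.

  /* Hypothetical example: compile with -parallel (Linux*) or /Qparallel (Windows*)
     to let the compiler consider this loop for auto-parallelization. */
  void scale(float *a, const float *b, float s, int n)
  {
      /* Countable loop with no loop-carried dependencies; the compiler may
         execute it much as if it carried an OpenMP* worksharing directive
         such as "#pragma omp parallel for". */
      for (int i = 0; i < n; i++) {
          a[i] = s * b[i];
      }
  }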

Guidelines for Effective Auto-parallelization Usage

A loop can be parallelized if it meets the following criteria:

  1. The loop is countable at compile time: an expression representing how many times the loop will execute (the loop trip count) can be generated just before entering the loop.

  2. There are no FLOW (READ after WRITE), OUTPUT (WRITE after WRITE), or ANTI (WRITE after READ) loop-carried data dependencies. A loop-carried data dependency occurs when the same memory location is referenced in different iterations of the loop.

The compiler may generate a run-time test for the profitability of executing in parallel for loops with loop parameters that are not compile-time constants.
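As a hedged sketch of these criteria (illustrative names only), the first loop below satisfies them, while the second contains a FLOW (READ after WRITE) loop-carried dependency and cannot be parallelized as written:

  void candidate_loops(float *a, const float *b, int n)
  {
      /* Parallelizable: the trip count is known on entry and no iteration
         reads data written by another iteration. */
      for (int i = 0; i < n; i++) {
          a[i] = 2.0f * b[i];
      }

      /* Not parallelizable as written: iteration i reads a[i - 1], which is
         written by iteration i - 1 (a FLOW, READ-after-WRITE dependency). */
      for (int i = 1; i < n; i++) {
          a[i] = a[i - 1] + b[i];
      }
  }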

Coding Guidelines

Enhance the power and effectiveness of the auto-parallelizer by following these coding guidelines:

  1. Expose the trip count of loops whenever possible; specifically, use constants where the trip count is known and save loop parameters in local variables.

  2. Avoid placing structures inside loop bodies that the compiler may assume to carry dependent data, for example, procedure calls, ambiguous indirect references, or global references.
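A brief illustration of these guidelines follows; the function and variable names are hypothetical, and the opaque call stands in for any procedure the compiler cannot analyze.

  extern float lookup(int i);   /* opaque procedure the compiler cannot analyze */

  void guideline_example(float *a, const float *b, int n)
  {
      int trip = n;   /* save the loop parameter in a local variable */

      /* Discouraged: the call inside the loop body may be assumed to carry
         dependent data, which can inhibit auto-parallelization. */
      for (int i = 0; i < trip; i++) {
          a[i] = lookup(i);
      }

      /* Preferred: a simple, countable loop with no calls, ambiguous indirect
         references, or global references. */
      for (int i = 0; i < trip; i++) {
          a[i] = b[i] + 1.0f;
      }
  }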

Auto-parallelization Data Flow

For auto-parallelization processing, the compiler performs the following steps:


  1. Data flow analysis: Computing the flow of data through the program.

  2. Loop classification: Determining loop candidates for parallelization based on correctness and efficiency, as shown by threshold analysis.

  3. Dependency analysis: Computing the dependency analysis for references in each loop nest.

  4. High-level parallelization: Analyzing the dependency graph to determine loops that can execute in parallel, and computing run-time dependencies.

  5. Data partitioning: Examining data references and partitioning them based on the following types of access: SHARED, PRIVATE, and FIRSTPRIVATE (see the sketch after this list).

  6. Multithreaded code generation: Modifying loop parameters, generating entry/exit per threaded task, and generating calls to parallel run-time routines for thread creation and synchronization.
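As a rough illustration of steps 5 and 6 (names hypothetical, and the OpenMP* directive in the comment is only an approximation of what the compiler generates internally):

  void partition_example(float *a, const float *b, int n, float offset)
  {
      int i;
      /* The references here would be partitioned roughly as in the explicit form
         "#pragma omp parallel for shared(a, b, n) firstprivate(offset) private(i)":
         a, b, n  - SHARED       (one copy visible to all threads)
         offset   - FIRSTPRIVATE (each thread's copy initialized from the original)
         i        - PRIVATE      (each thread has its own loop index) */
      for (i = 0; i < n; i++) {
          a[i] = b[i] + offset;
      }
  }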

Using the -parallel (Linux*) or the /Qparallel (Windows*) option enables parallelization for both Intel® microprocessors and non-Intel microprocessors. The resulting executable may see greater performance gains on Intel® microprocessors than on non-Intel microprocessors. The parallelization can also be affected by certain options, such as /arch or /Qx (Windows) or -m or -x (Linux and Mac OS X).

Options that use OpenMP* are available for both Intel® and non-Intel microprocessors, but these options may perform additional optimizations on Intel® microprocessors beyond those performed on non-Intel microprocessors. The list of major, user-visible OpenMP constructs and features that may perform differently on Intel® versus non-Intel microprocessors includes: locks (internal and user visible), the SINGLE construct, barriers (explicit and implicit), parallel loop scheduling, reductions, memory allocation, and thread affinity and binding.


