NVIDIA introduces the latest version of the compiler for heterogeneous supercomputing
NVIDIA today announced the release of the PGI 17.7 compilers and tools, which help developers of high-performance computing systems build higher-performing software for systems equipped with multicore CPUs and heterogeneous GPU accelerators, while dramatically simplifying the programming process.
Key features of the PGI 17.7 compilers and tools include:
Support for Tesla V100 GPUs: PGI OpenACC and CUDA Fortran now support the new NVIDIA Volta GV100 GPU, which delivers greater memory bandwidth, more streaming multiprocessors, next-generation NVIDIA NVLink, and new microarchitectural features that combine for better performance and programmability.
OpenACC support for CUDA Unified Memory: The PGI 17.7 compilers can use CUDA Unified Memory to simplify programming for GPU-accelerated systems. With a single compiler option enabled, OpenACC programs place their data in CUDA Unified Memory, so no explicit data-movement code or directives are required.
Initial OpenMP 4.5 support for multicore CPUs: Initial support for OpenMP 4.5 syntax and features lets you compile OpenMP 4.5 programs for parallel execution on most multicore CPU systems. TARGET regions are implemented with the default device set to the multicore host, and PARALLEL and DISTRIBUTE loops are parallelized across all available OpenMP threads.
Deep copy of Fortran derived types: OpenACC directives can now move aggregated or deeply nested Fortran data objects between CPU host memory and GPU device memory, including automatic traversal and management of pointer-based members.
C++ language improvements: The PGI 17.7 C++ compiler adds incremental C++17 features and is supported as a host compiler for CUDA 9.0 NVCC. Average performance on the LCALS loop benchmarks has improved by 20%.
Use of C++14 lambdas in OpenACC compute regions: C++ lambda expressions provide a convenient way to define an anonymous function object at the point where it is invoked or passed as an argument. Starting with PGI 17.7, lambdas can be used in the OpenACC compute regions of C++ programs, including for generating code customized for different programming models or platforms. C++14 adds further lambda capabilities, notably generic (polymorphic) lambdas, and these too can be used in OpenACC regions.
Interoperability with the cuSOLVER library: Using the interface modules provided by PGI and the PGI-built version of the cuSOLVER library included with PGI 17.7, you can call optimized cuSOLVER routines from CUDA Fortran, OpenACC Fortran, C, and C++ programs.
PGI Unified Binary for NVIDIA Tesla GPUs and multicore CPUs: Programs compiled with OpenACC can target both GPU acceleration and multicore CPU parallelism in a single binary. On a GPU-equipped system, the OpenACC compute regions are offloaded and executed on the GPU; on a system without a GPU, the same regions are parallelized across all CPU cores.
New profiling features for CUDA Unified Memory and OpenACC: The PGI 17.7 profiler adds several new OpenACC profiling capabilities, including support for both GPU-equipped and GPU-less multicore CPU platforms, plus a new summary view that shows the processing time spent in each OpenACC construct. New CUDA Unified Memory profiling tracks the source line of each CPU page fault and the location of the data involved, and supports new Unified Memory events such as page thrashing, throttling, and remote-map events, as well as NVLink metrics.
Other features and enhancements in PGI 17.7 include support for environment modules on all platforms, pre-built versions of popular open-source libraries and applications, and a new series of "Parallel Programming with OpenACC" examples.