We’re happy to announce the release of Parallelware Trainer 1.5, which brings exciting new features for offloading computations to GPUs.
- Two GPU offloading modes are now available for OpenMP. Naive parallelization with offloading uses the directive “target parallel for”. However, to map better to GPU architectures, the use of “teams distribute” is encouraged. This is now the default, with the older option still available for comparison and experimentation.
- Extended OpenMP 4.5 support. OpenMP support has been significantly improved to cover most of version 4.5, with complete support expected in the next release.
- New bundled examples. MATMUL, HEAT, DOTPRODUCT, ATMUX and MANDELBROT extend the code examples bundled with Parallelware Trainer. PI, DOTPRODUCT and ATMUX remain available as the quickstart codes.
- Bug fixes and the latest Parallelware technology. Each release includes bug fixes and an updated version of the Parallelware technology, which evolves constantly to provide cutting-edge static code analysis capabilities.
- Customizing Parallelware Trainer through environment variables
- Support resources
- Parallelware Trainer help sheet
Try Parallelware Trainer for free.
Download Parallelware Trainer and make your code parallel today.
Previous release notes: