Appentra is pleased to announce the release of Parallelware Trainer 1.2, further improving accessible training in HPC and parallel programming with OpenMP and OpenACC.
Appentra has a clear goal: to make parallel programming easier, enabling everyone to make the best use of parallel computing hardware, from the multi-core processors in a laptop to the fastest supercomputers. Parallelware Trainer 1.2 provides an enhanced interactive learning environment, including a knowledge base built around the code being developed and several parallelization paradigms: multithreading, tasking and offloading to GPUs.
New in Parallelware Trainer 1.2
- Tasking support. Loops can now be parallelized not only with the OpenMP multithreading and offloading paradigms but also with tasking. Two options are offered in case your compiler lacks support for the latest OpenMP versions: parallelize using the OpenMP 4.5 taskloop construct, or the OpenMP 3.0 task and taskwait constructs.
- Red markers. Some loops cannot be analyzed to determine whether they are an opportunity for parallelization, for example when the body of a function called from the loop is not available in the source code. In such cases, a red marker is shown; clicking it gives insight into the issues preventing the analysis.
- Environment variables. Custom environment variables can now be set from the project configuration dialog. This often-requested feature eases experimenting with OpenMP and different numbers of threads by allowing the OMP_NUM_THREADS variable to be set from within Parallelware Trainer.
- Bundled headers. Parallelware Trainer can be used even when no compiler is installed. In that case, however, the header files used by the source code will most likely not be available on the system. To allow parallelization even in such scenarios, Parallelware Trainer now bundles the musl libc and OpenMP header files.
- Latest Parallelware technology. Each new release includes the latest version of the Parallelware core technology, which is constantly evolving to support more code bases and parallelization features.