Intel Cluster Studio
The Intel Cluster Toolkit includes the Intel Trace Analyzer and Collector, Intel Math Kernel Library (Intel MKL), Intel MPI Library, and Intel MPI Benchmarks for developing, analysing, and optimising the performance of parallel applications for clusters using IA-32, IA-64, and Intel 64 architectures on Linux or Microsoft Windows.
Intel Cluster Toolkit 2011 provides a basic package of Intel cluster tools that helps users develop, analyse, and improve the performance of MPI applications on Linux- and Windows-based HPC clusters.
The Intel Cluster Toolkit license provides access and support for the following programs on either Windows or Linux:
- Intel MPI Library 4.0 Update 1: provides new levels of performance and flexibility for applications that execute on clusters of Intel platforms. The library achieves these advantages by improved interconnect support, faster on-node messaging, and an application tuning capability that adjusts to the cluster architecture and application structure.
The library also features multirail InfiniBand (IB) support and enhancements to the native IB layer for lower communication latencies.
- Intel Trace Analyzer and Collector 8.0 Update 1: adds new features that accelerate the analysis and tuning cycle of MPI-based cluster applications and enable programmers to analyse the effect of advanced interconnects on application performance. Its load imbalance diagram and ideal interconnect simulator help MPI programmers identify further optimisation opportunities.
- Intel Math Kernel Library 10.3 Update 3: is a library of highly optimised, extensively threaded math routines for science, engineering, and financial applications that require maximum performance. Core math functions include BLAS, LAPACK, ScaLAPACK, Sparse Solvers, Fast Fourier Transforms, Cluster FFTs, and Vector Math.
- Intel MPI Benchmarks 3.2.2: provides the following:
- Support of large message buffers greater than 2 gigabytes for some MPI collective benchmarks (e.g., Allgather, Alltoall, Gather, and Scatter) so as to support large core counts
- New command-line options for the Intel MPI Benchmarks executable, "-include/-exclude", giving finer control over which benchmarks in the list are executed
- New benchmarks PingPongSpecificSource and PingPingSpecificSource. These tests receive from an explicitly specified source rank instead of MPI_ANY_SOURCE as in the PingPong and PingPing tests.