Lecture NMNV532 - exercises
Exercises
Lecture 0 - introduction to HPC, Unix/shell, remote access, etc.
Lecture 1 - introduction to python
- log in to the lab server, load the python module and start python:
  ssh r3d3.karlin.mff.cuni.cz
  module add python
  python
- vector combination (axpy) and matrix-vector multiplication (gemv) operations
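The two operations above can be sketched in plain NumPy as follows — a minimal illustration of the exercise, with function names borrowed from the BLAS routines they mimic (the course material may structure them differently):

```python
import numpy as np

def axpy(alpha, x, y):
    """Vector combination: returns alpha*x + y (BLAS axpy)."""
    return alpha * x + y

def gemv(alpha, A, x, beta, y):
    """Matrix-vector multiplication: returns alpha*(A @ x) + beta*y (BLAS gemv)."""
    return alpha * (A @ x) + beta * y

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
A = np.eye(3)
print(axpy(2.0, x, y))            # -> [ 6.  9. 12.]
print(gemv(1.0, A, x, 0.0, y))    # -> [1. 2. 3.]
```

A natural follow-up exercise is to implement the same operations with explicit Python loops and compare the timings against the vectorized NumPy versions.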
Lecture 2 - introduction to parallel programming
- OpenMP example in C (in ~hron/GIT/nmnv532/lecture2/omp/)
- MPI example in C (in ~hron/GIT/nmnv532/lecture2/mpi/c/)
- MPI in Python using MPI4Py (in ~hron/GIT/nmnv532/lecture2/mpi/python/)
Lecture 3 - introduction to MPI4Py
- MPI4Py documentation
- MPI operations: send/recv, broadcast/reduce, scatter/gather
- dot product in parallel
- matrix-vector multiplication in parallel
Lecture 4 - parallel matrix operations in MPI4Py
- matrix distribution by rows/columns and by blocks (cartesian communicator)
- Jacobi method in parallel
Lecture 5 - using PETSc4py
- distributed vectors and matrix objects in PETSc
- KSP objects - linear solvers in PETSc [summary]
Lecture 6 - using Global Arrays library GA4py
- Global Arrays - implementation of Partitioned Global Address Space (PGAS) programming model
- vector, matrix objects using GA4py (in ~hron/GIT/nmnv532/lecture5)
Lecture 7 - parallel CG iteration
Lecture 8 - dense matrix-matrix multiplication using MPI4py and GA4py
Final test tasks....
- explore weak (i.e. fixed problem size per processor) and strong (i.e. fixed global problem size) scaling for dense matrix-vector multiplication in parallel using MPI4Py (lectures 3, 4) or GA4py (lecture 6)
- make a table or plot of the scaling efficiency vs. the number of processors
- instead of the matrix-vector operation you can choose any other operation (vector-vector dot product, matrix-matrix product, etc.)
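The efficiency table can be produced with a few lines of Python once the timings are measured. For strong scaling the speedup is S_p = T_1 / T_p and the efficiency E_p = S_p / p; for weak scaling E_p = T_1 / T_p directly. The timing values below are hypothetical placeholders for illustration only — replace them with your own measurements:

```python
# placeholder timings T_p (seconds) for p processes -- made-up values, replace
# them with your own measurements
procs = [1, 2, 4, 8]
times = [10.0, 5.4, 2.9, 1.7]

t1 = times[0]
print(f"{'p':>3} {'T_p':>6} {'speedup':>8} {'efficiency':>10}")
for p, tp in zip(procs, times):
    speedup = t1 / tp        # strong scaling: S_p = T_1 / T_p
    eff = speedup / p        # efficiency:     E_p = S_p / p
    print(f"{p:>3} {tp:>6.2f} {speedup:>8.2f} {eff:>10.2f}")
```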