Parallel computing

Course info:

Semester: 4

General Foundation

ECTS: 6

Hours per week: 3

Professor: T.B.D.

Teaching style: Face-to-face

Grading: Written exams (70%), Programming assignments (30%)

Activity                     Workload (hours)
Lectures                     26
Tutorials                    13
Programming assignments      43
Independent study            68
Course total                 150

Learning Outcomes

The course aims to introduce students to the basic concepts of parallel computing, the topologies of parallel machine interconnection networks, parallel computing patterns, the design and implementation of parallel algorithms in shared and distributed memory environments, and the programming of parallel machines.

Upon successful completion of the course, the student will be able to:

  • Understand the basic concepts of parallel computing.
  • Recognize the particular problems that arise when programming parallel machines.
  • Understand the parallel computing models and become familiar with the “parallel thinking” required for designing efficient parallel algorithms.
  • Apply the basic parallelization techniques in a shared memory environment and design efficient shared memory algorithms.
  • Apply the basic parallelization techniques in a distributed memory environment and design efficient distributed memory algorithms.
  • Understand and apply the basic principles of parallel programming in shared and distributed memory environments.

Skills acquired

  • Retrieve, analyse and synthesise data and information by utilising necessary technologies
  • Decision-Making
  • Work independently / Teamwork
  • Advancement of free, creative and inductive thinking
  • Adapt to new situations
  • Work in an interdisciplinary environment
  • Generate new research ideas
Course content

  • Introductory concepts in parallel computing.
  • Parallel machine architectures. Interconnection network topologies.
  • Parallel computing patterns. Time complexity issues. Brent’s principle.
  • Basic techniques for designing parallel algorithms for shared and distributed memory systems.
  • Parallel algorithms for specific problems in a shared memory environment (sorting and merging algorithms, prefix computation, list computations, etc.).
  • The message passing model and basic parallel algorithms for distributed memory environments (sorting, linear algebra, solving linear systems, etc.).
  • Introduction to parallel programming in shared and distributed memory environments using Pthreads and MPI.
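As a small taste of the Pthreads material in the last item, the following is a minimal sketch of a shared-memory parallel array sum in C. All names here (`parallel_sum`, `worker`, `NTHREADS`) are illustrative, not taken from the course material; the sketch assumes a POSIX system and compilation with `-pthread`.

```c
#include <pthread.h>
#include <stddef.h>

#define N 1000000
#define NTHREADS 4

static long data[N];              /* shared input array            */
static long partial[NTHREADS];    /* one result slot per thread    */

typedef struct { int id; } arg_t;

/* Each thread sums its own contiguous slice of the array and
 * writes the result into its private slot, so no locking is needed. */
static void *worker(void *p) {
    int id = ((arg_t *)p)->id;
    long lo = (long)id * N / NTHREADS;
    long hi = (long)(id + 1) * N / NTHREADS;
    long s = 0;
    for (long i = lo; i < hi; i++)
        s += data[i];
    partial[id] = s;
    return NULL;
}

/* Fork NTHREADS workers, join them, and combine the partial sums. */
long parallel_sum(void) {
    pthread_t tid[NTHREADS];
    arg_t args[NTHREADS];
    for (int t = 0; t < NTHREADS; t++) {
        args[t].id = t;
        pthread_create(&tid[t], NULL, worker, &args[t]);
    }
    long total = 0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];
    }
    return total;
}
```

The fork-then-join structure and the division of work into independent slices is exactly the kind of shared-memory design pattern the course covers; the MPI part of the course applies the same decomposition idea with explicit message passing instead of a shared array.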
Recommended bibliography

  1. A. Grama, G. Karypis, V. Kumar, and A. Gupta, Introduction to Parallel Computing, 2nd edition, Addison-Wesley, 2003.
  2. P. S. Pacheco and M. Malensek, An Introduction to Parallel Programming, 3rd edition, Morgan Kaufmann, 2021.
  3. B. Wilkinson and M. Allen, Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers, Pearson/Prentice Hall, 2006.
  4. M. Quinn, Parallel Programming in C with MPI and OpenMP, McGraw-Hill, 2003.
  5. S. Rajasekaran and J. Reif (eds.), Handbook of Parallel Computing: Models, Algorithms and Applications, Chapman and Hall/CRC, 2007.
  6. F. T. Leighton, Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes, Morgan Kaufmann, 1992.
  7. J. JáJá, An Introduction to Parallel Algorithms, Addison-Wesley, 1992.
  8. A. Gibbons and W. Rytter, Efficient Parallel Algorithms, Cambridge University Press, 1990.
  9. Lawrence Livermore National Laboratory Pthreads Tutorial, https://hpc-tutorials.llnl.gov/posix/
  10. Lawrence Livermore National Laboratory MPI Tutorial, https://hpc-tutorials.llnl.gov/mpi/

Related scientific journals:

  1. Transactions on Parallel Computing, ACM
  2. Journal of Parallel and Distributed Computing, Elsevier
  3. Transactions on Parallel and Distributed Systems, IEEE
  4. International Journal of Parallel Programming, Springer