Introductory Course on High Performance Computing and Scientific Computing

High performance computing (HPC) is playing a fundamental role in scientific research around the world. Here at the University of Saskatchewan, the research computing group is working to provide our researchers with access to these resources and technologies. As part of these efforts, we designed and delivered an introductory course intended for graduate students, PDFs, and researchers who want to start using HPC resources to accelerate their research.

The first offering of our course ran on September 18 and 19. Over 12 hours, the 25 attendees (mostly graduate students and some staff members) reviewed the basic commands and concepts of a Linux system, along with the characteristics of the different HPC systems we have at the University (Plato, Zeno, and Meton). They learned how to use these systems efficiently and which kinds of problems are best suited to each one: shared memory, distributed memory, GPU clusters, and large-memory systems. Finally, they analyzed and solved practical examples on these systems (for more details, see the course outline below).

We originally designed the course to bring new users with little or no experience with Linux systems and HPC servers to the point where they are able to deploy HPC applications on the system best suited to the problem at hand. The content of the course can be customized for future offerings, however, according to the attendees' backgrounds and interests. We are willing to offer the course again soon, with the next offering potentially in January, as long as there are people interested. Please do not hesitate to contact us at research_computing@usask.ca with your questions and comments if you are interested in attending.

Original course outline:

1. Introduction to scientific computing
2. Linux basics
3. Compiled and interpreted languages
4. High performance computers
5. Running serial code on a server
6. Running parallel code on distributed-memory systems
7. Running parallel code on shared-memory systems
8. Running code with GPU acceleration
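To give a flavour of the parallel-computing topics above, here is a minimal, illustrative sketch (not material from the course itself) of splitting an embarrassingly parallel task across worker processes, using only Python's standard library. The function names and the sum-of-squares task are our own illustration, not part of the course exercises.

```python
# Illustrative sketch: divide an embarrassingly parallel task among
# worker processes and combine the partial results. This uses only
# the Python standard library; the names here are our own example.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum of squares over the half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    """Split [0, n) into chunks, sum each in its own process, combine."""
    step = n // workers
    # The last chunk absorbs any remainder so the chunks cover [0, n).
    chunks = [(k * step, (k + 1) * step if k < workers - 1 else n)
              for k in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    n = 1_000_000
    # Sanity check: the parallel result matches a plain serial loop.
    assert parallel_sum_of_squares(n) == sum(i * i for i in range(n))
```

The same divide-and-combine idea underlies message-passing codes on distributed-memory clusters, although production codes would typically use MPI rather than Python processes.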
