Zeno reaches campus

What does the computer game “Quake” have to do with cutting-edge research at the University of Saskatchewan? Both take advantage of Graphics Processing Units (GPUs), and the University’s newest supercomputer, Zeno, has plenty of GPUs and CPUs to help research get done at superspeed.


GPUs are dedicated hardware in a computer, developed to enhance the speed and quality of computer graphics. GPUs as we know them today have been around for close to 20 years. Back in the mid ‘90s, Quake was the killer game of the year, and GPU cards were just becoming commercially available. Moving from Quake’s blocky 320×200 default resolution to a GPU-“accelerated” version at 640×480 was a stunning upgrade in visuals. The quotation marks around “accelerated” are there because while the resolution was much improved, the frame rate was pretty choppy! The idea of offloading some of the graphics processing from the CPU onto a specialized piece of hardware, the GPU, definitely had legs, though, and virtually every personal computer you can buy today includes a GPU to boost graphics performance in both games and productivity software.
GPU technology has come a long way since those first cards appeared. A lot of effort has gone into developing screamingly fast GPUs, driven in large part by demand for bigger, better, and fancier computer games. GPUs are highly specialized: they can do only a small fraction of what a general-purpose CPU can, but they do it extremely quickly. They trade breadth of applicability across all computations for raw speed on a restricted set of calculations.

Somewhere along the way, people got the idea of using GPUs for something other than making graphics go faster, and accelerating scientific computing was the natural next step for researchers. Certain kinds of computer code are particularly amenable to GPUs, and for them the speedup can be substantial: some segments of code can be made to run five to 10 times, or even 100 times, faster on a GPU than on a general-purpose CPU. For research problems requiring a lot of computational power, and specifically those that can take advantage of GPUs, the “time to science” can be reduced dramatically with a moderate upfront investment of time in GPU programming.
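To give a concrete flavour of what “offloading a segment of code to the GPU” looks like, here is a minimal CUDA sketch that adds two arrays together, element by element, with each GPU thread handling one element. It’s a generic textbook-style example rather than code from any U of S project; the array size, variable names, and launch configuration are arbitrary choices for illustration.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Kernel: each GPU thread computes one element of c = a + b.
    __global__ void vectorAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1 << 20;                 // one million elements (arbitrary size)
        const size_t bytes = n * sizeof(float);

        // Host (CPU) arrays.
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Device (GPU) arrays.
        float *da, *db, *dc;
        cudaMalloc(&da, bytes);
        cudaMalloc(&db, bytes);
        cudaMalloc(&dc, bytes);

        // Copy the inputs to the GPU, launch the kernel, copy the result back.
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        const int threadsPerBlock = 256;
        const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
        vectorAdd<<<blocks, threadsPerBlock>>>(da, db, dc, n);

        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("hc[0] = %f\n", hc[0]);         // expect 3.0

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

On a machine with the CUDA toolkit installed, a file like this (say, vecadd.cu; the name is just a placeholder) compiles with the nvcc compiler, e.g. nvcc vecadd.cu -o vecadd. The real speedups come from problems where the per-element work is much heavier than a single addition, but the pattern of copy in, run the kernel in parallel, copy out is the same.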

In order to enable U of S researchers and faculty to teach classes, train highly qualified personnel, and perform research using GPUs, Information and Communications Technology (ICT) recently (Summer-Fall 2012) unveiled Zeno.usask.ca.

Zeno is an 8-node High Performance Computing (HPC) cluster, with a GPU installed in each node.

We’ve installed CUDA, the most widely adopted platform for parallel computation on GPUs, across the entire cluster. This fall, U of S researchers began using Zeno and its GPUs to run GPU-enabled programs and to develop their own custom GPU-enabled code. Zeno has about 100 CPU cores PLUS the GPUs, so it packs a lot of compute power compared to any desktop computer. Better yet, Zeno’s design was based on a WestGrid supercomputer, Parallel.westgrid.ca, so if you need even more computational power, the similarity in configuration should make for an easy transition to the national High Performance Computing resources provided by WestGrid and Compute Canada.
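If you want to see what CUDA finds when you log in to a node, a short device-query program is a handy first sanity check. The sketch below uses the standard CUDA runtime calls cudaGetDeviceCount and cudaGetDeviceProperties; it’s a generic example rather than a Zeno-specific tool, and on a Zeno node you would expect it to report the single Tesla M2075 listed in the Tech Box below.

    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        int count = 0;
        cudaGetDeviceCount(&count);
        printf("CUDA devices found: %d\n", count);

        // Print the name, memory size, and compute capability of each GPU.
        for (int d = 0; d < count; ++d) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, d);
            printf("Device %d: %s, %.1f GB, compute capability %d.%d\n",
                   d, prop.name,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                   prop.major, prop.minor);
        }
        return 0;
    }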

If you’re interested in getting access to Zeno, please contact Jason Hlady at hpc_consult@usask.ca.  He’d be happy to talk about what you can do with Zeno or one of the other pieces of HPC infrastructure here at the U of S.

In the coming weeks on the research blog, we’ll talk a bit about Zeno’s high-performance InfiniBand interconnect, which can make some software run even faster.

For a fun and not-so-deep extra look at General Purpose GPU computing (GPGPU), check out:

http://gizmodo.com/5252545/giz-explains-gpgpu-computing-and-why-itll-melt-your-face-off


Tech Box: Zeno
Zeno.usask.ca: 8-node (computer) cluster
Each of the 8 nodes has:

  • 12 cores (2x hex-core Xeon E5649 CPUs)
  • 24GB RAM
  • 120GB SATA hard drive
  • 1 x NVIDIA Tesla M2075 6GB GPU
  • CUDA GPU parallel libraries
All the nodes are connected to each other by a 4x QDR InfiniBand interconnect.
