Funding, News and Announcements
Message from ACENET
Join us for our spring training! For a full listing of our available training, see our . All ACENET sessions are online unless otherwise indicated.
Introductory Programming: Unix Shell, Git and Python
1, 8, 15, 22 May, 1300-1630hrs Atlantic | 1330-1700hrs NL
This is a beginner-level, hands-on series covering the fundamentals of the Unix shell, version control with Git, and Python. Topics include data types, conditional statements, loops and functions, as well as program design, version control, data management, and task automation. Participants will be encouraged to help one another and to apply what they have learned to their own research problems. The goal is to teach the practical knowledge needed to start programming, debugging and using Python in everyday tasks. Prerequisites: no previous knowledge of the tools presented and no programming experience are required, but intermediate-level experience with a computer is highly recommended.
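As a taste of the material, here is a minimal Python sketch of a function, a loop, and a conditional working together (the function name and sample data are invented for illustration):

```python
def count_long_words(words, min_length=5):
    """Return how many words are at least min_length characters long."""
    count = 0
    for word in words:                 # loop over each item
        if len(word) >= min_length:    # conditional test
            count += 1
    return count

sample = ["shell", "git", "python", "loop"]
print(count_long_words(sample))  # "shell" and "python" qualify, so prints 2
```

The same pattern of looping over data and testing a condition underlies most everyday automation tasks covered in the series.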
Introductory Programming in R (In-person only)
6, 7 May, 0900-1300hrs Atlantic
St. FX University
This in-person-only session will start with basic R syntax and the RStudio notebook interface. Then, we'll teach you how to import CSV files, the structure of data frames, how to deal with factors, how to add and remove rows and columns, how to create functions and loops in R, and how to calculate summary statistics from a data frame, along with a brief introduction to plotting. The last lesson demonstrates how to run R programs from command-line interfaces, which is useful in the domain of high-performance computing. The principles, techniques, and examples covered in the workshop apply to a wide variety of research areas, including the sciences, arts, and business. No previous experience with digital tools or programming is required for this workshop.
Basics of Computers
9 May, 1300-1500hrs Atlantic | 1330-1530hrs NL
Most of us have experience using a computer, whether for school, work, or entertainment, but how many of us have actually had an expert teach us how to use it? This talk won't teach you how to troubleshoot everything, but it will give you insight into how media, programs and data are encoded and used by computers, so you can make more sense of why computers behave the way they do, and solve some of your problems with greater efficiency and less frustration. We will provide an approachable overview of how a computer works, by both looking at computers' history and breaking one down to explain individual components, before highlighting some of the trade-offs to consider when buying a computer. We will provide practical, simple, and actionable advice on digital security and show you a few "pro tips" on how to make the most of your workstation, phone, or whatever device you happen to use. Whether you have a lot or a little experience with digital technology, if you want to learn how to use your devices more effectively, this workshop is for you!
Introduction to High Performance Computing (HPC)
14 May, 1000-1130hrs Atlantic | 1030-1200hrs NL
What is High Performance Computing (HPC) and what can it do for me? How can ACENET help? HPC is used by researchers across many disciplines to tackle analyses too large or complex for a desktop, or to run them with improved efficiency. This session takes participants through the preliminary stages of learning about HPC and computing clusters, and how to get started with this type of computing. It then reviews software packages available for applications, data analysis, software development and compiling code. Finally, participants will be introduced to the concept of parallel computing to achieve much faster results in analysis. This session is designed for those with no prior experience in HPC who are looking for an introduction and overview.
Introduction to Linux
15 May, 1000-1130hrs Atlantic / 1030-1200hrs NL
Linux provides the terminal interface that lets you use supercomputing clusters from your desktop. It's the tool you need to get your data onto the clusters, run your programs, and get your data back. In this session, learn how to create and navigate directories for your data, transfer files, manage your storage, run programs on the compute clusters, and set file permissions. This workshop is designed for those with no prior experience working with a terminal interface.
Introduction to Shell Scripting
16 May, 1000-1130hrs Atlantic / 1030-1200hrs NL
Shell scripting helps you save time, automate file management tasks, and make better use of the power of Linux. This session teaches you how to name, locate and set permissions for executable files, and how to take input and produce output. You will learn about job scripts, shell variables and looping commands. Prerequisite: completion of ACENET Basic Series Introduction to Linux, or previous experience with Linux.
Job Scheduling with Slurm
17 May, 1000-1130hrs Atlantic / 1030-1200hrs NL
The national systems use a job scheduler called Slurm. In this session you will learn how Slurm works and how it allocates jobs, helping you to: minimize wait time by framing reasonable requests; ask for only the resources you need, improving efficiency; increase throughput; run more jobs simultaneously; and troubleshoot and address crashes. This workshop is designed for new HPC users, or for experienced users either transitioning to Slurm or seeking to improve efficiency with the scheduler. Prerequisites: completion of Introduction to Linux and Introduction to Shell Scripting, or prior experience with both.
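To give a sense of what framing a request looks like, here is a minimal sketch of a Slurm job script. The resource values and the program name are placeholders, not recommendations; the `#SBATCH` directives shown (`--time`, `--mem-per-cpu`, `--cpus-per-task`) are standard Slurm options:

```shell
#!/bin/bash
#SBATCH --time=00:10:00       # wall-clock limit: ask for what the job needs (placeholder)
#SBATCH --mem-per-cpu=1G      # memory per core (placeholder)
#SBATCH --cpus-per-task=1     # cores for this task (placeholder)

echo "Running on $(hostname)"
./my_program                  # hypothetical executable
```

A script like this would be submitted with `sbatch`; the session covers how to choose values that keep wait times short and throughput high.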
Overview of Parallel Computing
21, 23 May, 1400-1600hrs Atlantic / 1430-1630hrs NL
Parallel computing is the business of breaking a large problem into tens, hundreds, or even thousands of smaller problems which can then be solved at the same time using a cluster of computers, or supercomputer. It can reduce processing time to a fraction of what it would be on a desktop or workstation, or enable you to tackle larger, more complex problems. It's widely used in big data mining, AI, time-critical simulations, and advanced graphics such as augmented or virtual reality. It's used in fields as diverse as genetics, biotech, GIS, computational fluid dynamics, medical imaging, drug discovery, and agriculture. Prerequisites: comfort in using the command line, the ability to understand a shell script and make simple edits to it, and knowledge of at least one programming language sufficient to write a simple program.
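The core idea of splitting a problem into smaller pieces solved simultaneously can be sketched in a few lines of Python using the standard library's `multiprocessing` module (the squaring workload here is an invented stand-in for a real computation):

```python
from multiprocessing import Pool

def square(n):
    """Stand-in for an expensive per-item computation."""
    return n * n

if __name__ == "__main__":
    numbers = list(range(10))
    # Pool.map splits the list into chunks and hands them to 4 worker
    # processes, which compute their chunks at the same time.
    with Pool(processes=4) as pool:
        results = pool.map(square, numbers)
    print(sum(results))  # same answer as a serial loop: 285
```

On a cluster the same divide-and-combine pattern is scaled across many machines rather than a few local processes.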
Dask
4, 6 June, 1400-1600hrs Atlantic / 1430-1630hrs NL
Python is a popular language because its simple syntax and "batteries included" philosophy make it easy to create programs quickly. However, the language has some drawbacks. It is notoriously difficult to parallelize because of a component called the global interpreter lock, and Python programs typically take many times longer to run than programs in compiled languages such as Fortran, C, and C++, making Python less popular for performance-critical programs. Dask was developed to address the first problem, parallelism, by constructing task graphs that can be processed using a variety of parallelization and hardware configurations. The second problem, performance, can be addressed by converting performance-critical parts into a compiled language such as C/C++ nearly automatically using Cython. Together, Cython and Dask can be used to gain greater performance and parallelism in Python programs. Prerequisites: completion of Overview of Parallel Computing; familiarity with Python programming.
OpenMP
11, 13 June, 1400-1600hrs Atlantic / 1430-1630hrs NL
This session will introduce parallel programming using OpenMP. Shared-memory multicore systems are commonly programmed using OpenMP. It has been extensively adopted in the supercomputing world and is gaining attention in general-purpose computing as well. OpenMP facilitates parallel programming by providing cross-platform and cross-compiler support. Although OpenMP does not parallelize code automatically, existing code can be parallelized without having to rewrite it significantly. By using compiler directives, C, C++, and Fortran programmers can fully control parallelization. In addition to CPU parallel programming, modern OpenMP has GPU offloading capabilities. Compared to native GPU languages, such as CUDA, OpenMP makes GPU programming easier and more performance-portable. Furthermore, OpenMP supports heterogeneous computations using CPU and GPU resources simultaneously to improve application performance. Prerequisites: completion of Overview of Parallel Computing; familiarity with C, C++ or Fortran.
Message Passing Interface (MPI)
18, 20 June, 1400-1600hrs Atlantic / 1430-1630hrs NL
Parallel programming techniques will be introduced as a natural extension to sequential programming using the Message Passing Interface (MPI). MPI allows more than one computer to be used to solve a single problem by facilitating communication among machines over the network. By breaking a problem into smaller chunks, each machine can work on a smaller subset of the bigger problem. Moreover, MPI also provides a way for different machines to communicate and synchronize their results when needed. MPI has been used very successfully with compiled languages such as C, Fortran and C++. Prerequisites: completion of Overview of Parallel Computing; familiarity with C, C++ or Fortran.
Graphics Processing Units (GPUs)
25, 27 June, 1400-1600hrs Atlantic / 1430-1630hrs NL
Graphics Processing Units (GPUs) can speed up many computational workloads when computationally expensive, parallelizable tasks are offloaded to these processors. This introduction to CUDA programming discusses the architectural differences between CPUs and GPUs and their influence on performance. We will transform serial CPU algorithms written in C/C++ into CUDA kernels that can efficiently use the parallel architecture of GPUs, and explore how they can be optimized. Prerequisites: completion of Overview of Parallel Computing; familiarity with C, C++ or Fortran.
More training sessions can be found through our partners at , , and the .