This course prepares students to develop code that processes large amounts of data in parallel on Graphics Processing Units (GPUs). Students will learn how to implement software that solves complex problems on the leading consumer- and enterprise-grade GPUs using Nvidia CUDA. The course focuses on the hardware and software capabilities of these devices, including the use of 100s to 1000s of threads and the various forms of memory.
About this Course
Prerequisites: Some experience in C/C++ programming
What You Will Learn
- Students will learn how to utilize the CUDA framework to write C/C++ software that runs on CPUs and Nvidia GPUs.
- Students will transform sequential CPU algorithms and programs into CUDA kernels that execute 100s to 1000s of times simultaneously on GPU hardware.
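As a rough sketch of what that transformation looks like (illustrative names such as scale_cpu and scale_kernel, not course material), a sequential CPU loop over an array becomes a kernel in which each of many concurrent threads handles one element:

// Sequential CPU version: one element per loop iteration.
void scale_cpu(float *data, int n, float factor) {
    for (int i = 0; i < n; ++i)
        data[i] *= factor;
}

// CUDA version: launched across 100s to 1000s of threads, each handling one element.
__global__ void scale_kernel(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's global index
    if (i < n)                                      // guard: the grid may overhang the data
        data[i] *= factor;
}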
Skills You Will Gain
- CUDA
- Algorithms
- C/C++
- GPU
- Nvidia
Offered By

Johns Hopkins University
The mission of The Johns Hopkins University is to educate its students and cultivate their capacity for life-long learning, to foster independent and original research, and to bring the benefits of discovery to the world.
Syllabus: What You Will Learn in This Course
Course Overview
The purpose of this module is for students to understand how the course will be run, the topics it covers, how they will be assessed, and what is expected of them.
Threads, Blocks and Grids
The single most important concept for using GPUs to solve complex and large-scale problems is the management of threads. CUDA provides two- and three-dimensional logical abstractions of threads, blocks, and grids. Students will develop programs that utilize threads, blocks, and grids to process large two- and three-dimensional data sets.
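A minimal sketch of that idea, with assumed names (process2D, launch2D) rather than course code: a two-dimensional grid of two-dimensional blocks, in which each thread derives its own row and column and processes one element of the data set.

// Each thread computes its (row, col) position from its block and thread indices.
__global__ void process2D(float *data, int width, int height) {
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    if (row < height && col < width)          // guard: the grid may overhang the data
        data[row * width + col] *= 2.0f;      // placeholder per-element work
}

// Host side: size the grid of blocks so that it covers the whole data set.
void launch2D(float *d_data, int width, int height) {
    dim3 block(16, 16);                       // 256 threads per block
    dim3 grid((width  + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    process2D<<<grid, block>>>(d_data, width, height);
}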
Host and Global Memory
To manage the access and modification of data in physical memory effectively, students will need to load data into CPU (host) and GPU (global) general-purpose memory. Students will create software that allocates host memory and transfers the data into global memory for use by threads. Students will also learn the capabilities and relative speeds of these types of memory.
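A sketch of that allocate-and-transfer workflow, under assumed names (h_data for host memory, d_data for global memory) and with error checking omitted for brevity:

#include <cstdlib>
#include <cuda_runtime.h>

void round_trip(int n) {
    size_t bytes = n * sizeof(float);

    float *h_data = (float *)malloc(bytes);   // host (CPU) memory
    float *d_data = nullptr;
    cudaMalloc(&d_data, bytes);               // global (GPU) memory

    // ... fill h_data with input values here ...
    cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);  // host -> global
    // ... launch kernels that read and modify d_data here ...
    cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);  // global -> host

    cudaFree(d_data);                         // release GPU memory
    free(h_data);                             // release CPU memory
}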
Shared and Constant Memory
To improve the performance of GPU software, students will need to utilize mutable (shared) and static (constant) memory. They will use these memories to apply masks to all items of a data set, to manage communication between threads, and to cache data in complex programs.
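For instance, a 1D convolution in this spirit might keep the mask in constant memory and cache each block's slice of the input in shared memory. The names and sizes below (d_mask, MASK_WIDTH, BLOCK_SIZE, convolve1D) are assumptions for illustration, not the course's own code.

#define MASK_WIDTH 5
#define BLOCK_SIZE 256

__constant__ float d_mask[MASK_WIDTH];        // static mask, visible to every thread

__global__ void convolve1D(const float *in, float *out, int n) {
    __shared__ float tile[BLOCK_SIZE];        // this block's slice of the input
    int gid = blockIdx.x * blockDim.x + threadIdx.x;
    int tid = threadIdx.x;

    tile[tid] = (gid < n) ? in[gid] : 0.0f;   // each thread caches one element
    __syncthreads();                          // wait until the whole tile is loaded

    if (gid >= n) return;
    float sum = 0.0f;
    for (int j = 0; j < MASK_WIDTH; ++j) {
        int t = tid + j - MASK_WIDTH / 2;     // neighbour's index within the tile
        int g = gid + j - MASK_WIDTH / 2;     // neighbour's global index
        if (g >= 0 && g < n)                  // skip positions outside the data
            sum += d_mask[j] * ((t >= 0 && t < blockDim.x) ? tile[t] : in[g]);
    }
    out[gid] = sum;
}

// Host side: copy the mask into constant memory before launching the kernel,
// e.g. cudaMemcpyToSymbol(d_mask, h_mask, MASK_WIDTH * sizeof(float));
// and launch with blockDim.x == BLOCK_SIZE.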
About the GPU Programming Specialization
This specialization is intended for data scientists and software developers who want to create software that takes advantage of commonly available hardware. Students will be introduced to CUDA and to libraries that perform numerous computations in parallel and rapidly. Applications of these skills include machine learning, image/audio signal processing, and data processing.

Frequently Asked Questions
When will I have access to the lectures and assignments?
What will I get if I subscribe to this Specialization?
Is financial aid available?
Can I program on my own desktop/laptop?
More questions? Visit the Learner Help Center.