ROCm for Windows Training Course
ROCm is an open-source GPU programming platform for AMD GPUs that offers a CUDA-like programming path through HIP as well as OpenCL support. It provides developers with direct access to hardware details, allowing complete control over parallelization. However, this requires a solid understanding of device architecture, memory models, execution models, and optimization techniques.
The recent introduction of ROCm for Windows allows users to install and utilize ROCm on the Windows operating system, which is prevalent in both personal and professional settings. This enables users to harness the power of AMD GPUs for applications such as artificial intelligence, gaming, graphics rendering, and scientific computing.
This instructor-led live training (available online or onsite) is designed for beginner to intermediate developers who want to install and use ROCm on Windows to program AMD GPUs and leverage their parallel processing capabilities.
Upon completion of this training, participants will be able to:
- Establish a development environment featuring the ROCm Platform, an AMD GPU, and Visual Studio Code on Windows.
- Develop a fundamental ROCm program that executes vector addition on the GPU and retrieves results from GPU memory.
- Utilize the ROCm API to query device information, allocate and deallocate device memory, transfer data between host and device, launch kernels, and synchronize threads.
- Write GPU-executing kernels and manipulate data using the HIP language.
- Employ HIP built-in functions, variables, and libraries for common tasks and operations.
- Leverage ROCm and HIP memory spaces—such as global, shared, constant, and local—to optimize data transfers and memory access.
- Control the threads, blocks, and grids that define parallelism using ROCm and HIP execution models.
- Debug and test ROCm and HIP programs using tools like ROCm Debugger and ROCm Profiler.
- Optimize ROCm and HIP programs through techniques such as coalescing, caching, prefetching, and profiling.
Course Format
- Interactive lectures and discussions.
- Extensive exercises and practice opportunities.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request customized training for this course, please contact us to arrange it.
Course Outline
Introduction
- What is ROCm?
- What is HIP?
- ROCm vs CUDA vs OpenCL
- Overview of ROCm and HIP features and architecture
- ROCm for Windows vs ROCm for Linux
Installation
- Installing ROCm on Windows
- Verifying the installation and checking device compatibility
- Updating or uninstalling ROCm on Windows
- Troubleshooting common installation issues
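Once the HIP SDK is installed, a quick sanity check from PowerShell confirms the toolchain is on the PATH and can see the GPU. This is an illustrative sketch; exact tool names and availability vary between ROCm/HIP SDK releases, so some of these commands may differ on a given installation:

```powershell
# Check that the HIP compiler wrapper is visible and report its version
hipcc --version

# Print the detected platform, compiler, and install paths
hipconfig --full

# Enumerate AMD GPUs visible to the HIP runtime
# (hipInfo ships with the SDK as a sample/utility binary)
hipInfo
```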
Getting Started
- Creating a new ROCm project using Visual Studio Code on Windows
- Exploring the project structure and files
- Compiling and running the program
- Displaying the output using printf and fprintf
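A first project can stay host-only: call one runtime API and print the result with printf/fprintf. The sketch below assumes a file named main.cpp compiled with hipcc; the names are illustrative, not part of any SDK template:

```cpp
// main.cpp - minimal host-only HIP program: detect devices and print a summary.
// Build (illustrative): hipcc main.cpp -o hello.exe
#include <cstdio>
#include <hip/hip_runtime.h>

int main() {
    int count = 0;
    hipError_t err = hipGetDeviceCount(&count);
    if (err != hipSuccess) {
        // fprintf sends diagnostics to stderr, separate from normal output
        fprintf(stderr, "hipGetDeviceCount failed: %s\n", hipGetErrorString(err));
        return 1;
    }
    printf("Found %d HIP device(s)\n", count);
    return 0;
}
```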
ROCm API
- Using ROCm API in the host program
- Querying device information and capabilities
- Allocating and deallocating device memory
- Copying data between host and device
- Launching kernels and synchronizing threads
- Handling errors and exceptions
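The API calls listed above come together in the canonical vector-addition example. The following is a sketch of the typical flow, not verbatim course material; the HIP_CHECK macro and all variable names are our own:

```cpp
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <hip/hip_runtime.h>

// Abort with a readable message on any HIP API error.
#define HIP_CHECK(call)                                              \
    do {                                                             \
        hipError_t e_ = (call);                                      \
        if (e_ != hipSuccess) {                                      \
            fprintf(stderr, "HIP error %s at %s:%d\n",               \
                    hipGetErrorString(e_), __FILE__, __LINE__);      \
            exit(1);                                                 \
        }                                                            \
    } while (0)

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    // Query device information before using the device.
    hipDeviceProp_t prop;
    HIP_CHECK(hipGetDeviceProperties(&prop, 0));
    printf("Device 0: %s\n", prop.name);

    // Allocate device memory and copy inputs host -> device.
    float *da, *db, *dc;
    size_t bytes = n * sizeof(float);
    HIP_CHECK(hipMalloc(&da, bytes));
    HIP_CHECK(hipMalloc(&db, bytes));
    HIP_CHECK(hipMalloc(&dc, bytes));
    HIP_CHECK(hipMemcpy(da, a.data(), bytes, hipMemcpyHostToDevice));
    HIP_CHECK(hipMemcpy(db, b.data(), bytes, hipMemcpyHostToDevice));

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256, blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);
    HIP_CHECK(hipGetLastError());       // catch launch-time errors
    HIP_CHECK(hipDeviceSynchronize());  // wait for the kernel to finish

    // Copy the result back and release device memory.
    HIP_CHECK(hipMemcpy(c.data(), dc, bytes, hipMemcpyDeviceToHost));
    printf("c[0] = %.1f\n", c[0]);
    HIP_CHECK(hipFree(da));
    HIP_CHECK(hipFree(db));
    HIP_CHECK(hipFree(dc));
    return 0;
}
```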
HIP Language
- Using HIP language in the device program
- Writing kernels that execute on the GPU and manipulate data
- Using data types, qualifiers, operators, and expressions
- Using built-in functions, variables, and libraries
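As a small illustration of the qualifiers and built-ins covered here, the sketch below (names are our own) combines a __device__ helper, a __global__ kernel, the thread-index built-ins, and a built-in math function:

```cpp
#include <hip/hip_runtime.h>

// __device__ function: callable only from GPU code.
__device__ float squaref(float x) { return x * x; }

// __global__ kernel: each thread computes one element of
// out = sqrt(a^2 + b^2) using its global thread index.
__global__ void hypotKernel(const float* a, const float* b,
                            float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = sqrtf(squaref(a[i]) + squaref(b[i]));  // sqrtf: device math built-in
}
```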
ROCm and HIP Memory Model
- Using different memory spaces, such as global, shared, constant, and local
- Using different memory objects, such as pointers, arrays, textures, and surfaces
- Using different memory access modes, such as read-only, write-only, read-write, etc.
- Using memory consistency models and synchronization mechanisms
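Shared memory and block-level synchronization are usually taught together through a reduction. The sketch below (our own naming; it assumes the kernel is launched with exactly 256 threads per block) shows the global-to-shared staging and the __syncthreads() barriers the memory model requires:

```cpp
#include <hip/hip_runtime.h>

// Block-level sum reduction in __shared__ memory. Each block writes one
// partial sum; the host (or a second kernel) adds the partials.
__global__ void blockSum(const float* in, float* partial, int n) {
    __shared__ float tile[256];            // shared memory: one slot per thread
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + tid;
    tile[tid] = (i < n) ? in[i] : 0.0f;    // stage global -> shared
    __syncthreads();                       // make all loads visible block-wide
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) tile[tid] += tile[tid + s];
        __syncthreads();                   // barrier between reduction steps
    }
    if (tid == 0) partial[blockIdx.x] = tile[0];
}
```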
ROCm and HIP Execution Model
- Using different execution models, such as threads, blocks, and grids
- Using thread functions, such as hipThreadIdx_x, hipBlockIdx_x, hipBlockDim_x, etc.
- Using block functions, such as __syncthreads, __threadfence_block, etc.
- Using grid functions, such as hipGridDim_x, grid-wide synchronization with cooperative groups, etc.
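A common pattern that exercises all three index levels at once is the grid-stride loop, where a fixed launch configuration covers an array of any size because each thread advances by the total thread count in the grid. A minimal sketch (names our own):

```cpp
#include <hip/hip_runtime.h>

// Grid-stride loop: works for any n with any grid/block configuration.
__global__ void scale(float* data, float factor, int n) {
    int stride = blockDim.x * gridDim.x;   // total threads in the grid
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += stride)
        data[i] *= factor;
}

// Host-side launch (illustrative): 64 blocks of 256 threads, regardless of n.
// scale<<<dim3(64), dim3(256)>>>(d_data, 2.0f, n);
```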
Debugging
- Debugging ROCm and HIP programs on Windows
- Using Visual Studio Code debugger to inspect variables, breakpoints, call stack, etc.
- Using ROCm Debugger to debug ROCm and HIP programs on AMD devices
- Using ROCm Profiler to analyze ROCm and HIP programs on AMD devices
Optimization
- Optimizing ROCm and HIP programs on Windows
- Using coalescing techniques to improve memory throughput
- Using caching and prefetching techniques to reduce memory latency
- Using shared memory and local memory techniques to optimize memory accesses and bandwidth
- Using profiling tools to measure and improve execution time and resource utilization
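Coalescing is easiest to see by contrasting two kernels that touch the same row-major matrix. In the sketch below (our own naming), adjacent threads in the first kernel read adjacent addresses, which the hardware can combine into a few wide transactions; in the second, each thread's address is a full row apart, forcing far more transactions for the same amount of data:

```cpp
#include <hip/hip_runtime.h>

// Coalesced: thread i reads element i of one row (consecutive addresses).
__global__ void rowRead(const float* m, float* out, int width, int row) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < width) out[i] = m[row * width + i];
}

// Strided: thread i reads element i of one column (addresses `width` floats
// apart), so neighboring threads hit distant memory locations.
__global__ void colRead(const float* m, float* out,
                        int height, int width, int col) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < height) out[i] = m[i * width + col];
}
```

When a column-wise access pattern is unavoidable, a common fix is to stage a tile of the matrix through shared memory so that the global loads themselves remain coalesced.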
Summary and Next Steps
Requirements
- Understanding of the C/C++ language and parallel programming concepts.
- Basic knowledge of computer architecture and memory hierarchy.
- Experience with command-line tools and code editors.
- Familiarity with the Windows operating system and PowerShell.
Audience
- Developers looking to learn how to install and use ROCm on Windows to program AMD GPUs and exploit their parallelism.
- Developers aiming to write high-performance, scalable code that runs across various AMD devices.
- Programmers interested in exploring the low-level aspects of GPU programming and optimizing code performance.
Open Training Courses require 5+ participants.
Related Courses
Developing AI Applications with Huawei Ascend and CANN
21 Hours
Huawei Ascend comprises a series of AI processors engineered for superior inference and training performance.
This instructor-led training session, available both online and onsite, targets intermediate AI engineers and data scientists aiming to create and refine neural network models utilizing Huawei’s Ascend platform alongside the CANN toolkit.
Upon completion of this program, participants will gain the ability to:
- Establish and configure the CANN development environment.
- Construct AI applications through MindSpore and CloudMatrix workflows.
- Enhance performance on Ascend NPUs via custom operators and tiling techniques.
- Deploy models to either edge or cloud settings.
Course Format
- Engaging lectures and discussions.
- Practical application of Huawei Ascend and the CANN toolkit within sample projects.
- Supervised exercises concentrating on model creation, training, and deployment.
Options for Course Customization
- For tailored training needs aligned with your specific infrastructure or datasets, please reach out to us to arrange it.
Deploying AI Models with CANN and Ascend AI Processors
14 Hours
CANN (Compute Architecture for Neural Networks) serves as Huawei’s AI compute stack, designed for deploying and optimizing AI models on Ascend AI processors.
This instructor-led training session, available both online and onsite, is tailored for intermediate-level AI developers and engineers aiming to efficiently deploy trained AI models onto Huawei Ascend hardware. The curriculum utilizes the CANN toolkit alongside frameworks such as MindSpore, TensorFlow, and PyTorch.
Upon completion of this training, participants will be equipped to:
- Comprehend the CANN architecture and its significance within the AI deployment pipeline.
- Convert and adapt models from leading frameworks into formats compatible with Ascend.
- Leverage tools like ATC, OM model conversion, and MindSpore for inference tasks in both edge and cloud environments.
- Identify deployment challenges and optimize performance on Ascend hardware.
Course Format
- Interactive lectures paired with live demonstrations.
- Practical lab exercises utilizing CANN tools and Ascend simulators or physical devices.
- Real-world deployment scenarios based on actual AI models.
Customization Options
- To arrange a customized training version of this course, please get in touch with us.
AI Inference and Deployment with CloudMatrix
21 Hours
CloudMatrix is Huawei’s unified platform for AI development and deployment, designed to support scalable, production-grade inference pipelines.
This instructor-led live training (available online or onsite) targets beginner to intermediate AI professionals looking to deploy and monitor AI models using the CloudMatrix platform, integrated with CANN and MindSpore.
Upon completion of this training, participants will be able to:
- Utilize CloudMatrix for model packaging, deployment, and serving.
- Convert and optimize models for Ascend chipsets.
- Establish pipelines for both real-time and batch inference tasks.
- Monitor deployments and tune performance in production environments.
Course Format
- Interactive lectures and discussions.
- Hands-on practice with CloudMatrix in real deployment scenarios.
- Guided exercises focusing on conversion, optimization, and scaling.
Customization Options
- To request a customized version of this course tailored to your specific AI infrastructure or cloud environment, please contact us to make arrangements.
GPU Programming on Biren AI Accelerators
21 Hours
Biren AI Accelerators are high-performance GPUs engineered for AI and HPC workloads, supporting large-scale training and inference.
This instructor-led live training (available online or onsite) targets intermediate to advanced developers who wish to program and optimize applications using Biren’s proprietary GPU stack, with practical comparisons to CUDA-based environments.
By the end of this training, participants will be able to:
- Understand Biren GPU architecture and memory hierarchy.
- Set up the development environment and use Biren’s programming model.
- Translate and optimize CUDA-style code for Biren platforms.
- Apply performance tuning and debugging techniques.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of Biren SDK in sample GPU workloads.
- Guided exercises focused on porting and performance tuning.
Course Customization Options
- To request customized training for this course based on your application stack or integration needs, please contact us to arrange it.
Cambricon MLU Development with BANGPy and Neuware
21 Hours
Cambricon MLUs (Machine Learning Units) are specialized AI chips designed for optimized inference and training in both edge computing and data center environments.
This instructor-led, live training session (available online or on-site) is designed for intermediate-level developers who want to build and deploy AI models using the BANGPy framework and Neuware SDK on Cambricon MLU hardware.
Upon completing this training, participants will be able to:
- Set up and configure development environments for BANGPy and Neuware.
- Develop and optimize Python- and C++-based models tailored for Cambricon MLUs.
- Deploy models to edge devices and data centers running the Neuware runtime.
- Integrate machine learning workflows with acceleration features specific to MLUs.
Course Format
- Interactive lectures and discussions.
- Hands-on practice using BANGPy and Neuware for development and deployment.
- Guided exercises focusing on optimization, integration, and testing.
Course Customization Options
- To request a customized training session tailored to your specific Cambricon device model or use case, please contact us to arrange it.
Introduction to CANN for AI Framework Developers
7 Hours
CANN (Compute Architecture for Neural Networks) is Huawei’s AI computing toolkit designed to compile, optimize, and deploy AI models on Ascend AI processors.
This instructor-led live training, available both online and onsite, is tailored for beginner-level AI developers who want to grasp how CANN integrates into the model lifecycle, from training through deployment, and how it collaborates with frameworks such as MindSpore, TensorFlow, and PyTorch.
Upon completing this training, participants will be able to:
- Comprehend the purpose and architecture of the CANN toolkit.
- Configure a development environment featuring CANN and MindSpore.
- Convert and deploy a simple AI model onto Ascend hardware.
- Acquire foundational knowledge to support future CANN optimization or integration initiatives.
Course Format
- Interactive lectures and discussions.
- Practical labs focused on simple model deployment.
- Step-by-step walkthroughs of the CANN toolchain and its integration points.
Course Customization Options
- To request customized training for this course, please contact us to arrange it.
CANN for Edge AI Deployment
14 Hours
Huawei's Ascend CANN toolkit empowers AI inference on edge devices like the Ascend 310. It offers critical tools for compiling, optimizing, and deploying models in environments with limited compute and memory resources.
This instructor-led, live training (available online or onsite) targets intermediate-level AI developers and integrators who want to deploy and optimize models on Ascend edge devices using the CANN toolchain.
Upon completing this training, participants will be able to:
- Prepare and convert AI models for the Ascend 310 using CANN tools.
- Construct lightweight inference pipelines using MindSpore Lite and AscendCL.
- Optimize model performance for scenarios with constrained compute and memory.
- Deploy and monitor AI applications in real-world edge use cases.
Format of the Course
- Interactive lectures and demonstrations.
- Hands-on labs featuring edge-specific models and scenarios.
- Live deployment examples on virtual or physical edge hardware.
Course Customization Options
- To request customized training for this course, please contact us to arrange it.
Understanding Huawei’s AI Compute Stack: From CANN to MindSpore
14 Hours
Huawei’s AI stack, spanning from the low-level CANN SDK to the high-level MindSpore framework, provides a tightly integrated environment for AI development and deployment, specifically optimized for Ascend hardware.
This live, instructor-led training (available online or onsite) targets beginner to intermediate technical professionals who want to understand how CANN and MindSpore components collaborate to support AI lifecycle management and infrastructure decisions.
Upon completion of this training, participants will be able to:
- Comprehend the layered architecture of Huawei’s AI compute stack.
- Recognize how CANN facilitates model optimization and hardware-level deployment.
- Assess the MindSpore framework and its toolchain in comparison to industry alternatives.
- Position Huawei's AI stack within enterprise or cloud/on-premises environments.
Course Format
- Interactive lectures and discussions.
- Live system demonstrations and case-based walkthroughs.
- Optional guided labs covering the model flow from MindSpore to CANN.
Course Customization Options
- To request customized training for this course, please contact us to arrange it.
Optimizing Neural Network Performance with CANN SDK
14 Hours
The CANN SDK (Compute Architecture for Neural Networks) serves as Huawei’s foundational AI compute platform, enabling developers to refine and boost the performance of neural networks deployed on Ascend AI processors.
This instructor-led live training, available either online or on-site, is designed for advanced AI developers and system engineers aiming to enhance inference performance through CANN’s sophisticated toolset, which includes the Graph Engine, TIK, and custom operator development capabilities.
Upon completing this training, participants will be able to:
- Grasp CANN’s runtime architecture and its performance lifecycle.
- Utilize profiling tools and the Graph Engine for performance analysis and optimization.
- Develop and optimize custom operators using TIK and TVM.
- Address memory bottlenecks and increase model throughput.
Course Format
- Interactive lectures and discussions.
- Practical labs featuring real-time profiling and operator tuning.
- Optimization exercises based on edge-case deployment scenarios.
Customization Options
- To arrange a tailored version of this course, please contact us.
CANN SDK for Computer Vision and NLP Pipelines
14 Hours
The CANN SDK (Compute Architecture for Neural Networks) offers robust deployment and optimization tools tailored for real-time AI applications in computer vision and NLP, particularly on Huawei Ascend hardware.
This instructor-led live training (available online or onsite) targets intermediate-level AI practitioners looking to build, deploy, and optimize vision and language models with the CANN SDK for production environments.
Upon completion of this training, participants will be able to:
- Deploy and optimize CV and NLP models utilizing CANN and AscendCL.
- Leverage CANN tools to convert models and integrate them into live pipelines.
- Enhance inference performance for tasks such as detection, classification, and sentiment analysis.
- Construct real-time CV/NLP pipelines for edge or cloud-based deployment scenarios.
Course Format
- Interactive lectures combined with live demonstrations.
- Practical labs focused on model deployment and performance profiling.
- Real-time pipeline design using practical CV and NLP use cases.
Course Customization Options
- To request a customized training session for this course, please reach out to us to arrange it.
Building Custom AI Operators with CANN TIK and TVM
14 Hours
The combination of CANN TIK (Tensor Instruction Kernel) and Apache TVM facilitates advanced optimization and customization of AI model operators specifically for Huawei Ascend hardware.
This instructor-led training session, available both online and in-person, targets advanced system developers looking to construct, deploy, and fine-tune custom operators for AI models utilizing CANN’s TIK programming model and its integration with the TVM compiler.
Upon completing this training, participants will be equipped to:
- Develop and test custom AI operators by leveraging the TIK DSL for Ascend processors.
- Incorporate custom operators into the CANN runtime and execution graph.
- Apply TVM for operator scheduling, automatic tuning, and performance benchmarking.
- Debug and enhance instruction-level performance for complex custom computation patterns.
Course Format
- Interactive lectures combined with live demonstrations.
- Practical coding exercises for operators using TIK and TVM pipelines.
- Hands-on testing and tuning on Ascend hardware or in simulator environments.
Options for Course Customization
- To request a tailored version of this course, please get in touch with us to arrange the details.
Migrating CUDA Applications to Chinese GPU Architectures
21 Hours
Chinese GPU architectures, including Huawei Ascend, Biren, and Cambricon MLUs, provide CUDA alternatives specifically designed for the local AI and high-performance computing (HPC) markets.
This instructor-led live training (available online or onsite) is designed for advanced GPU programmers and infrastructure specialists seeking to migrate and optimize existing CUDA applications for deployment on Chinese hardware platforms.
Upon completion of this training, participants will be able to:
- Assess the compatibility of current CUDA workloads with Chinese chip alternatives.
- Port CUDA codebases to Huawei CANN, Biren SDK, and Cambricon BANGPy environments.
- Compare performance metrics and identify optimization opportunities across different platforms.
- Address practical challenges related to cross-architecture support and deployment.
Course Format
- Interactive lectures and discussions.
- Practical labs focused on code translation and performance comparisons.
- Guided exercises centered on multi-GPU adaptation strategies.
Customization Options
- To request customized training for this course tailored to your specific platform or CUDA project, please contact us to arrange it.
Performance Optimization on Ascend, Biren, and Cambricon
21 Hours
Ascend, Biren, and Cambricon represent the forefront of AI hardware platforms in China, each providing distinct acceleration and profiling capabilities tailored for large-scale AI workloads.
This instructor-led live training, available both online and onsite, is designed for advanced AI infrastructure and performance engineers seeking to optimize model inference and training workflows across these diverse Chinese AI chip ecosystems.
Upon completion of this training, participants will be equipped to:
- Conduct benchmarking of models on Ascend, Biren, and Cambricon platforms.
- Diagnose system bottlenecks and identify memory or compute inefficiencies.
- Implement optimizations at the graph, kernel, and operator levels.
- Refine deployment pipelines to enhance throughput and reduce latency.
Course Format
- Interactive lectures and discussions.
- Practical application of profiling and optimization tools on each platform.
- Guided exercises centered on real-world tuning scenarios.
Customization Options
- To arrange customized training tailored to your specific performance environment or model architecture, please contact us.