Migrating CUDA Applications to Chinese GPU Architectures Training Course
Chinese GPU architectures, including Huawei Ascend, Biren, and Cambricon MLUs, provide CUDA alternatives specifically designed for the local AI and high-performance computing (HPC) markets.
This instructor-led live training (available online or onsite) is designed for advanced GPU programmers and infrastructure specialists seeking to migrate and optimize existing CUDA applications for deployment on Chinese hardware platforms.
Upon completion of this training, participants will be able to:
- Assess the compatibility of current CUDA workloads with Chinese chip alternatives.
- Port CUDA codebases to Huawei CANN, Biren SDK, and Cambricon BANGPy environments.
- Compare performance metrics and identify optimization opportunities across different platforms.
- Address practical challenges related to cross-architecture support and deployment.
Course Format
- Interactive lectures and discussions.
- Practical labs focused on code translation and performance comparisons.
- Guided exercises centered on multi-GPU adaptation strategies.
Customization Options
- To request a customized training for this course tailored to your specific platform or CUDA project, please contact us to arrange it.
Course Outline
Overview of the Chinese AI GPU Ecosystem
- Comparison of Huawei Ascend, Biren, and Cambricon MLU
- CUDA versus the CANN, Biren SDK, and BANGPy programming models
- Industry trends and vendor ecosystems
Preparing for Migration
- Assessing your CUDA codebase
- Identifying target platforms and SDK versions
- Toolchain installation and environment setup
Code Translation Techniques
- Porting CUDA memory access and kernel logic
- Mapping compute grid/thread models
- Automated versus manual translation options
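The grid/thread mapping above is the arithmetic that has to survive any port: CUDA's (blockIdx, blockDim, threadIdx) triple flattens to a global element index, and whatever launch parameters the target SDK exposes must reproduce the same coverage. A minimal Python sketch of that flattening (a generic illustration of the indexing math, not any vendor's API):

```python
def flat_index(block_idx: int, block_dim: int, thread_idx: int) -> int:
    """CUDA-style 1-D global index: blockIdx.x * blockDim.x + threadIdx.x."""
    return block_idx * block_dim + thread_idx

def launch_config(n_elements: int, block_dim: int) -> int:
    """Grid size that covers n_elements, rounding up (ceil division)."""
    return (n_elements + block_dim - 1) // block_dim

# A ported kernel must keep the same coverage guarantee: every element
# 0..n-1 is visited exactly once, and out-of-range threads are masked
# by the usual `if (i < n)` guard.
n, block = 1000, 256
grid = launch_config(n, block)          # 4 blocks x 256 threads = 1024 slots
covered = sorted(
    flat_index(b, block, t)
    for b in range(grid) for t in range(block)
    if flat_index(b, block, t) < n
)
assert covered == list(range(n))
```

Whether translation is automated or manual, checking this invariant per kernel is a cheap first correctness test before any performance work.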
Platform-Specific Implementations
- Using Huawei CANN operators and custom kernels
- Biren SDK conversion pipeline
- Rebuilding models with BANGPy (Cambricon)
Cross-Platform Testing and Optimization
- Profiling execution on each target platform
- Memory tuning and parallel execution comparisons
- Performance tracking and iteration
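For the tracking-and-iteration loop above, a portable wall-clock harness is often the first baseline before switching to each vendor's profiler. A minimal sketch (generic Python timing, not a vendor tool; the workload callable is a placeholder for your ported op):

```python
import time
from statistics import mean

def benchmark(fn, *, warmup: int = 3, runs: int = 10) -> dict:
    """Time fn() after warm-up runs; return mean and best wall-clock ms.
    Vendor profilers with per-kernel timelines replace this in practice;
    a wall-clock harness is only the portable first iteration."""
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e3)
    return {"mean_ms": mean(samples), "best_ms": min(samples)}

# Usage: run the same workload routed to each target backend and compare.
result = benchmark(lambda: sum(i * i for i in range(10_000)))
assert result["best_ms"] <= result["mean_ms"]
```

Keeping the identical harness across platforms makes the cross-platform numbers comparable, which is the point of the iteration loop.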
Managing Mixed GPU Environments
- Hybrid deployments with multiple architectures
- Fallback strategies and device detection
- Abstraction layers for code maintainability
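A common pattern for the detection-and-fallback strategy above is to probe for each backend's Python binding at startup and route through a single backend name. The module names below are hypothetical placeholders, not confirmed package names; substitute whatever import actually succeeds on your stack:

```python
import importlib

# Probe order and module names are illustrative placeholders only.
_BACKEND_PROBES = [
    ("ascend",    "acl"),      # hypothetical: a Huawei AscendCL binding
    ("cambricon", "bangpy"),   # hypothetical: Cambricon BANGPy
    ("cuda",      "cupy"),     # hypothetical: CUDA via CuPy
]

def detect_backend(probes=_BACKEND_PROBES) -> str:
    """Return the first backend whose Python module imports, else 'cpu'.
    Application code targets this name rather than a vendor API, which
    keeps call sites identical across a mixed-GPU fleet."""
    for name, module in probes:
        try:
            importlib.import_module(module)
            return name
        except ImportError:
            continue
    return "cpu"  # fallback: a portable reference path

backend = detect_backend()
```

The CPU fallback doubles as the reference implementation for correctness checks when a new architecture is added to the fleet.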
Case Studies and Best Practices
- Porting vision/NLP models to Ascend or Cambricon
- Retrofitting inference pipelines on Biren clusters
- Handling version mismatches and API gaps
Summary and Next Steps
Requirements
- Experience programming with CUDA or GPU-based applications
- Understanding of GPU memory models and compute kernels
- Familiarity with AI model deployment or acceleration workflows
Audience
- GPU programmers
- System architects
- Porting specialists
Open Training Courses require 5+ participants.
Related Courses
Developing AI Applications with Huawei Ascend and CANN
21 Hours
Huawei Ascend comprises a series of AI processors engineered for superior inference and training performance.
This instructor-led training session, available both online and onsite, targets intermediate AI engineers and data scientists aiming to create and refine neural network models utilizing Huawei’s Ascend platform alongside the CANN toolkit.
Upon completion of this program, participants will gain the ability to:
- Establish and configure the CANN development environment.
- Construct AI applications through MindSpore and CloudMatrix workflows.
- Enhance performance on Ascend NPUs via custom operators and tiling techniques.
- Deploy models to either edge or cloud settings.
Course Format
- Engaging lectures and discussions.
- Practical application of Huawei Ascend and the CANN toolkit within sample projects.
- Supervised exercises concentrating on model creation, training, and deployment.
Options for Course Customization
- For tailored training aligned with your specific infrastructure or datasets, please reach out to us to make arrangements.
Deploying AI Models with CANN and Ascend AI Processors
14 Hours
CANN (Compute Architecture for Neural Networks) serves as Huawei’s AI compute stack, designed for deploying and optimizing AI models on Ascend AI processors.
This instructor-led training session, available both online and onsite, is tailored for intermediate-level AI developers and engineers aiming to efficiently deploy trained AI models onto Huawei Ascend hardware. The curriculum utilizes the CANN toolkit alongside frameworks such as MindSpore, TensorFlow, and PyTorch.
Upon completion of this training, participants will be equipped to:
- Comprehend the CANN architecture and its significance within the AI deployment pipeline.
- Convert and adapt models from leading frameworks into formats compatible with Ascend.
- Leverage tools like ATC, OM model conversion, and MindSpore for inference tasks in both edge and cloud environments.
- Identify deployment challenges and optimize performance on Ascend hardware.
Course Format
- Interactive lectures paired with live demonstrations.
- Practical lab exercises utilizing CANN tools and Ascend simulators or physical devices.
- Real-world deployment scenarios based on actual AI models.
Customization Options
- To arrange a customized training version of this course, please get in touch with us.
AI Inference and Deployment with CloudMatrix
21 Hours
CloudMatrix is Huawei’s unified platform for AI development and deployment, designed to support scalable, production-grade inference pipelines.
This instructor-led live training (available online or onsite) targets beginner to intermediate AI professionals looking to deploy and monitor AI models using the CloudMatrix platform, integrated with CANN and MindSpore.
Upon completion of this training, participants will be able to:
- Utilize CloudMatrix for model packaging, deployment, and serving.
- Convert and optimize models for Ascend chipsets.
- Establish pipelines for both real-time and batch inference tasks.
- Monitor deployments and tune performance in production environments.
Course Format
- Interactive lectures and discussions.
- Hands-on practice with CloudMatrix in real deployment scenarios.
- Guided exercises focusing on conversion, optimization, and scaling.
Customization Options
- To request a customized version of this course tailored to your specific AI infrastructure or cloud environment, please contact us to make arrangements.
GPU Programming on Biren AI Accelerators
21 Hours
Biren AI Accelerators are high-performance GPUs engineered for AI and HPC workloads, supporting large-scale training and inference.
This instructor-led live training (available online or onsite) targets intermediate to advanced developers who wish to program and optimize applications using Biren’s proprietary GPU stack, with practical comparisons to CUDA-based environments.
By the end of this training, participants will be able to:
- Understand Biren GPU architecture and memory hierarchy.
- Set up the development environment and use Biren’s programming model.
- Translate and optimize CUDA-style code for Biren platforms.
- Apply performance tuning and debugging techniques.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of Biren SDK in sample GPU workloads.
- Guided exercises focused on porting and performance tuning.
Course Customization Options
- To request a customized training for this course based on your application stack or integration needs, please contact us to arrange it.
Cambricon MLU Development with BANGPy and Neuware
21 Hours
Cambricon MLUs (Machine Learning Units) are specialized AI chips designed for optimized inference and training in both edge computing and data center environments.
This instructor-led, live training session (available online or on-site) is designed for intermediate-level developers who want to build and deploy AI models using the BANGPy framework and Neuware SDK on Cambricon MLU hardware.
Upon completing this training, participants will be able to:
- Set up and configure development environments for BANGPy and Neuware.
- Develop and optimize Python- and C++-based models tailored for Cambricon MLUs.
- Deploy models to edge devices and data centers running the Neuware runtime.
- Integrate machine learning workflows with acceleration features specific to MLUs.
Course Format
- Interactive lectures and discussions.
- Hands-on practice using BANGPy and Neuware for development and deployment.
- Guided exercises focusing on optimization, integration, and testing.
Course Customization Options
- To request a customized training session tailored to your specific Cambricon device model or use case, please contact us to arrange it.
Introduction to CANN for AI Framework Developers
7 Hours
CANN (Compute Architecture for Neural Networks) is Huawei’s AI computing toolkit designed to compile, optimize, and deploy AI models on Ascend AI processors.
This instructor-led live training, available both online and onsite, is tailored for beginner-level AI developers who want to grasp how CANN integrates into the model lifecycle, from training through deployment, and how it collaborates with frameworks such as MindSpore, TensorFlow, and PyTorch.
Upon completing this training, participants will be able to:
- Comprehend the purpose and architecture of the CANN toolkit.
- Configure a development environment featuring CANN and MindSpore.
- Convert and deploy a simple AI model onto Ascend hardware.
- Acquire foundational knowledge to support future CANN optimization or integration initiatives.
Course Format
- Interactive lectures and discussions.
- Practical labs focused on simple model deployment.
- Step-by-step walkthroughs of the CANN toolchain and its integration points.
Course Customization Options
- To request a customized training for this course, please contact us to arrange it.
CANN for Edge AI Deployment
14 Hours
Huawei's Ascend CANN toolkit enables AI inference on edge devices such as the Ascend 310. It offers critical tools for compiling, optimizing, and deploying models in environments with limited compute and memory resources.
This instructor-led, live training (available online or onsite) targets intermediate-level AI developers and integrators who want to deploy and optimize models on Ascend edge devices using the CANN toolchain.
Upon completing this training, participants will be able to:
- Prepare and convert AI models for the Ascend 310 using CANN tools.
- Construct lightweight inference pipelines using MindSpore Lite and AscendCL.
- Optimize model performance for scenarios with constrained compute and memory.
- Deploy and monitor AI applications in real-world edge use cases.
Format of the Course
- Interactive lectures and demonstrations.
- Hands-on labs featuring edge-specific models and scenarios.
- Live deployment examples on virtual or physical edge hardware.
Course Customization Options
- To request customized training for this course, please contact us to arrange it.
Understanding Huawei’s AI Compute Stack: From CANN to MindSpore
14 Hours
Huawei’s AI stack, spanning from the low-level CANN SDK to the high-level MindSpore framework, provides a tightly integrated environment for AI development and deployment, specifically optimized for Ascend hardware.
This live, instructor-led training (available online or onsite) targets beginner to intermediate technical professionals who want to understand how CANN and MindSpore components collaborate to support AI lifecycle management and infrastructure decisions.
Upon completion of this training, participants will be able to:
- Comprehend the layered architecture of Huawei’s AI compute stack.
- Recognize how CANN facilitates model optimization and hardware-level deployment.
- Assess the MindSpore framework and its toolchain in comparison to industry alternatives.
- Position Huawei's AI stack within enterprise or cloud/on-premises environments.
Course Format
- Interactive lectures and discussions.
- Live system demonstrations and case-based walkthroughs.
- Optional guided labs covering the model flow from MindSpore to CANN.
Course Customization Options
- To request customized training for this course, please contact us to arrange it.
Optimizing Neural Network Performance with CANN SDK
14 Hours
The CANN SDK (Compute Architecture for Neural Networks) serves as Huawei’s foundational AI compute platform, enabling developers to refine and boost the performance of neural networks deployed on Ascend AI processors.
This instructor-led live training, available either online or on-site, is designed for advanced AI developers and system engineers aiming to enhance inference performance through CANN’s sophisticated toolset, which includes the Graph Engine, TIK, and custom operator development capabilities.
Upon completing this training, participants will be able to:
- Grasp CANN’s runtime architecture and its performance lifecycle.
- Utilize profiling tools and the Graph Engine for performance analysis and optimization.
- Develop and optimize custom operators using TIK and TVM.
- Address memory bottlenecks and increase model throughput.
Course Format
- Interactive lectures and discussions.
- Practical labs featuring real-time profiling and operator tuning.
- Optimization exercises based on edge-case deployment scenarios.
Customization Options
- To arrange a tailored version of this course, please contact us.
CANN SDK for Computer Vision and NLP Pipelines
14 Hours
The CANN SDK (Compute Architecture for Neural Networks) offers robust deployment and optimization tools tailored for real-time AI applications in computer vision and NLP, particularly on Huawei Ascend hardware.
This instructor-led live training (available online or onsite) targets intermediate-level AI practitioners looking to build, deploy, and optimize vision and language models with the CANN SDK for production environments.
Upon completion of this training, participants will be able to:
- Deploy and optimize CV and NLP models utilizing CANN and AscendCL.
- Leverage CANN tools to convert models and integrate them into live pipelines.
- Enhance inference performance for tasks such as detection, classification, and sentiment analysis.
- Construct real-time CV/NLP pipelines for edge or cloud-based deployment scenarios.
Course Format
- Interactive lectures combined with live demonstrations.
- Practical labs focused on model deployment and performance profiling.
- Real-time pipeline design using practical CV and NLP use cases.
Course Customization Options
- To request a customized training session for this course, please reach out to us for arrangement.
Building Custom AI Operators with CANN TIK and TVM
14 Hours
The combination of CANN TIK (Tensor Instruction Kernel) and Apache TVM facilitates advanced optimization and customization of AI model operators specifically for Huawei Ascend hardware.
This instructor-led training session, available both online and in-person, targets advanced system developers looking to construct, deploy, and fine-tune custom operators for AI models utilizing CANN’s TIK programming model and its integration with the TVM compiler.
Upon completing this training, participants will be equipped to:
- Develop and test custom AI operators by leveraging the TIK DSL for Ascend processors.
- Incorporate custom operators into the CANN runtime and execution graph.
- Apply TVM for operator scheduling, automatic tuning, and performance benchmarking.
- Debug and enhance instruction-level performance for complex custom computation patterns.
Course Format
- Interactive lectures combined with live demonstrations.
- Practical coding exercises for operators using TIK and TVM pipelines.
- Hands-on testing and tuning on Ascend hardware or in simulator environments.
Options for Course Customization
- To request a tailored version of this course, please get in touch with us to arrange the details.
Performance Optimization on Ascend, Biren, and Cambricon
21 Hours
Ascend, Biren, and Cambricon represent the forefront of AI hardware platforms in China, each providing distinct acceleration and profiling capabilities tailored for large-scale AI workloads.
This instructor-led live training, available both online and onsite, is designed for advanced AI infrastructure and performance engineers seeking to optimize model inference and training workflows across these diverse Chinese AI chip ecosystems.
Upon completion of this training, participants will be equipped to:
- Conduct benchmarking of models on Ascend, Biren, and Cambricon platforms.
- Diagnose system bottlenecks and identify memory or compute inefficiencies.
- Implement optimizations at the graph, kernel, and operator levels.
- Refine deployment pipelines to enhance throughput and reduce latency.
Course Format
- Interactive lectures and discussions.
- Practical application of profiling and optimization tools on each platform.
- Guided exercises centered on real-world tuning scenarios.
Customization Options
- To arrange customized training tailored to your specific performance environment or model architecture, please contact us.