Cybersecurity in AI Systems Training Course
Protecting AI systems involves distinct challenges that set them apart from conventional cybersecurity methods. These systems are susceptible to adversarial attacks, data poisoning, and model theft, any of which can severely disrupt business operations and compromise data integrity. This course delves into essential cybersecurity practices for AI environments, addressing adversarial machine learning, safeguarding data within machine learning pipelines, and meeting compliance standards for robust AI deployment.
Designed for intermediate-level professionals in AI and cybersecurity, this instructor-led, live training (available online or onsite) helps participants grasp and mitigate security vulnerabilities specific to AI models and systems. It is particularly relevant for industries with strict regulatory frameworks, such as finance, data governance, and consulting.
Upon completing this training, participants will be equipped to:
- Identify types of adversarial attacks targeting AI systems and learn defense strategies.
- Apply model hardening techniques to fortify machine learning pipelines.
- Safeguard data security and integrity across machine learning models.
- Navigate regulatory compliance requirements associated with AI security.
Course Format
- Interactive lectures and discussions.
- Ample exercises and practical applications.
- Hands-on implementation within a live lab environment.
Customization Options
- To request tailored training for this course, please contact us to arrange details.
Course Outline
Introduction to AI Security Challenges
- Understanding security risks unique to AI systems
- Comparing traditional cybersecurity vs. AI cybersecurity
- Overview of attack surfaces in AI models
Adversarial Machine Learning
- Types of adversarial attacks: evasion, poisoning, and extraction
- Implementing adversarial defenses and countermeasures
- Case studies on adversarial attacks in different industries
Model Hardening Techniques
- Introduction to model robustness and hardening
- Techniques for reducing model vulnerability to attacks
- Hands-on with defensive distillation and other hardening methods
Data Security in Machine Learning
- Securing data pipelines for training and inference
- Preventing data leakage and model inversion attacks
- Best practices for managing sensitive data in AI systems
AI Security Compliance and Regulatory Requirements
- Understanding regulations around AI and data security
- Compliance with GDPR, CCPA, and other data protection laws
- Developing secure and compliant AI models
Monitoring and Maintaining AI System Security
- Implementing continuous monitoring for AI systems
- Logging and auditing for security in machine learning
- Responding to AI security incidents and breaches
Future Trends in AI Cybersecurity
- Emerging techniques in securing AI and machine learning
- Opportunities for innovation in AI cybersecurity
- Preparing for future AI security challenges
Summary and Next Steps
Requirements
- Foundational knowledge of machine learning and AI concepts
- Familiarity with cybersecurity principles and practices
Target Audience
- AI and machine learning engineers seeking to enhance security in AI systems
- Cybersecurity professionals specializing in AI model protection
- Compliance and risk management professionals in data governance and security
Open Training Courses require 5+ participants.
Cybersecurity in AI Systems Training Course - Booking
Cybersecurity in AI Systems Training Course - Enquiry
Cybersecurity in AI Systems - Consultancy Enquiry
Testimonials (1)
The profesional knolage and the way how he presented it before us
Miroslav Nachev - PUBLIC COURSE
Course - Cybersecurity in AI Systems
Upcoming Courses
Related Courses
ISACA Advanced in AI Security Management (AAISM)
21 HoursAAISM provides an advanced framework for assessing, governing, and managing security risks in artificial intelligence systems.
This instructor-led, live training (available online or onsite) is designed for advanced-level professionals looking to implement effective security controls and governance practices for enterprise AI environments.
Upon completing this program, participants will be prepared to:
- Evaluate AI security risks using industry-recognized methodologies.
- Implement governance models for responsible AI deployment.
- Align AI security policies with organizational goals and regulatory expectations.
- Enhance resilience and accountability within AI-driven operations.
Format of the Course
- Facilitated lectures supported by expert analysis.
- Practical workshops and assessment-based activities.
- Applied exercises using real-world AI governance scenarios.
Course Customization Options
- For tailored training aligned to your organizational AI strategy, please contact us to customize the course.
AI Governance, Compliance, and Security for Enterprise Leaders
14 HoursThis instructor-led, live training in Brazil (online or onsite) targets intermediate-level enterprise leaders who wish to understand how to govern and secure AI systems responsibly and in compliance with emerging global frameworks such as the EU AI Act, GDPR, ISO/IEC 42001, and the U.S. Executive Order on AI.
Upon completing this training, participants will be able to:
- Grasp the legal, ethical, and regulatory risks associated with using AI across various departments.
- Interpret and implement major AI governance frameworks, including the EU AI Act, NIST AI RMF, and ISO/IEC 42001.
- Establish robust security, auditing, and oversight policies for AI deployment within the enterprise.
- Develop procurement and usage guidelines for both third-party and in-house AI systems.
AI Risk Management and Security in the Public Sector
7 HoursArtificial Intelligence (AI) brings new dimensions of operational risk, governance challenges, and cybersecurity exposure for government agencies and departments.
This instructor-led, live training (online or onsite) is aimed at public sector IT and risk professionals with limited prior experience in AI who wish to understand how to evaluate, monitor, and secure AI systems within a government or regulatory context.
By the end of this training, participants will be able to:
- Interpret key risk concepts related to AI systems, including bias, unpredictability, and model drift.
- Apply AI-specific governance and auditing frameworks such as NIST AI RMF and ISO/IEC 42001.
- Recognize cybersecurity threats targeting AI models and data pipelines.
- Establish cross-departmental risk management plans and policy alignment for AI deployment.
Format of the Course
- Interactive lecture and discussion of public sector use cases.
- AI governance framework exercises and policy mapping.
- Scenario-based threat modeling and risk evaluation.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Introduction to AI Trust, Risk, and Security Management (AI TRiSM)
21 HoursThis instructor-led live training in Brazil (available online or onsite) is designed for IT professionals from beginner to intermediate levels who seek to understand and implement AI TRiSM within their organizations.
Upon completion of this course, participants will be equipped to:
- Comprehend the fundamental concepts and significance of managing trust, risk, and security in AI.
- Detect and address potential risks linked to AI implementations.
- Apply security best practices tailored for AI technologies.
- Gain insight into regulatory compliance and ethical implications relevant to AI.
- Formulate strategies for robust AI governance and management.
Building Secure and Responsible LLM Applications
14 HoursThis instructor-led live training in Brazil (online or onsite) is designed for AI developers, architects, and product managers at an intermediate to advanced level. The course focuses on helping participants identify and mitigate risks associated with LLM-powered applications, including prompt injection, data leakage, and unfiltered outputs, while incorporating security controls like input validation, human-in-the-loop oversight, and output guardrails.
Upon completion of this training, participants will be able to:
- Comprehend the core vulnerabilities inherent in LLM-based systems.
- Apply secure design principles to the architecture of LLM applications.
- Utilize tools such as Guardrails AI and LangChain for validation, filtering, and safety.
- Integrate techniques like sandboxing, red teaming, and human-in-the-loop reviews into production-grade pipelines.
EXO Security and Governance: Offline Model Management
14 HoursThis instructor-led, live training in Brazil (online or onsite) is aimed at security engineers and compliance officers who wish to harden EXO deployments, control model access, and govern AI workloads running entirely on-premise.
Introduction to AI Security and Risk Management
14 HoursThis instructor-led, live training in Brazil (online or onsite) is designed for beginner-level IT security, risk, and compliance professionals seeking to understand foundational AI security concepts, threat vectors, and global frameworks such as the NIST AI RMF and ISO/IEC 42001.
Upon completing this training, participants will be capable of:
- Comprehending the unique security risks inherent to AI systems.
- Recognizing threat vectors like adversarial attacks, data poisoning, and model inversion.
- Applying foundational governance models, including the NIST AI Risk Management Framework.
- Aligning AI usage with emerging standards, compliance guidelines, and ethical principles.
OWASP GenAI Security
14 HoursBased on the latest OWASP GenAI Security Project guidance, participants will learn to identify, assess, and mitigate AI-specific threats through hands-on exercises and real-world scenarios.
Privacy-Preserving Machine Learning
14 HoursThis instructor-led, live training in Brazil (online or in-person) targets experienced professionals seeking to implement and evaluate techniques such as federated learning, secure multiparty computation, homomorphic encryption, and differential privacy in real-world machine learning pipelines.
By the conclusion of this training, participants will be able to:
- Understand and compare key privacy-preserving techniques in ML.
- Implement federated learning systems using open-source frameworks.
- Apply differential privacy for safe data sharing and model training.
- Use encryption and secure computation techniques to protect model inputs and outputs.
Red Teaming AI Systems: Offensive Security for ML Models
14 HoursThis instructor-led live training in Brazil (offered online or onsite) targets advanced security professionals and ML specialists seeking to simulate attacks on AI systems, uncover vulnerabilities, and strengthen the robustness of deployed AI models.
By the end of this training, participants will be able to:
- Simulate real-world threats to machine learning models.
- Generate adversarial examples to test model robustness.
- Assess the attack surface of AI APIs and pipelines.
- Design red teaming strategies for AI deployment environments.
Securing Edge AI and Embedded Intelligence
14 HoursThis instructor-led, live training in Brazil (online or onsite) is aimed at intermediate-level engineers and security professionals who wish to secure AI models deployed at the edge against threats such as tampering, data leakage, adversarial inputs, and physical attacks.
By the end of this training, participants will be able to:
- Identify and assess security risks in edge AI deployments.
- Apply tamper resistance and encrypted inference techniques.
- Harden edge-deployed models and secure data pipelines.
- Implement threat mitigation strategies specific to embedded and constrained systems.
Securing AI Models: Threats, Attacks, and Defenses
14 HoursThis instructor-led live training in Brazil (online or on-site) is tailored for intermediate-level professionals in machine learning and cybersecurity who seek to understand and mitigate emerging threats against AI models. The course combines conceptual frameworks with hands-on defenses, such as robust training and differential privacy.
By the end of this training, participants will be able to:
- Identify and classify AI-specific threats like adversarial attacks, inversion, and poisoning.
- Use tools such as the Adversarial Robustness Toolbox (ART) to simulate attacks and test models.
- Apply practical defenses including adversarial training, noise injection, and privacy-preserving techniques.
- Design threat-aware model evaluation strategies for production environments.
Security and Privacy in TinyML Applications
21 HoursTinyML involves deploying machine learning models on low-power, resource-constrained devices at the network edge.
This instructor-led live training (available online or onsite) is designed for advanced professionals aiming to secure TinyML pipelines and integrate privacy-preserving techniques into edge AI applications.
Upon completing this course, participants will be able to:
- Recognize security risks specific to on-device TinyML inference.
- Implement privacy mechanisms for edge AI deployments.
- Strengthen TinyML models and embedded systems against adversarial threats.
- Apply best practices for secure data handling in resource-constrained environments.
Course Format
- Interactive lectures accompanied by expert-led discussions.
- Practical exercises focused on real-world threat scenarios.
- Hands-on implementation using embedded security tools and TinyML platforms.
Customization Options
- Organizations can request a customized version of this training to align with their specific security and compliance requirements.
Safe & Secure Agentic AI: Governance, Identity, and Red-Teaming
21 HoursThis course provides comprehensive coverage of governance, identity management, and adversarial testing for agentic AI systems, with a focus on enterprise-safe deployment patterns and practical red-teaming techniques.
Delivered as instructor-led live training (available online or onsite), this program targets advanced-level practitioners aiming to design, secure, and evaluate agent-based AI systems within production environments.
Upon completion, participants will be able to:
- Establish governance models and policies to ensure the safe deployment of agentic AI.
- Design non-human identity and authentication flows for agents, adhering to least-privilege access principles.
- Implement access controls, audit trails, and observability mechanisms specifically tailored for autonomous agents.
- Plan and execute red-team exercises to identify potential misuses, escalation paths, and risks of data exfiltration.
- Mitigate common threats to agentic systems through effective policy, engineering controls, and continuous monitoring.
Format of the Course
- Interactive lectures combined with threat-modeling workshops.
- Hands-on labs covering identity provisioning, policy enforcement, and adversary simulation.
- Red-team versus blue-team exercises and a final course assessment.
Course Customization Options
- To request customized training for this course, please contact us to arrange.