Adversarial AI Attacks, Mitigations, and Defense Strategies: A cybersecurity professional’s guide to AI attacks, threat modeling, and securing AI with MLSecOps

  • Length: 586 pages
  • Edition: 1
  • Publisher: Packt Publishing
  • Publication Date: 2024-07-26
  • ISBN-10: 1835087981
  • ISBN-13: 9781835087985

Description

Understand how adversarial attacks work against predictive and generative AI, and learn how to safeguard AI and LLM projects with practical examples leveraging OWASP, MITRE, and NIST

Key Features

  • Understand the connection between AI and security by learning about adversarial AI attacks
  • Discover the latest security challenges in adversarial AI by examining GenAI, deepfakes, and LLMs
  • Implement secure-by-design methods and threat modeling, using standards and MLSecOps to safeguard AI systems
  • Purchase of the print or Kindle book includes a free PDF eBook

Book Description

Adversarial attacks trick AI systems with malicious data, exploiting how models learn to create a new class of security risk. This challenges cybersecurity, forcing defenders to counter an entirely new kind of threat. This book demystifies adversarial attacks and equips cybersecurity professionals with the skills to secure AI technologies, moving beyond research hype and business-as-usual strategies.

This strategy-based book is a comprehensive guide to AI security, presenting a structured approach with practical examples to identify and counter adversarial attacks. Rather than covering a random selection of threats, it consolidates recent research and industry standards, incorporating taxonomies from MITRE, NIST, and OWASP. Next, a dedicated section introduces a secure-by-design AI strategy with threat modeling to demonstrate risk-based defenses and strategies, focusing on integrating MLSecOps and LLMOps into security systems. To gain deeper insights, you’ll work through examples of incorporating continuous integration (CI), MLOps, and security controls, including open-access LLMs and ML SBOMs. Based on the classic NIST pillars, the book provides a blueprint for maturing enterprise AI security, and discusses the role of AI security in safety and ethics as part of Trustworthy AI.
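To make the ML SBOM idea concrete, here is a minimal, hypothetical Python sketch: it hashes a model artifact and records it as a component entry that a CI gate could verify before deployment. The field names and placeholder data are illustrative assumptions, not the book’s examples or the CycloneDX ML-BOM format.

    import hashlib
    import json

    def model_sbom_entry(artifact: bytes, name: str, version: str) -> dict:
        # Hash the model artifact so a CI step can later verify that
        # the deployed bytes match the recorded component.
        return {
            "component": name,                  # illustrative field names,
            "version": version,                 # not a formal SBOM schema
            "type": "machine-learning-model",
            "sha256": hashlib.sha256(artifact).hexdigest(),
        }

    # Placeholder bytes standing in for a serialized model file.
    entry = model_sbom_entry(b"fake-model-weights", "demo-classifier", "1.0.0")
    print(json.dumps(entry, indent=2))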

By the end of this book, you’ll be able to develop, deploy, and secure AI systems effectively.

What you will learn

  • Understand poisoning, evasion, and privacy attacks and how to mitigate them (an evasion example is sketched after this list)
  • Discover how GANs can be used for attacks and deepfakes
  • Explore how LLMs change security, prompt injections, and data exposure
  • Master techniques to poison LLMs with RAG, embeddings, and fine-tuning
  • Explore supply-chain threats and the challenges of open-access LLMs
  • Implement MLSecOps with CI pipelines, MLOps, and SBOMs
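As a minimal taste of the evasion topic above, the following PyTorch sketch implements the classic Fast Gradient Sign Method (FGSM); the tiny model, input shape, and epsilon value are placeholder assumptions for illustration, not code from the book.

    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, y, epsilon=0.03):
        # FGSM: nudge each input feature in the direction that increases
        # the loss, with the change bounded by epsilon per feature.
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

    # Placeholder model and data standing in for a trained image classifier.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)    # stand-in for a normalized image batch
    y = torch.tensor([3])           # arbitrary "true" label
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())  # perturbation never exceeds epsilon

Because the per-feature change is capped at epsilon, FGSM perturbations are often imperceptible to humans while still flipping the model’s prediction.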

Who this book is for

This book tackles AI security from both angles – offense and defense. AI builders (developers and engineers) will learn how to create secure systems, while cybersecurity professionals, such as security architects, analysts, engineers, ethical hackers, penetration testers, and incident responders, will discover methods to combat threats and mitigate risks posed by attackers. The book also provides a secure-by-design approach for leaders who want to build AI with security in mind. To get the most out of this book, you’ll need a basic understanding of security, ML concepts, and Python.

Table of Contents

  1. Getting Started with AI
  2. Building Our Adversarial Playground
  3. Security and Adversarial AI
  4. Poisoning Attacks
  5. Model Tampering with Trojan Horses and Model Reprogramming
  6. Supply Chain Attacks and Adversarial AI
  7. Evasion Attacks against Deployed AI
  8. Privacy Attacks – Stealing Models
  9. Privacy Attacks – Stealing Data
  10. Privacy-Preserving AI
  11. Generative AI – A New Frontier
  12. Weaponizing GANs for Deepfakes and Adversarial Attacks
  13. LLM Foundations for Adversarial AI
  14. Adversarial Attacks with Prompts
  15. Poisoning Attacks and LLMs
  16. Advanced Generative AI Scenarios