LLM Design Patterns: A Practical Guide to Building Robust and Efficient AI Systems

  • Length: 534 pages
  • Edition: 1
  • Publisher: Packt Publishing
  • Publication Date: 2025/05/30
  • ISBN-10: 1836207034
  • ISBN-13: 9781836207030
Description

Explore reusable design patterns for LLM application development, including data-centric approaches, model development and fine-tuning, advanced prompting techniques, and RAG

Key Features

  • Learn the end-to-end LLM development lifecycle, including data preparation, training pipelines, and optimization
  • Explore advanced prompting techniques, such as chain-of-thought, tree-of-thought, RAG, and AI agents (see the short sketch after this list)
  • Implement evaluation metrics, interpretability, and bias detection for fair, reliable models
  • Print or Kindle purchase includes a free PDF eBook
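
As a small taste of the prompting patterns listed above, here is a minimal chain-of-thought sketch in Python. The `generate` function is a hypothetical placeholder for whatever LLM client or API you use; it is not code from the book.

```python
# Minimal chain-of-thought prompting sketch.
# `generate` is a hypothetical stand-in for any LLM completion call.
def generate(prompt: str) -> str:
    raise NotImplementedError("Wire this up to your LLM client of choice.")

def chain_of_thought(question: str) -> str:
    # Asking the model to reason step by step before answering is the core
    # of the chain-of-thought pattern.
    prompt = (
        "Answer the question below. Think through the problem step by step, "
        "then give the final answer on a line starting with 'Answer:'.\n\n"
        f"Question: {question}"
    )
    return generate(prompt)

# Example (requires a real `generate` implementation):
# print(chain_of_thought("A train travels 60 km in 45 minutes. "
#                        "What is its average speed in km/h?"))
```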

Book Description

This practical guide for AI professionals enables you to build on the power of design patterns to develop robust, scalable, and efficient large language models (LLMs). Written by a global AI expert and popular author driving standards and innovation in Generative AI, security, and strategy, this book covers the end-to-end lifecycle of LLM development and introduces reusable architectural and engineering solutions to common challenges in data handling, model training, evaluation, and deployment.

You’ll learn to clean, augment, and annotate large-scale datasets, architect modular training pipelines, and optimize models using hyperparameter tuning, pruning, and quantization. The chapters help you explore regularization, checkpointing, fine-tuning, and advanced prompting methods, such as reason-and-act, as well as implement reflection, multi-step reasoning, and tool use for intelligent task completion. The book also highlights Retrieval-Augmented Generation (RAG), graph-based retrieval, interpretability, fairness, and RLHF, culminating in the creation of agentic LLM systems.
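
To make the RAG pattern mentioned above concrete, the following minimal sketch retrieves the most relevant passages by cosine similarity and grounds the generation in them. The `embed` and `generate` functions are hypothetical placeholders for your embedding model and LLM client, not an API defined in the book.

```python
import numpy as np

# Hypothetical placeholders: swap in your embedding model and LLM client.
def embed(text: str) -> np.ndarray: ...
def generate(prompt: str) -> str: ...

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    # Rank passages by cosine similarity between query and passage embeddings.
    q = embed(query)
    sims = []
    for passage in corpus:
        p = embed(passage)
        sims.append(float(q @ p / (np.linalg.norm(q) * np.linalg.norm(p))))
    top = np.argsort(sims)[::-1][:k]
    return [corpus[i] for i in top]

def rag_answer(query: str, corpus: list[str]) -> str:
    # Ground the answer in retrieved context: the core retrieval-augmented
    # generation pattern.
    context = "\n\n".join(retrieve(query, corpus))
    prompt = (
        "Use only the context below to answer the question.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return generate(prompt)
```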

By the end of this book, you’ll be equipped with the knowledge and tools to build next-generation LLMs that are adaptable, efficient, safe, and aligned with human values.

What you will learn

  • Implement efficient data prep techniques, including cleaning and augmentation
  • Design scalable training pipelines with tuning, regularization, and checkpointing
  • Optimize LLMs via pruning, quantization, and fine-tuning (see the sketch after this list)
  • Evaluate models with metrics, cross-validation, and interpretability
  • Understand fairness and detect bias in outputs
  • Develop RLHF strategies to build secure, agentic AI systems
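
As an illustration of the quantization item above, here is a minimal sketch of symmetric int8 weight quantization in NumPy. It is a toy version of the idea, not the book's implementation or a production recipe.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    # Symmetric int8 quantization: map [-max|w|, +max|w|] onto [-127, 127].
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Approximate reconstruction of the original float weights.
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
print("max abs error:", np.max(np.abs(w - dequantize(q, scale))))
```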

Who this book is for

This book is essential for AI engineers, architects, data scientists, and software engineers responsible for developing and deploying AI systems powered by large language models. A basic understanding of machine learning concepts and experience in Python programming are a must.

Table of Contents

  1. Introduction to LLM Design Patterns
  2. Data Cleaning for LLM Training
  3. Data Augmentation
  4. Handling Large Datasets for LLM Training
  5. Data Versioning
  6. Dataset Annotation and Labeling
  7. Training Pipeline
  8. Hyperparameter Tuning
  9. Regularization
  10. Checkpointing and Recovery
  11. Fine-Tuning
  12. Model Pruning
  13. Quantization
  14. Evaluation Metrics
  15. Cross-Validation
  16. Interpretability
  17. Fairness and Bias Detection
  18. Adversarial Robustness
  19. Reinforcement Learning from Human Feedback
  20. Chain-of-Thought Prompting
  21. Tree-of-Thoughts Prompting
  22. Reasoning and Acting
  23. Reasoning WithOut Observation
  24. Reflection Techniques
  25. Automatic Multi-Step Reasoning and Tool Use
  26. Retrieval-Augmented Generation
  27. Graph-Based RAG
  28. Advanced RAG
  29. Evaluating RAG Systems
  30. Agentic Patterns