Understanding Large Language Models: Learning Their Underlying Concepts and Technologies

  • Length: 156 pages
  • Edition: 1
  • Publisher:
  • Publication Date: 2024-01-03
  • ASIN: B0CJ2C8TXQ
  • ISBN-13: 9798868800160
Description

This book will teach you the underlying concepts of large language models (LLMs), as well as the technologies associated with them.

The book starts with an introduction to the rise of conversational AIs such as ChatGPT, and how they are related to the broader spectrum of large language models. From there, you will learn about natural language processing (NLP), its core concepts, and how it has led to the rise of LLMs. Next, you will gain insight into transformers and how their characteristics, such as self-attention, enhance the capabilities of language modeling, along with the unique capabilities of LLMs. The book concludes with an exploration of the architectures of various LLMs and the opportunities presented by their ever-increasing capabilities, as well as the dangers of their misuse.
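The description above only names self-attention; as a purely illustrative sketch (not taken from the book), the NumPy snippet below shows the core scaled dot-product attention step that transformer layers build on. All names, shapes, and values here are made up for demonstration; real transformer layers add multiple heads, masking, and residual connections.

    # Illustrative only: minimal single-head scaled dot-product self-attention.
    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        """Compute single-head self-attention for a sequence of token embeddings X."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project tokens to queries/keys/values
        scores = Q @ K.T / np.sqrt(K.shape[-1])   # similarity of every token with every other token
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
        return weights @ V                         # each output is a weighted mix of value vectors

    rng = np.random.default_rng(0)
    seq_len, d_model = 4, 8                        # 4 tokens, 8-dimensional embeddings (arbitrary)
    X = rng.normal(size=(seq_len, d_model))
    Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)     # (4, 8): one context-aware vector per token

Each row of the output mixes information from every position in the sequence, which is the property that lets transformers model long-range context without recurrence.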

After completing this book, you will have a thorough understanding of LLMs and will be ready to take your first steps in implementing them into your own projects.

What You Will Learn

  • Grasp the underlying concepts of LLMs
  • Gain insight into how the concepts and approaches of NLP have evolved over the years
  • Understand transformer models and attention mechanisms
  • Explore different types of LLMs and their applications
  • Understand the architectures of popular LLMs
  • Delve into misconceptions and concerns about LLMs, as well as how best to utilize them

Who This Book Is For

Anyone interested in learning the foundational concepts of NLP, LLMs, and recent advancements in deep learning
