
Reinforcement Learning: An Introduction, 2nd Edition

  • Length: 552 pages
  • Edition: 2nd
  • Publisher: The MIT Press
  • Publication Date: 2018-11-13
  • ISBN-10: 0262039249
  • ISBN-13: 9780262039246
  • Sales Rank: #8167
Description

The significantly expanded and updated new edition of a widely used text on reinforcement learning, one of the most active research areas in artificial intelligence.

Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives while interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the field’s key ideas and algorithms. This second edition has been significantly expanded and updated, presenting new topics and updating coverage of other topics.
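
Concretely, the interaction the authors formalize can be pictured as a simple loop: the agent observes a state, selects an action, receives a reward and the next state, and tries to accumulate as much reward as possible over time. The Python sketch below only illustrates that loop; the ToyEnv environment, its two states, and both policies are hypothetical and are not material from the book.

```python
import random

class ToyEnv:
    """Hypothetical two-state environment, used only to illustrate the loop."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        # Reward 1 when the action matches the current state, else 0;
        # the next state is drawn at random, so the environment is uncertain.
        reward = 1.0 if action == self.state else 0.0
        self.state = random.choice([0, 1])
        return self.state, reward

def run(policy, steps=1000):
    """Agent-environment loop: observe the state, act, collect the reward."""
    env, state, total = ToyEnv(), 0, 0.0
    for _ in range(steps):
        action = policy(state)
        state, reward = env.step(action)
        total += reward
    return total

# A policy that uses the observation earns far more total reward than one
# that ignores it.
print("random policy:  ", run(lambda s: random.choice([0, 1])))
print("informed policy:", run(lambda s: s))
```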

Like the first edition, this second edition focuses on core online learning algorithms, with the more mathematical material set off in shaded boxes. Part I covers as much of reinforcement learning as possible without going beyond the tabular case for which exact solutions can be found. Many algorithms presented in this part are new to the second edition, including UCB, Expected Sarsa, and Double Learning. Part II extends these ideas to function approximation, with new sections on such topics as artificial neural networks and the Fourier basis, and offers expanded treatment of off-policy learning and policy-gradient methods. Part III has new chapters on reinforcement learning’s relationships to psychology and neuroscience, as well as an updated case-studies chapter including AlphaGo and AlphaGo Zero, Atari game playing, and IBM Watson’s wagering strategy. The final chapter discusses the future societal impacts of reinforcement learning.
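
As a taste of one of the algorithms named above, the sketch below shows UCB (upper confidence bound) action selection on a simple multi-armed bandit, one of the tabular topics of Part I. The five arms, their Bernoulli reward probabilities, and the exploration constant c = 2 are illustrative assumptions, not values taken from the text.

```python
import math
import random

# Hypothetical 5-armed bandit: each arm pays 1 with a fixed probability
# that is unknown to the agent (values chosen here only for illustration).
true_probs = [0.1, 0.3, 0.5, 0.7, 0.9]
k = len(true_probs)

Q = [0.0] * k   # sample-average estimates of each arm's value
N = [0] * k     # number of times each arm has been pulled
c = 2.0         # exploration strength (assumed, not from the book's examples)

total_reward = 0.0
for t in range(1, 2001):
    # UCB action selection: favor arms whose estimate is high or whose
    # uncertainty is large (few pulls so far); untried arms go first.
    ucb = [Q[a] + c * math.sqrt(math.log(t) / N[a]) if N[a] > 0 else float("inf")
           for a in range(k)]
    a = max(range(k), key=lambda i: ucb[i])

    reward = 1.0 if random.random() < true_probs[a] else 0.0
    total_reward += reward

    # Incremental sample-average update of the chosen arm's estimate.
    N[a] += 1
    Q[a] += (reward - Q[a]) / N[a]

print(f"average reward: {total_reward / 2000:.3f}  pulls per arm: {N}")
```

Run a few times, the pull counts should concentrate on the higher-probability arms while the average reward approaches the best arm's payoff.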

Table of Contents

Chapter 1 Introduction

Part I Tabular Solution Methods
Chapter 2 Multi-Armed Bandits
Chapter 3 Finite Markov Decision Processes
Chapter 4 Dynamic Programming
Chapter 5 Monte Carlo Methods
Chapter 6 Temporal-Difference Learning
Chapter 7 N-Step Bootstrapping
Chapter 8 Planning And Learning With Tabular Methods

Part II Approximate Solution Methods
Chapter 9 On-Policy Prediction With Approximation
Chapter 10 On-Policy Control With Approximation
Chapter 11 *Off-Policy Methods With Approximation
Chapter 12 Eligibility Traces
Chapter 13 Policy Gradient Methods

Part III Looking Deeper
Chapter 14 Psychology
Chapter 15 Neuroscience
Chapter 16 Applications And Case Studies
Chapter 17 Frontiers
