Probabilistic Machine Learning: An Introduction
90.00 JOD
Please allow 2 – 5 weeks for delivery of this item
Description
A detailed and up-to-date introduction to machine learning, presented through the unifying lens of probabilistic modeling and Bayesian decision theory.

This book offers a detailed and up-to-date introduction to machine learning (including deep learning) through the unifying lens of probabilistic modeling and Bayesian decision theory. The book covers mathematical background (including linear algebra and optimization), basic supervised learning (including linear and logistic regression and deep neural networks), as well as more advanced topics (including transfer learning and unsupervised learning). End-of-chapter exercises allow students to apply what they have learned, and an appendix covers notation.

Probabilistic Machine Learning grew out of the author’s 2012 book, Machine Learning: A Probabilistic Perspective. More than just a simple update, this is a completely new book that reflects the dramatic developments in the field since 2012, most notably deep learning. In addition, the new book is accompanied by online Python code, using libraries such as scikit-learn, JAX, PyTorch, and TensorFlow, which can be used to reproduce nearly all the figures; this code can be run inside a web browser using cloud-based notebooks, and provides a practical complement to the theoretical topics discussed in the book. This introductory text will be followed by a sequel that covers more advanced topics, taking the same probabilistic approach.
Additional information
Weight | 1.53 kg
---|---
Dimensions | 3.23 × 21.11 × 23.5 cm
Publication City/Country | USA
Format | Hardback
Pages | 864
Year Published | 2022-03-01
ISBN-10 | 0262046822
About The Author | Kevin P. Murphy is a Research Scientist at Google in Mountain View, California, where he works on AI, machine learning, computer vision, and natural language understanding.
Other text | “The deep learning revolution has transformed the field of machine learning over the last decade. It was inspired by attempts to mimic the way the brain learns but it is grounded in basic principles of statistics, information theory, decision theory, and optimization. This book does an excellent job of explaining these principles and describes many of the ‘classical’ machine learning methods that make use of them. It also shows how the same principles can be applied in deep learning systems that contain many layers of features. This provides a coherent framework in which one can understand the relationships and tradeoffs between many different ML approaches, both old and new.”—Geoffrey Hinton, Emeritus Professor of Computer Science, University of Toronto; Engineering Fellow, Google |
Table of Contents
1 Introduction
I Foundations
2 Probability: Univariate Models
3 Probability: Multivariate Models
4 Statistics
5 Decision Theory
6 Information Theory
7 Linear Algebra
8 Optimization
II Linear Models
9 Linear Discriminant Analysis
10 Logistic Regression
11 Linear Regression
12 Generalized Linear Models *
III Deep Neural Networks
13 Neural Networks for Structured Data
14 Neural Networks for Images
15 Neural Networks for Sequences
IV Nonparametric Models
16 Exemplar-based Methods
17 Kernel Methods *
18 Trees, Forests, Bagging, and Boosting
V Beyond Supervised Learning
19 Learning with Fewer Labeled Examples
20 Dimensionality Reduction
21 Clustering
22 Recommender Systems
23 Graph Embeddings *
A Notation