The back cover states that this book is for people who want to learn statistics quickly. Before reading it, I had taken one second-year calculus-based probability course and one very non-rigorous biostatistics course, so I knew nothing of rigorous statistics. I just finished chapter 11. With this in mind, here is a pros and cons list.
Pros:
- There is definitely a lot of material. Many reviews say that this book is great as a reference, and I know some stats majors who like it for this reason.
- No prior knowledge of measure theory is assumed, and none is required.
- When proofs are given, they are usually complete, with very few holes to fill.
- The book is about 1/4 examples.
Cons:
- The material is not very well motivated; we don't really get down to the "why" behind the statistics.
- The book is rife with abuse of notation that is never explicitly introduced.
- Definitions are unclear. For example, the author defines the "size alpha Wald test" in such a way that the test is not actually of size alpha (it is only asymptotically of size alpha).
- Many results are stated without proof, even though their proofs are not necessarily difficult, just a bit technical. For example, the Neyman-Pearson lemma is not proven. Moreover, there is often no reference to other sources should the reader want to see the proof.
- Some exercises are not well-posed.
- Many topics are rushed, with the applications completely glossed over.
You will learn statistics quickly; you will not learn statistics very well.
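The reviewer's complaint about the Wald test is easy to check numerically. Here is a minimal sketch (my own illustration, not from the book): for a Bernoulli proportion with n = 20 under H0: p = 0.5, the exact finite-sample size of the nominal 5% Wald test works out to roughly 4.1%, not 5% -- the test is only asymptotically of size alpha.

```python
import math

# Exact finite-sample size of the nominal 5% Wald test for
# H0: p = 0.5 with X ~ Binomial(n = 20, p = 0.5).
# Reject when |phat - 0.5| / sqrt(phat * (1 - phat) / n) > 1.96.
n, p0, z_crit = 20, 0.5, 1.96

def rejects(k):
    phat = k / n
    if phat in (0.0, 1.0):            # se = 0: statistic blows up, reject
        return True
    se = math.sqrt(phat * (1 - phat) / n)
    return abs(phat - p0) / se > z_crit

# True size = P(reject | H0), summed over the exact binomial pmf
# (p0**n works here because p0^k * (1 - p0)^(n - k) = 0.5^n when p0 = 0.5)
size = sum(math.comb(n, k) * p0**n for k in range(n + 1) if rejects(k))
print(round(size, 4))  # about 0.0414, not the nominal 0.05
```

The rejection region turns out to be k <= 5 or k >= 15, so the exact size is about 0.0414; the nominal level is only reached in the limit.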
Really good book for theoretical statistics: up to date and complete. I'm using it for my inference class, which I just started (so my review may change as the term progresses). The content is broad and concise. The author skips the really long proofs that you can find in most books, which makes a lot of sense to me.
This book is essentially a summary of the major theoretical topics in statistics, at an introductory level. The focus is on theory, not on data analysis or modeling, but there are more connections to data analysis and modeling than is typical among books on the same topics. The main flaw in this book is not that it does anything poorly, but rather, that it omits a lot.
The book is very balanced in its coverage of different topics, its discussion of the frequentist vs. Bayesian paradigms, etc. It covers parametric and nonparametric inference, including hypothesis testing, point estimation, Bayesian inference, decision theory, regression, and even two different approaches to causal inference. The book also paints a fairly complete picture of how the different topics relate to each other and fit into a unified theoretical framework. Another huge strength of this book is that it consistently omits unnecessary technical details, including only streamlined discussions that highlight the essential points.
The main weakness of this book is that certain topics are only touched upon and not adequately explained. The first two chapters are deep enough for students to get a more or less complete understanding of the important ideas (assuming they do the exercises). But, for example, the 4th chapter, covering inequalities, is simply a collection of equations and formulas: the text explains how to use them, but not where they come from or what their intuitive interpretation is. This problem arises throughout the book, but it is most evident in chapter 4. I want to remark, however, that this problem is widespread in statistics textbooks, and this book is still less lacking in this respect than most.
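To give a concrete taste of the kind of result the inequalities chapter collects (my own example, not the book's): Chebyshev's inequality bounds P(|X - mu| >= k*sigma) by 1/k^2, and a quick check on an Exponential(1) variable shows how loose that guarantee can be.

```python
import math

# Chebyshev's inequality: P(|X - mu| >= k * sigma) <= 1 / k**2.
# For X ~ Exponential(1), mu = sigma = 1, so with k = 2 the event
# |X - 1| >= 2 is just X >= 3, whose exact probability is e**-3.
k = 2.0
exact = math.exp(-3.0)   # ~0.0498, the true tail probability
bound = 1.0 / k**2       # 0.25, Chebyshev's guarantee
print(exact <= bound)    # True: the bound holds, but is quite loose
```

The exact tail is about 0.05 while the bound only promises 0.25 -- the sort of "where does this come from and how tight is it" discussion the reviewer finds missing.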
I'm not sure this book makes the best textbook. In my opinion most students would benefit from a text that offers more explanation of the meaning and driving ideas behind the theory. However, I like the way this book gets to the main points quickly and omits confusing and tedious details and irrelevant tangents. This book may be good for students who are briefly studying statistics and will never take a future course. This book is useful as a very basic reference, but I think its best use is for self-study: advanced students will find it one of the quickest and best ways to get an overview of most of the fundamental topics in theoretical statistics.
Honestly, I think Wasserman is an outstanding writer, and part of me wishes he would expand this book to the scale of something like Casella and Berger's "Statistical Inference", covering more material and adding more discussion of certain topics, but retaining the style of being to-the-point and omitting tedious details. I think this is one of the best books of its type out there but I refrain from giving 5 stars because I think Statistics is one area where most of the 5 star books have not yet been written.
This book gives an overview of classical statistics, with an introduction to more modern methods of robust estimation and machine learning. I would say the contents are more focused on practical methods, but the author is always careful to state the necessary theorems from the underlying mathematical foundations of each method. Most of the theorems are stated without proof, although almost every chapter is followed by a short appendix giving some more technical details. Providing a proof for each theorem would take a lot of space and would detract from the applied aspects of this book. What I like is that each chapter has a nice list of references, so an interested reader could go on and explore each subject in more depth with all the mathematical details they need.
The subjects covered are a compromise between the practical side of classical statistics and the modern methods of machine learning. They include convergence, the delta method, point estimation, hypothesis testing and confidence intervals, the bootstrap, regression, nonparametric estimation, orthogonal functions, classification, graphical models, and Monte Carlo integration. There is some Bayesian estimation, but mostly the book follows a frequentist approach.
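As an illustration of one of the listed topics, here is a minimal sketch of the nonparametric bootstrap (entirely my own, not taken from the book; the data are placeholder values): a percentile interval for a sample mean.

```python
import random
import statistics

random.seed(0)

# Hypothetical sample, just for illustration
data = [random.gauss(10.0, 2.0) for _ in range(50)]

# Nonparametric bootstrap: resample with replacement many times and
# recompute the statistic of interest on each resample
B = 2000
boot_means = sorted(
    statistics.mean(random.choices(data, k=len(data))) for _ in range(B)
)

# 95% percentile interval from the sorted bootstrap distribution
lo, hi = boot_means[int(0.025 * B)], boot_means[int(0.975 * B)]
print(lo < statistics.mean(data) < hi)
```

The same resampling recipe works for statistics with no simple standard-error formula, which is what makes the method worth a chapter.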
I think that this book would be useful only for someone already familiar with classical statistics. It could serve as a good modern reference on statistics and an overview of some methods from machine learning. I do not think that this book is a good source for a first exposure to these ideas. Someone should first go through a standard statistics book, such as Casella & Berger or Bickel & Doksum. Then this book could serve as a "crossover" from that classical material to the modern methods of machine learning. After that the reader can go on to explore the machine learning literature on their own, using this book as a guide.
There are a small number of typos throughout the book. They pick up in chapter 22 on classification, where there are some typos in important equations, for example equation 22.21 on the Fisher discriminant and the formula for epsilon in theorem 22.17. But overall I had a very positive experience reading this book. It helped me review some material I had already learned, showed some new applications, and introduced some topics that I look forward to exploring further.
I've spent a lot of time with this book, reading and working through each example. I've also found problem sets and solutions on the course website for CMU's intermediate stats course, which is taught by the author and uses this text (36-705 Intermediate Statistics). Despite what some reviewers say about this book being too dry or lacking background intuition, I still think this is a good book to have if you wish to work through topics in probability all the way up to statistical inference (my goal is to understand this stuff well enough to grok the theoretical underpinnings of machine learning).

My advice for getting the most out of this book is to take it very slowly and to work your way through every example. To give you an idea of the pace: I've spent about 3 months part-time working through the first 4 chapters. I also recommend cross-referencing other material when the examples provided are insufficient to understand a topic. I've found that they are sufficient about 60-70% of the time. That is OK with me, as I don't have a hard time googling to find supporting examples or materials.

At the beginning I took it particularly slowly. The idea of random variables was hard to wrap my head around. That's OK, though: there are a ton of resources online taking different approaches to explaining the concept. And once it clicks, it's great to come back to the concise theorems of probability laid out in chapter one and continue on. If the book took the time to explain the intuition behind every concept, it would be 2000 pages long.
So this book isn't magic. You won't be able to breeze through it and understand "all of statistics" in a few weeks. But it provides a comprehensive roadmap into key topics, theorems and examples—the best I've found anywhere—and when the book is lacking in explanation or examples, there are easily googleable terms to find more.
In case anyone finds it helpful, I've collected quite a few resources on studying probability and statistics here: (...)
I find it hard to rate the quality of the book. I am from a non-mathematical background (I got no further than calculus in college), and I've been working for three years now on building math skills, especially statistical analysis and inference. I asked a fellow employee (whom I thought I could trust) for a recommendation on a good book for someone with rusty math skills who is trying to learn statistics. This was his recommendation.
This is NOT the book for that purpose. I realized on my first perusal of the book that he was being snide and sarcastic, as I subsequently learned was his custom. This book is a reference, full of complex mathematical notation, that is excellent (so far as I can determine) for reviewing concepts you have already learned and mastered. It is the worst possible choice for someone who is just starting out on learning statistics.
I can now, finally, begin to dip into this book at least in places, and follow the material. So I'm glad, in the end, that I got it. It will eventually prove useful to me.