Community Reviews
I think it's easy to criticize a book like this one. It's very simple. It presents ideas without fully showing their derivation or presenting a long list of instances in which the idea may not apply.
Right when you expect to see complexity, like in the chapter on probabilistic thinking, it gives you an anecdote and then simply stops.
It aspires to be academic, but lacks the academic's penchant for detail, footnotes, and careful situating of material within established traditions. It's the sort of undercooked, overconfident treatment that could lead you to say "well, the author can't possibly think it's that simple, can he?"
Well, that's exactly why I like the book. Parrish isn't trying to be complete, exhaustive, or nuanced. He is trying to provide a set of useful tools we can use to approach problems. He helps us understand those tools through a few relevant examples. He provides two or three useful pieces of complexity around each tool that give us a scaffold on which we can hang observations. And then he moves on. After he does this eight times, he simply ends the book. There's a ton of air and I think that's how he meant it. The air is where we're supposed to be thinking about the tools and applying them, rather than losing ourselves in the endless pursuit of a more nuanced understanding of something we aren't ever really going to apply.
I'm going to hang the list up in my office and see how often I choose to pick up one of these tools. I have a feeling that I will get more out of it than books many times the length with many more footnotes.
Here is my overview of the book:
0) Why learn mental models?
+ Life is complex.
+ In that complexity, it can be hard to see reality:
++ We have a limited perspective
++ We have a tendency to interpret things in our favor (ego)
++ We have distance from the effects of our decisions (e.g., complex organizations)
+ A small number of tools help simplify that complexity and better understand reality, given our limitations.
1) The map is not the territory
+ Maps reduce complex things to abstractions, but inherently introduce limitations. Beware of those limitations.
+ The best way to use a map is to know where it does not apply (e.g., Newtonian and quantum physics)
+ Cartographers influence maps, and maps in turn influence territories
2) Circle of competence: "A little knowledge is a dangerous thing"
+ A lifer will always understand nuance that a stranger does not
+ Humans are pattern recognition machines and as soon as we know anything, we tend to think we understand something. We only relinquish that belief when disastrous results demand it.
+ Recognize where you actually have a circle of competence (e.g., you can make quick and accurate predictions, you know what is knowable and not knowable) and maintain that circle (understand your limits, understand how things are changing)
3) First principles thinking
+ How to do it: (a) Socratic method; (b) five whys
+ Unique ways it helps: (a) identifies faulty, but widely believed, assumptions; (b) can generate a paradigm shift
4) Thought experiment
+ Steps: (a) ask a question; (b) research; (c) form a hypothesis; (d) test the hypothesis; (e) analyze outcomes; (f) compare and adjust
+ Uses: (a) reimagine history; (b) imagine the physically impossible; (c) intuit the non-intuitive
5) Second order thinking
+ Prioritize actual long-term interest (as opposed to the first thing that happens)
+ Build better arguments by fully thinking through consequences
+ However, avoid getting lost in slippery slope thinking that assumes too much on behalf of rational actors
6) Probabilistic thinking
+ Bayesian thinking - update a base rate, rather than treating new information in isolation
+ Fat tailed probabilities - are extreme outcomes more likely than in a normal distribution?
+ Asymmetries - is there a difference in the upside and downside of your estimate? Does your bias matter?
+ Anti-fragility - design systems to avoid being taken out; learn and change rather than optimizing for the one scenario you think is most likely
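The Bayesian bullet above can be made concrete with a minimal sketch (mine, not from the book): start from a base rate, then revise it with new evidence via Bayes' rule, rather than judging the evidence in isolation. The function name and the example numbers are illustrative assumptions.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule.

    prior: the base rate, P(hypothesis) before seeing the evidence.
    p_evidence_if_true: P(evidence | hypothesis is true).
    p_evidence_if_false: P(evidence | hypothesis is false).
    """
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Illustrative numbers: a test flags 90% of true cases but also 5% of
# false ones. With a 1% base rate, a positive flag is still far from
# certain -- the posterior is only about 15%.
posterior = bayes_update(prior=0.01,
                         p_evidence_if_true=0.90,
                         p_evidence_if_false=0.05)
```

The point of the sketch is the one Parrish makes: the new information (the positive flag) matters far less than intuition suggests when the base rate is low.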
7) Occam's razor
+ If you really don't know the answer, a simple theory is better
+ Simple theories make fewer assumptions and thus introduce fewer claims likely to be false
8) Hanlon's razor
+ When you see a behavior you view negatively, first assume it is a result of ignorance or laziness
+ Reserve judgments of malevolence to situations where you can observe a behavior over an extended period of time (e.g., more than a year), in a variety of situations, and ideally when you can understand the subject's intention through direct conversation