I'm not one to give an exaggerated eulogy, nor to rhapsodize about all those "books with a white cover and a weird picture" -- but I will say I read Thinking, Fast and Slow for the first time last year, after decades of resisting, and felt it covered some genuinely profound ideas that are as relevant as ever and still not widely understood.
(Though at some point, around the second half, the book drags and you can skip most of those chapters. If you don't have time even for that, I'm sure ChatGPT can give you a taste of the main premises and you can probe deeper from there.)
It’s worth noting that many of the results in Thinking, Fast and Slow didn’t hold up to replication.
It’s still very much worth reading in its own right, but now implicitly comes bundled with a game I like to call “calibrate yourself on the replication crisis”. Playing is simple: every time the book mentions a surprising result, try to guess whether it replicated. Then search online to see if you got it right.
The density of non-replicable results varies by chapter.
You can ignore anything said in chapter 4 about priming, for example.
See https://replicationindex.com/2020/12/30/a-meta-scientific-pe... for more.
A fun question especially considering the topic of the thread: are propositions that lack proof necessarily false?
No, but propositions with strong counter-evidence generally are, which is the main topic here. "Not replicable" generally means "attempted to replicate, but got results inconsistent with the original conclusion."
A very good point (I'm not sure if it's relevant to the book in question, since I haven't read it, or just to the conversation so far). It seems like many people will take a strong claim they are dubious about and, on finding the evidence sparse, inconclusive, or missing, swing to assuming the claim is false, instead of taking the more neutral position of "I have no opinion, or some reason to think it's unlikely, though others believe it even if their belief is poorly supported or unsupported."
This tendency seems to be capitalized on fairly heavily in political media by finding some poorly supported assertion of the other side to criticize, which causes people to assume the opposite is true.
I'll have you know you just nearly nerd sniped a mathematician ;-)
This is exactly the kind of task that I want to deploy a long context window model on: "rewrite Thinking Fast and Slow taking into account the current state of research. Oh, and do it in the voice, style and structure of Tim Urban complete with crappy stick figure drawings."
Awesome prompt!
Same! Just earlier today I was wanting to do this with "The Moral Animal" and "Guns, Germs, and Steel."
It's probably the AI thing I'm most excited about, and I suspect we're not far from that, although I'm betting the copyright battles are the primary obstacle to such a future at this point.
I would actually like to have books that had "Thinking Fast and Slow" as a prerequisite. Many data visualization books could be summed up as: a bar chart is easily consumed by System 1; visual noise creates mental strain on System 2.
What's wild to me is that anyone could read chapter 4 and not look up the original papers in disbelief.
Long before the controversy was public, I was reading that book and, despite its insistence that the reader must believe the findings, it sounded like nonsense to me. So I looked up the original paper to see what the experimental setup was, and it was unquestionably a ridiculous conclusion to draw from a deeply flawed experiment.
I still never understood how that chapter got through without anyone else having the same reaction I did.
I had exactly this reaction to Malcolm Gladwell. It is completely obvious that Gladwell, across multiple books, has never once read one of his references and consistently misrepresents what they say.
In those times, that was exactly the kind of thing that people wanted to believe.
Is there a 'Thinking Fast and Slow: The Reproducible Bits' recut? I know films get fan-made edits.
We need O'Reilly: The Good Parts for books...
Yeah, I wouldn't read too much into any single study. But what I would defend vigorously is the System 1 / System 2 distinction, as something so clear and fundamental that you see it constantly once you understand it.
It's been called "emotion / intuition" and "logic" for centuries or millennia before the goofy System name was invented.
Ironically people like System 1/2 more than intuition/logic because the terms sound more like they are coined by System 2.
It’s also very common in psychology theories. I haven’t read “Thinking, Fast and Slow”, but I imagine more than Kahneman’s own papers are cited: https://en.wikipedia.org/wiki/Dual_process_theory
wow, it looks like "dual process" theory is basically the same thing.
I don't know if there's a better text on dual-process theory out there (perhaps by the original authors), but regardless of who originated it, I think it's something worth learning about for everyone (and if you don't have a better source then Thinking Fast and Slow is a very good one).
In software we often call it fastpath and slowpath :)
It's just such a bad name though.
That's not him though.
Like, it was in all my cog psych textbooks more than twenty years ago, with cites back in the 80s (which weren't him).
This is my favourite paper of theirs: http://stats.org.uk/statistical-inference/TverskyKahneman197...
I got into a bunch of trouble with some reviewers of my thesis for referencing this repeatedly.
... except the distinction was being made in various forms long before Kahneman, and does get questioned. When you start to poke at it, what's intuitive starts to seem less so.
https://journals.sagepub.com/doi/full/10.1177/17456916124606...
(that's a link to a defense of dual process theories, but it makes clear there's increasing criticism of them)
I wonder if it's better to have a lot of small hits or a few big hits and many misses in regard to replication. If the studies which have the greatest implications replicate, then maybe many misses is not that bad.
That's an interesting theoretical question.
Unfortunately the reality is that the more interesting and quotable the result is, the less likely it is to replicate. So replication problems most strongly hit things that seem like they should have the greatest implications.
Kind of a "worst of all worlds" scenario.
And critically, scientific publications are incentivized likewise to publish the most outlandish claims they can possibly get away with. That same incentive affects individual scientists, who disproportionately submit the most outlandish claims they can possibly get away with. The boring stuff -- true or not -- is not worth the hassle of putting into a publishable article.
And then the most outlandish of these are picked up by popular science writers. Who proceed to mangle it beyond recognition, and add a random spin. This then goes to the general public.
Some believe the resulting garbage. And wind up with weird ideas.
Others use that garbage to justify throwing out everything that scientists say. And then double down on random conspiracy theories, denialism, and pseudoscience.
I wish there was a solution to this. But everyone is actually following their incentives here. :-(
The scientists push it on the pop writers, to create a Personal Brand and an industrial complex around their pet theory.
psychology isn’t science. it’s a grave mistake to read/interpret it as such. does that mean it’s useless? of course not: some of the findings (and i use findings very loosely) help us adjust our prior probabilities. if we’re right in the end, we were lucky. otherwise we just weren’t.
That's an unexpected position for me.
How do you define science? Could it be a science, according to you, or is there something fundamentally non-scientific about it?
it’s fundamentally unscientific at this point. much of our current science lies in the realm of natural law. so far we haven’t found any laws that govern human behavior. what we know, with considerable certainty, is that behavior can be positively influenced. but at the point of action, nothing we know of compels any specific/predictable behavior. until we have found rigid laws of reason that apply to both the brute and the civilized, any ‘discoveries’ of psychology are reports of someone’s idiosyncrasies, imho.
For what it's worth, Kahneman answered a post that scrutinized the effect of priming: https://replicationindex.com/2017/02/02/reconstruction-of-a-...
Thanks for sharing this -- I read the book maybe a decade ago and largely discounted it as non-replicable pop-sci; this changed my opinion of Kahneman's perspective and rigor (for the better!)
The general idea is very simple. Tactical vs strategic thinking are two different things, and it’s good to be aware of that. I don’t know that that needs to be proven or disproven.
The 19th-century definition of tactics as everything that happens within the range of cannons, and strategy as everything that happens outside cannon range, fits well with thinking fast (tactics) and slow (strategy).
Irony is, Kahneman had himself written a paper warning about generalizing from studies with small sample sizes:
"Suppose you have run an experiment on 20 subjects, and have obtained a significant result which confirms your theory (z = 2.23, p < .05, two-tailed). You now have cause to run an additional group of 10 subjects. What do you think the probability is that the results will be significant, by a one-tailed test, separately for this group?"
"Apparently, most psychologists have an exaggerated belief in the likelihood of successfully replicating an obtained finding. The sources of such beliefs, and their consequences for the conduct of scientific inquiry, are what this paper is about."
Then 40 years later, he fell into the same trap. He became one of the "most psychologists".
http://stats.org.uk/statistical-inference/TverskyKahneman197...
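The quiz answer can be worked out with a quick power calculation. A sketch, under the simplifying assumption that the test statistic scales as z = d·√n, so the observed z = 2.23 at n = 20 gives a naive effect-size estimate that we carry over to the fresh group of 10:

```python
from math import sqrt
from statistics import NormalDist

N = NormalDist()  # standard normal

# Observed: z = 2.23 with n = 20 -> naive effect-size estimate
d_hat = 2.23 / sqrt(20)          # ~0.50, assuming z = d * sqrt(n)

# Expected z for a fresh group of n = 10
z_expected = d_hat * sqrt(10)    # ~1.58

# Power of a one-tailed test at p < .05 (critical z ~1.645)
z_crit = N.inv_cdf(0.95)
power = 1 - N.cdf(z_crit - z_expected)
print(round(power, 2))           # ~0.47, roughly a coin flip
```

Under these assumptions the replication probability comes out just under a coin flip, far below the confident intuitions the paper was criticizing.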
His own work held up very well to replication. It's the work of other scholars he cites (in particular, that of social psychologists) that doesn't hold up well.
“When I describe priming studies to audiences, the reaction is often disbelief . . . The idea you should focus on, however, is that disbelief is not an option. The results are not made up, nor are they statistical flukes. You have no choice but to accept that the major conclusions of these studies are true.”
I find that most nonfiction books follow a common structure:
* 1st third of the book: Lays out the basic ideas, gives several examples
* 2nd third of the book: More examples that repeat the themes from the 1st part
* 3rd third of the book: ??? I usually give up at this point
I sometimes wish that more books were like "The Mom Test" - just long enough to say what they need, even if that makes for a short book.
Because most nonfiction books are one relatively small set of ideas (profound or not, novel or not) that could be concisely written as a few blog posts or a single long-form article, but get stretched into a full book in order to monetize and build the author’s brand. It is really painful, and something I hope GPT will help the rest of us cut through in the future (“Summarize the points. Dig into that point. What evidence is given?” etc., as a conversation, rather than wasting 30 hours reading noise).
That's exacerbating the original environmental problem: in addition to thick paper books filled with filler material just to promote the author's brand, you now want to waste electricity running an LLM to give you the short version? That's... short-sighted.
This should be dealt with by pressuring the publishing industry not to inflate books and fill them with fluff. That could be done by not buying these kinds of books and by publicly shaming publishers who engage in this behavior. It's easier with nonfiction, since the amount of fluff in fiction is a more subjective matter.
Except that a lot of people read these books for the entertainment value of the anecdotes, and a lot of people enjoy feeling self-important for having read long books.
Most people don’t absorb concepts immediately with only a simple explanation.
Unless you are reading a topic you are already familiar with, reinforcement of an idea helps you to examine a concept from different angles and to solidify what is being discussed.
If everyone fully absorbed and understood everything they read, schooling could be completed years in advance.
You can just leverage the “second brain” crowd — for every vaguely well-known non-fiction book someone has written up a summary for themselves and posted it on their blog.
Well, the life of most people on earth is just the same boring repetitions with a few novel events. So I wonder what people would do with the ample time saved thanks to ChatGPT. Perhaps write another JavaScript framework or launch new food delivery apps, besides raging on social media.
most nonfiction could be well summarized as a lengthy blog post
I'd go further: many non-fiction books could be losslessly compressed into a tweet.
(Looking at you, The Checklist Manifesto)
Reading a book, say 10 hours, is like a meditation on an idea: you get numerous examples of it and a variety of ways of thinking about it.
Our brains learn best when they encounter something often across time (spaced repetition).
Reading a single tweet may summarize the book, but the chances of you recalling the idea in an appropriate situation is much lower than if you had spent hours on it.
Mainly a business and self-help “airport book” problem. Sometimes pop-science.
Counterexample: Gilles Deleuze's "Empiricism and Subjectivity".
Let me summarize: the highest-scored comment on Hacker News on the death of Daniel Kahneman says the second half of his book can be replaced by an automated plagiarism machine.
Y'all are hopeless and deserve what's coming for you. The only problem is, I will also be buried under it and so will everyone else but that can't be helped, it seems.
Old man yells at clouds
I've tried to read this book over and over again to understand what everyone is talking about, but never found the insights that useful in practice. Like, what have you been able to apply these insights to? What good is it to know that we have a slow mode of thinking and a fast one? Genuine question.
It’s the only one of those books I still don’t regret telling people “it’s good” a decade later (with a couple caveats)
The Undoing Project is a solid read about his life and work too.