The limits of first principles thinking, and why Descartes is an a**hole

First principles – the idea that, from a basic set of simple ‘known’ truths, we can build a true understanding of a complex topic – has been kicking around in western philosophy for a while. It’s a pretty inviting idea: in a world filled with irrational beliefs, it seems like a nice way of building an unbiased understanding of objective truths. It has been given a special place in the EA movement (William MacAskill, one of the movement’s main founders, particularly sings its praises), but I’ve been worrying a lot about its implications recently. In short, I think it’s dangerous: it may work well for simple topics, but for the complex issues that EA works with, I believe it only serves to blind us to our biases.

Let’s go back to one of the earlier, more famous examples of this kind of thinking: Descartes. In the Meditations, Descartes famously feared that his belief in Catholicism might be incorrect, and so he attempted to rebuild an understanding of his faith from first principles, believing that only by building up from a foundation of certain truths could he arrive at a firm understanding of the truth. This is, in theory, a great idea – and so Descartes started with but one simple truth: “I think, therefore I am”, and the ensuing proposition “I am, I exist”. From this one simple truth as his foundation, over the course of dozens of incredibly boring pages, he somehow manages to conclude that his original belief (Catholicism) is 100% correct. Convenient, right?

This anecdote reveals a major problem with first principles thinking. In theory, the benefit of reasoning from first principles is that you can use logic to reach the truth without letting your biases or preconceived notions come into play. It doesn’t usually work out that way, though. Descartes’ case is a really nice example: it’s pretty clear that a lot of his preconceived notions and biases slipped in along the way – if reasoning from first principles magically validates your original beliefs, I think it’s safe to say that you messed up somewhere.

The worst part is that Descartes came away from this mental exercise with a newfound certainty that his beliefs were rational and true – and from then on he could point to this ‘rational’ proof that verified his position. From an outsider’s perspective, it’s incredibly clear that his biases lie just beneath this veneer of ‘rationality’, but now Descartes can pretend those biases don’t exist, because he fooled himself into thinking that he could think in an unbiased way.

This is my worry about first principles thinking in the EA movement. As humans, our thought processes are naturally biased in one way or another, and no mental or philosophical trick can fix that. We should strive to think in as unbiased a way as possible, but we should never be content to believe that we’ve actually reached an unbiased conclusion – first principles thinking and other rationalist methods can help mitigate bias, but by believing that they lead to fully unbiased conclusions, we risk blinding ourselves to the bias that will inevitably creep in.

Most importantly, when we completely blind ourselves to the bias in our thoughts, we also lose our ability to integrate and respond to criticism – and this is what has the strongest implications for EA. Let’s assume for a moment (I’ll probably expand on this assumption in a future post) that this problem – ‘first principles’ thinking gilding over our biases and blinding us to them – applies not just to first principles thinking, but to the full gamut of epistemologies considered ‘rational’ within the EA movement. Plenty of people have criticized the thoughts and ideals of various Effective Altruists, often from outside the framework of what EAs consider ‘rational’ argument. If we dismiss these criticisms out of hand because they do not conform to our vision of ‘rational’ thought, we put ourselves in a dangerous and insular intellectual bubble. The topics that EA contends with are simply far too complex to be dealt with through our narrow epistemologies alone – we need the help of our critics to root out the biases in our thinking and to integrate information that lies beyond the reach of our concepts of rationality.

In short, I do not believe that we can receive this help by forcing our critics to play our game (“yes, that’s an interesting point, but can you put it in terms of a first-principles-based cost/benefit analysis?”). We need to be ready to relax our frameworks and expand beyond narrow concepts of ‘rationality’ – only then can we even begin to be honest with ourselves about where our biases play into our thought processes. EA has some serious ideological and demographic diversity issues, and it’s not beyond reason to assume that the biases we share are reinforcing each other under the surface of our ‘rational’ thought.
