X-Risks and RCTs: Opposite ends of a spectrum?

I’ve been reading Caroline Fiennes’ book “It Ain’t What You Give, It’s the Way That You Give It” (which I highly recommend!), and I came across a very interesting chart that has gotten me thinking about the spectrum of activities in the EA movement (especially X-risks). The chart, originally developed by New Philanthropy Capital, focuses on the implications of the scope of a charity’s work. I’ve adapted it a bit (i.e. just changed the examples on the left) so that it applies more readily to EA concerns:

The Pyramid of Charitable Work: Certainty, Return, and Attribution

[Image: the scope pyramid]

Essentially, at the top of the pyramid, you have direct support programs – things like direct provision of medicine, cash, etc. For most of these programs, you can be pretty sure they work, they have reliable returns, and they’re pretty easy to test via evaluation. Importantly, it’s also relatively easy to compare their effectiveness. As you go down the pyramid, you get further removed from the individual and focus more on the systems that affect individuals. Efforts closer to the base of the pyramid are more uncertain (how do you know you’ll be able to change that law, or norm, etc.?) and are often very difficult to test, but since they focus on the systems level, they tend to have large and sustainable impacts when they succeed. Work further down the pyramid is often high-risk, high-reward.

The original bread-and-butter of EA is the set of programs that are most testable and whose effectiveness is easiest to estimate – these are concentrated overwhelmingly at the top of the pyramid. Some in EA mention the importance of changing laws and other system-level changes – and the idea has started to get a lot of traction lately, such as at EA Global 2016 – but it’s not yet given quite as much attention as more testable work. As such, system-level work is sometimes neglected in EA discourse (Fiennes argues that system-level work is neglected among donors broadly – something that EAs arguing for system-level work have noticed as well).

If we go really far down the pyramid, we get X-risks (which weren’t in the original chart, but they fit the metaphor pretty well). Work on X-risks is hugely uncertain – there’s a very high chance that the work will amount to nothing. But when these efforts do succeed, the returns are enormous – saving the human race, basically. They are also the least testable, for obvious reasons. X-risks are portrayed within the EA community as high-risk, high-return, and neglected – perfect for investment by EAs.

However, EA seems to be bifurcating, which is worrying – the main discourse concentrates either on very testable work (the very top of the pyramid) or on very high-risk, high-return work (the very bottom) – but what about the rest of the pyramid? Work focused on changing laws and norms is getting some traction, but what about charitable work that is just a little more difficult to evaluate because it goes beyond the individual level – work at the community level, say, or work with diffuse and varied effects? Examples could include governance work, efforts to change norms surrounding violence in homes or schools, and so on. These are important, and they are also often neglected.

So how would we attempt to fill out the middle of the pyramid? One way could be to link up with other social movements aiming for systems-level change (something I’ve argued for in this post); another is to explore new ways of valuing programs that aren’t amenable to normal evaluation methods (such as those with diffuse and varied impacts). Either way, it’s something we need to explore more in the movement, especially if the evaluation and logical tools we use prove to be insufficient for work in the middle of the pyramid.


Against moral and ideological purity

I just finished reading Strangers Drowning (which I would definitely recommend), and couldn’t help but notice the theme of ‘purity’ that played into the thoughts of the ‘do-gooders’ in the book. Clean, simple, and pure moral schemas and ideologies are well respected among Effective Altruists – how else can you avoid irrational thinking and hypocrisy? I’m not too convinced, though. I think ‘pure’ morality or ideology attempts to oversimplify things that are, by nature, too complex to be simplified, and only puts us in a position to feel overconfident in opinions that we believe, often erroneously, to be ‘rational’ (similar to the thoughts I touched on briefly in a previous post about ‘first principles’ thinking).

So you’re a staunch consequentialist or deontologist or utilitarian or whatever. Great. What does this mean for your messy daily life? Let’s say you found out some dirty little secret – what do you do? How do you know whether it’s right to, say, tell someone’s significant other about their affair? Or call the cops on a friend’s or relative’s crime? Or try to have an unstable friend or relative committed? Or how do you intervene when someone you know might be the victim of abuse? Note that the uncertainty makes it difficult to use a lot of the clearer-cut moral schemas – how can you rely on consequentialism or utilitarianism when you don’t know for sure what impact your actions or inactions will have?

There are so many different moral schemas that can come into play in these situations, some formal, some less so. In the end, though, you need to make a value judgment – maybe one moral system says “generally things work out for the best if you let people make their own decisions”, or “generally things work out if transgressors are punished”, or “generally things work out if you protect your own and don’t bring in outsiders”. Each moral schema is right a certain fraction of the time, and it’s up to you to determine which action, informed by these schemas, is most likely to be the ‘correct’ one. This judgment is something we build over time, not something we can quickly learn from a book: no moral schema can lead you to the right decision 100% of the time, especially in cases where uncertainty exists.

So, what does this mean for EA? A lot of the ‘popular’ moral positions in EA might be good for the questions that concern EA, but they don’t really tell us how to be good people to those around us. As a result, I fear that this aspect is undervalued by a lot of EAs, especially younger and more individualistic ones. I’m sorry, but all the philosophy books in the world are not going to tell you how to be a good significant other, or sibling, or friend, or parent. Strict, pure, clear-cut morals or ideologies are antithetical to the messiness involved in being around, interacting with, and loving other average human beings.

There’s one specific example from Strangers Drowning that stood out to me as a clear illustration of this issue: an animal rights activist who dedicated every possible second to saving lives. When his girlfriend at the time would ask him to help clean up, he would refuse, stating that “time spent washing dishes could be time spent working for animal rights”. From a utilitarian standpoint, this sort of thinking makes sense, but it’s incredibly problematic from human (especially feminist) standpoints. While this is an extreme example, smaller versions of it are incredibly common – blowing off time with friends and family in order to spend more time working, for instance. I worry that something is lost from living this way.

Now, of course, this is where some people will say, “but it’s bad to put those close to us on a moral pedestal! All humans are equal, and prioritizing personal relationships distracts from this fact.” This is true in part, but I firmly believe that something is gained through personal relationships that cannot be gained from philosophy or rationality. The messiness of human interactions is something you have to understand if you are going to work effectively within or alongside human structures and relationships. If all you understand is overly simplistic, clear-cut philosophies and tools, you’re going to have a hard time making any positive change in the world.

Note: After publishing this post, I realized that Nick Bostrom took this topic on in a post from 2009 about the idea of a ‘Moral Parliament’. For those interested in a different and more formal exploration of the ‘moral purity’ issue in EA, it’s definitely worth a look.

Are traditional social movements overlooked in Effective Altruism?

There are a lot of social movements against marginalization going on in the US today: Black Lives Matter, various feminisms, LGBTQ movements, etc. I’ve seen some discussions among EAs about the marginal benefit of participating in some of these movements, and I’ve noticed that most of them revolve around the societal outputs of participation – what is the marginal societal benefit of participating, are the movements effective, what could we do to help, is it ‘worth’ my time, and so on – but there’s very little talk about how EAs themselves can benefit from participating.

Overall, I think there’s something gained through participating in and learning from traditional social movements that cannot be learned elsewhere. In particular, I think it can help with the EA movement’s diversity issues while simultaneously improving the effectiveness of Effective Altruists themselves. The EA movement consists overwhelmingly of white, educated males with certain ideological predilections (e.g. utilitarianism). What if the diversity issue is partially caused by a collective tone-deafness to problems that cannot be solved by EA, such as the problems of oppression? Utilitarian ethics and the like can tell us that everyone is equal, but systems of oppression are, by definition, complex social systems; the main tools used by EA are simply inadequate to address oppression, and we’re putting ourselves in a pretty awkward position by ignoring other toolsets and other activists.

So, what do we do? My suggestion is that more EAs participate in and learn from other activists who are specifically fighting against systems of oppression. The difficult part is taking a back seat – try to simply listen and learn, and draw as little attention to yourself as possible. I think it’s safe to say that a lot of EAs are very trigger-happy about helping and applying their knowledge to problems, but traditional social movements are complex – any ‘simple’ solutions you come up with using the tools you bring to the table (e.g. thinking on the margin, cost analysis, philosophy, etc.) are most likely going to be completely incorrect. You should not participate in order to ‘fix’ the social movement; you should participate to ‘fix’ yourself and your ways of thinking.

If you want simple, low-energy entry points, maybe just keep an open mind and mill around on some pop-activist websites, like Everyday Feminism, Laci Green, or Feministing. Maybe ask friends in these movements for learning materials. Or maybe read some seminal texts on systems of oppression, by writers like Audre Lorde, Frantz Fanon, Vijay Prashad, or Paulo Freire, or newer, more accessible texts by Chimamanda Ngozi Adichie (also, watch her TED talks!), Roxane Gay, or Ta-Nehisi Coates. At the end of the day, just make an active effort to learn and grow – the benefits may be greater than you’d think.

The limits of first principles thinking, and why Descartes is an a**hole

First principles thinking – the idea that, from a basic set of simple ‘known’ truths, we can build a true understanding of a complex topic – has been kicking around in western philosophy for a while. It’s a pretty inviting idea: in a world filled with irrational beliefs, it seems like a nice way of building an unbiased understanding of objective truths. It’s been given a special place in the EA movement (William MacAskill, one of the main founders of the movement, particularly sings its praises), but I’ve been worrying a lot about its implications recently. In short, I think it’s dangerous: it may work well for simple topics, but for the complex issues that EA works on, I believe it only serves to blind us to our biases.

Let’s go back to one of the early, more famous examples of this kind of thinking: Descartes. In the Meditations, Descartes famously feared that his belief in Catholicism might be incorrect, and so he attempted to rebuild an understanding of his faith by starting from first principles, believing that only from a firm foundation could he build a strong understanding of the truth. This is, in theory, a great idea – and so Descartes started with but one simple truth: “I think, therefore I am”, and the ensuing proposition “I am, I exist”. From this one simple truth as his foundation, he, over the course of dozens of incredibly boring pages, somehow manages to conclude that his original belief (Catholicism) is 100% correct. Convenient, right?

This anecdote reveals a major problem with first principles thinking: theoretically, the benefit of reasoning from first principles is that you can use logic to arrive at truth without letting your biases or preconceived notions come into play. It doesn’t usually work out this way, though. Descartes’ case is a really nice example: it’s pretty clear that a lot of his preconceived notions and biases slipped in along the way – if reasoning from first principles magically validates your original beliefs, I think it’s safe to say that you messed up somewhere.

The worst part is that Descartes came away from this mental exercise with a newfound certainty that his beliefs were rational and true – and he could from then on point to this ‘rational’ proof that verified his position. From an outsider’s perspective, it’s incredibly clear that his biases lie just beneath this false veneer of ‘rationality’, but now Descartes could pretend that those biases didn’t exist, because he had fooled himself into thinking that he could think in an unbiased way.

This is my worry for first principles thinking in the EA movement. As humans, our thought processes are naturally biased in one way or another, and no mental or philosophical trick can fix that. We should strive to think in as unbiased a way as possible, but we should never be content to believe that we’ve reached an unbiased conclusion – first principles thinking and other rationalist methods can help mitigate bias, but by believing that they lead to fully unbiased conclusions, we risk blinding ourselves to the bias that will inevitably creep into our conclusions.

Most importantly, when we completely blind ourselves to the bias in our thoughts, we also lose our ability to integrate and respond to criticism – and this is what has the strongest implications for EA. Let’s assume, for a moment (I’ll probably expand on this assumption in a future post), that this issue of ‘first principles’ thinking gilding over our biases and blinding us to them applies not just to ‘first principles’ thinking, but to the full gamut of epistemologies considered ‘rational’ in the EA movement. Plenty of people have criticized the thoughts and ideals of various Effective Altruists, often from outside the framework of what EAs consider ‘rational’ argument. If we dismiss these criticisms offhand because they do not conform to our vision of ‘rational’ thought, we put ourselves in a dangerous and insular intellectual bubble. The topics that EA contends with are simply far too complex to be dealt with through our narrow epistemologies alone – we need the help of our critics to root out the biases in our thinking and to integrate information that lies beyond the reach of our concepts of rationality.

In short, I do not believe that we can receive this help by forcing our critics to play our game (“yes, that’s an interesting point, but can you put it in terms of a first-principles-based cost/benefit analysis?”); we need to be ready to relax our frameworks and expand beyond narrow concepts of ‘rationality’ – only then will we be able to even begin to be honest with ourselves about where our biases play into our thought processes. EA has some extreme ideological and demographic diversity issues, and it’s not beyond reason to assume that the biases we share are reinforcing each other under the surface of our ‘rational’ thought.

Problems with shrinking your identity

Note: This post originated as a long email to Scott Weathers about his post “The Limits of Ideology and Identity in Social Change”, which makes reference to the post about Keeping your Identity Small. At Scott’s suggestion, I’ve turned it into a blog post. It’s very long, which I apologize for, but it’s an important and complex topic, and I feel like shortening it any further would be a disservice.

The short takeaways, since the full post is way too long:

Even though it was originally posted in 2009, I only very recently learned about Paul Graham’s post about keeping your identity small, which has been making the rounds in the larger Effective Altruism (EA) community for a while now. In short, Graham, a computer scientist, argues that when an opinion or belief of ours becomes part of our identity (e.g. our political identity, religious identity, etc.), we tend to stop questioning the validity of the opinion and instead focus on shoving it down other people’s throats. His conclusion is that, in order to keep our opinions as non-ideological as possible, we should shrink our identities as much as possible.

While I agree that we should be less ideological (life’s too complicated to be sure of anything), the way forward that Graham suggests is dangerous. There are hundreds of years of literature on the intersection of identity, privilege, and oppression that this premise ignores completely, and the message of that literature is clear: identity can be a powerful tool for social change. People with marginalized identities (e.g. female, black, LGBT) cannot afford to ‘shrink their identity’, given that it is a source of power in fighting against oppression – historically, various versions of ‘shrinking of identity’ have been used to silence and oppress marginalized groups. When it comes to people with privileged identities (e.g. male, white, heteronormative), it’s a bit different, but my fear is that shrinking our perceptions of our own privileged identities only serves to blind us to the biases that those identities have placed in our thought processes from a young age.

When the members of a large group overwhelmingly share privileged demographic characteristics (like Effective Altruism, which is overwhelmingly white, urban, male, and educated), my fear is that this micro-level problem can become a macro-level problem. Are there biases that the majority of Effective Altruists share that we’re collectively blinding ourselves to? It’s difficult to tell without a more open conversation about privilege and identity within the movement, which is why pushing this conversation further (instead of retreating from it, which is what I fear ‘keeping identity small’ does) is absolutely necessary to ensure that the movement is as effective as possible at doing good in this world.

And now, for those of you with the stamina to read more, here’s my logic:

Identity has played a huge role in multiple social movements, uniting disparate people in a common cause through a shared recognition that a certain type of identity is being sidelined in the larger socio-economic-political context: black lives in the civil rights movement and Black Lives Matter, LGBTQ lives in the LGBTQ rights movement, women in multiple women’s rights movements globally, non-human animal lives in the animal rights movement (the entire phrase ‘non-human animals’ is an identity-based sticking point), and national identities in various political conflicts. (Keep in mind that not all nationalism is bad – in multiple postcolonial states where ethnic conflict exists, nationalism – or pan-African or ‘postcolonial’ identity, for that matter – can, even in limited amounts, help unite people in positive ways; the ongoing ‘this flag’ movement in Zimbabwe is an interesting case.)

Now, of course, in each of these movements there were also people with different, intersecting identities who got the short end of the stick: many women in the civil rights movement were silenced in the name of ‘black unity’ (Audre Lorde is a good source on this); even today, bisexual people have trouble finding a place in the LGBTQ movement (example); genderqueer women, transsexual women, non-hetero women, and women of color are consistently sidelined in women’s rights movements (Lorde again – she’s amazing); and national identities can be used to silence dissenters of any type. In the best of cases, this meant silencing the concerns of many members of the movement; in the worst of cases, the ensuing dehumanization (“you are a danger to the cause!”) led to violence.

In each of these cases, though, the issue was not identity itself – it was the erasure of identity. People, in trying to build a clean and well-defined identity of any type, tend to silence the voices of those who differ from them but share the ‘cause’ identity. This is a very well documented problem. Probably the clearest writing on it comes from ‘bisexual erasure’ in the LGBTQ movement, if you want a quick case study, but it’s a very common problem overall. Another topical example would be the erasure of non-white, non-male voices in the Bernie or Occupy movements.

In these contexts, playing up the idea of ‘shrinking your identity’ only worsens the problem – it enables erasure. When members of each of these movements attempted to ‘shrink’ the identities of other members of the movement, they erased or silenced portions of those members’ identities, often with disastrous results. Many others, with whom I agree, instead believe that celebrating diversity and intersectionality within identities is the way forward – instead of simplifying an entire identity into one experience (e.g. the ‘black’ experience, the ‘female’ experience, the ‘gay’ experience), we need to celebrate the commonalities and differences within and across identities (e.g. the ‘black’ experiences are diverse but share commonalities, the ‘female’ experiences are diverse but share commonalities, and so on). Unfortunately, nuance and diversity are never easy sells, but activists are making inroads in this regard (slowly, unfortunately).

Now, of course, you’ll note that all of the above examples are about marginalized identities – what about privileged identities? Graham and I are both white, male, educated, cis-gender, heterosexual, overall society-conforming American individuals. The identities we were born into, the ones that shaped us from a very young age, are identities with bloody and horrific histories. We benefit from those histories – the system is rigged in our favor. The question is: how do we deal with having identities that are antithetical, in one way or another, to our values (one such value being, for example, that oppression is bad – duh)?

A lot of people in our situation attempt to distance themselves from their identities. There are two general ways I’ve seen people do this. The first is leaning into a non-privileged identity they have – e.g., if you are black but have male privilege, ignoring gender while focusing on race; if you are female but have white privilege, ignoring race while focusing on gender. The cognitive dissonance involved (being privileged in one way while knowing the pain that a lack of privilege creates) often leads to the aforementioned kinds of erasure – ‘what do you mean my white privilege is clouding my judgment? I’m not privileged, because I’m [female, poor, LGBTQ, etc.] – that’s what we’re working toward!’ My least favorite person I’ve met with this line of thought was a rich white guy who was bisexual, who would constantly say racist shit and then get defensive when people tried to call him out on it, because ‘we’re all in the fight against privilege together’ – but there are many less egregious examples as well.

The second way of running from a privileged identity is the one most relevant to Graham’s discussion of ‘shrinking your identity’, which I like to call the ‘citizen of the world’ or ‘human identity’ method. Basically, the idea is that we should remove, as much as possible, the influence that our problematic identities have on us, and try to build solidarity with all individuals of the world (or only with marginalized individuals – there are multiple flavors of this method). Building solidarity and minimizing the negative impacts of our problematic identities is great, but I don’t think that downplaying the role these identities play in our lives is the best way forward. You can minimize the impacts without denying the role these identities have on you, and, conversely, you can downplay your problematic identities without minimizing the negative impact they have on your thought processes. In fact, downplaying your negative identities may make it harder to identify those negative impacts – in its worst iterations, it becomes ‘colorblindness’ or ‘gender blindness’ (“I don’t see race, I just see people!”), which, as we should all know by now, doesn’t really work out that well.

For the sake of transparency, and for the sake of making it easier to root out problematic thoughts, those of us with privileged identities need to constantly remind ourselves that everything we do, everything we think, everything we are, is painted in some way or another by our problematic identities, bloody history and all. We cannot forget this – forgetting does a disservice to ourselves and to those around us. We should instead strive to work with these bloody, horrible, privileged identities to help ourselves, and others who share those identities, see the privilege and the history, and to work toward fixing the problems of the identity (see this post on being an ally for some more details). I do not believe this can be done by pretending the blood isn’t there or that we can somehow walk away from the identity – we have to get our hands dirty.

This is not something I am saying from a purely philosophical angle; it is something I have struggled with in my own life. After years of running from my privileged identities, I finally faced my fears and tried to make peace with them – only by actively leaning in to my male, white, etc. identities was I able to identify, and root out, the ways that these identities poisoned my thinking. Coming to terms with these identities, and the effect they have on me, greatly improved my ability to root out the problems associated with them, improving my effectiveness broadly, as well as my personal mental/emotional health and my personal relationships.

In short, I think leaning in to the importance of identity, as a way of clearly thinking through how our thought-processes (and the thought-processes of those around us) are shaped by past experiences and larger socio-political-economic contexts, is necessary in order to help us root out bias from our perceptions and become more effective in the world. It’s difficult, painful, and messy, but it’s worth it.