[content warning: suicide, human extinction. Might be a bad idea to listen to if you suffer from suicidal tendencies.]
We speak to Jason, who would take extreme measures to end all suffering. However, lacking extreme powers, he recommends doing good as efficiently as possible, and offers these resources:
GiveWell – researches charities for effectiveness.
80,000 Hours – have a greater social impact with your career.
He also provided us with a link to the World Happiness Report
Other things mentioned in this episode:
Jeremy Bentham – founder of utilitarianism. Steven was right – Bentham advocated for animal rights from the very beginning.
Utility Monsters (also in comic form)
There are a few responses to Utility Monsterism; here’s a link with some quick lay-person summaries. But my favorite is PhilGoetz, who argues that humans are utility monsters, so it’s kinda a non-issue – we’ve all decided we’re cool with Utility Monsterism.
VHEMT – the voluntary human extinction movement
The Ones Who Walk Away From Omelas (pdf) – short story by Ursula K. Le Guin
The Transdimensional Justice Monster
Scott Alexander’s retelling of the final act of Job (and sadly, according to comments there, he wasn’t the first to think of it – a Christian philosopher beat him to it by a few years. Still, Scott’s version is FAR more entertaining!)
The Hedonic Treadmill – Not that Treadmilly? (Ctrl-F for “Hedonic Adaptation” to get to the relevant part in the linked page)
Turns out, the idea that people adapt to negative events and return to their previous set-point is a myth. Although the exact effects vary depending on the bad event, disability, divorce, loss of a partner, and unemployment all have long-term negative effects on happiness. Unemployment is the one event that can be undone relatively easily, but the effects persist even after people become reemployed. I’m only citing four studies here, but a meta-analysis of the literature shows that the results are robust across existing studies.
The same thing applies to positive events. While it’s “common knowledge” that winning the lottery doesn’t make people happier, it turns out that isn’t true, either.
Peter Singer’s essays “Famine, Affluence, and Morality” and “All Animals Are Equal” (pdf)
Also very famous among Rationalists is Singer’s book “The Life You Can Save”
80,000 Hours states that being a tobacco CEO causes so much harm that it cannot be offset by donating all of one’s earnings to altruism, no matter how effective.
Larks of Effective-Altruism.com argues that maaaaaaaybe it could be?
Either way, the best course of action is obvious – become a tobacco CEO and do a really shitty job.
Suffering is Valueless (most relevant section quoted below)
I believe moral value is inherent in those systems and entities that we describe as ‘fascinating’, ‘richly structured’ and ‘beautiful’. A snappy way of characterising this view is “value-as-profundity”. On the other hand, I regard pain and pleasure as having no value at all in themselves.
In the context of interpersonal affairs, then, to do good is ultimately to make the people around you more profound, more interesting, more beautiful – their happiness is irrelevant. To do evil, on the other hand, is to damage and degrade something, shutting down its higher features, closing off its possibilities.
While we’re on the topic, why the heck is it that, generally, the happier a place is, the higher its depression and suicide rates? Lots of guesses, but nothing sounds definitive.
Note from SSC – Effective Altruists aren’t actually as mentally ill as you may think
LessWrong post on being Adaptation Executors
A fictional portrayal of Neil Armstrong on Orgasmium (which serves as Eneasz’s example of why such a world would be awful)
We ended with a joke about Coherent Extrapolated Volition
I don’t know how effective these are, but because we’re not monsters – this is a suicide helpline. 1 (800) 273-8255. Please call if you’re feeling suicidal. Or text “GO” to 741-741
I feel that the podcast is not taking full advantage of its potential due to the sound quality. Regardless, I enjoy the content and your personalities. I just hope your production quality improves over time.
Thanks, Thomas! Know any generous sound engineers in the Denver area? ;)
I am testing different options for better sound recording. Fingers crossed!
Hi folks,
Just wanted to say, keep it up, I like listening to you! I agree with Thomas about the sound quality, though. The HPMOR podcast had a clean sound; what was Eneasz doing then that you are not doing now?
Jason, the guy in this episode who wants to kill all humans, left me bemused. Surely humans have agency and can choose for themselves if they live or die? It seems that the 7 billion who are alive now, or at least those of the 7 billion who are adults, have chosen to live for now. I am sure some of them are enjoying life without causing too much suffering. Why should they be condemned?
There are more parameters than suffering and happiness. Personally, I would take or inflict some suffering and forgo some happiness to preserve my agency.
Furthermore, I think there might be less suffering per person now than ever before – in the developed countries, at least, but the rest of the world will be catching up in the coming decades. Humanity has lived through so much pain, disease, ignorance, darkness, and violence, and now that we are on the brink of conquering all of those, it would feel like such a waste to just kill everybody.
Hey Emma! Thanks for the feedback! The main thing we’re doing differently than Eneasz is we’re recording in a larger studio, which allows for slightly more echo. Eneasz’s setup was basically an ideal one-person recording station. I’m continuing to refine my skills as an editor though, so don’t hesitate to call me out on any issues. 🙂
I agree with your view. 🙂
To basically just confirm what Steven said – my HPMoR setup was me sitting in a very small area with TONS of padding around me to absorb all sound. While we’re recording in a smallish area with some padding, we do like to be able to look at each other as we’re talking, which would not be possible with my HPMoR setup. We’re still working out a few kinks, but unless we actually get a sound studio (which are engineered to nullify nearly all sound aside from what’s going directly into the microphone) it’ll never be as good as HPMoR was. And we don’t have the budget to create/rent a sound studio. However, if you do know of someone in the Denver area who can help us with some optimizing advice based on what we have to work with, we would greatly appreciate it!
First of all I’d like to thank you guys for this podcast. I’m a big fan of HPMoR, and I especially loved Eneasz’s podcast. There’s basically no rationalist community where I live, and I have no group of friends like you guys. I wish we could hang out, so listening is the next best thing.
However, not being able to interject on some of your triple-nested digressions is basically nerd torture. One instance I can remember from this episode is about the Orgasmium. My current opinion is that tiling the universe with 100% efficient computer simulations of sentient beings running on Orgasmium is the best possible state for it. It looks like I disagree with you slightly, Eneasz. 🙂 In a way, it’s like the perfect opposite of the worst possible suffering for everyone (+INF and -INF if you will).
If Orgasmium for everyone is not what you would consider +INF, Eneasz, then think about what you believe would be +INF, and reassign that to be the effect of Orgasmium. Would you take it then?
We might not know what happens when we take Orgasmium, but whatever it is, it’s +INF value, and you will love it. If you won’t love it, then it’s not really Orgasmium.
Perhaps a good topic for continuation would be the conclusion of Three Worlds Collide (which I listened to via your podcast – thanks a lot!). I actually disagree with Eliezer’s conclusion and think that modifying humans to become Super Happies is a good thing (and probably a moral imperative), because I fail to see the inherent moral value in the human race existing in a vacuum. We could be Super Happies instead! (But I wouldn’t press the button like Jason, that’s horrifying!)
Feel free to update “we have one or more regular listeners” to 99% by the way. I’m hooked.
Hi Jules! We are very happy to hear that, so you’ve made our universe closer to optimum. 🙂
I’ve had to answer your objection a few times on several different platforms now, so I almost have a stock answer at this point. 🙂 Orgasmium is, as far as I see it, another version of wireheading. It forces a brain into a state that is identical to maximum happiness. The problem is, I don’t place much value on the mix of chemicals and stimulation in a brain that is disconnected from the greater reality. I place quite a bit of value on what the state of reality is. Happiness is valuable in that it signals that the state of reality is to our liking. When the signal is divorced from the reality, it stops being valuable. Happiness loses meaning if it’s artificially imposed.
In general I primarily care about other complex minds – mainly in interacting with them and gaining their esteem. That isn’t possible via Orgasmium. If you suggest tiling the universe in a simulation (or many simulations) where I could interact with lots of people of similar mental functioning and participate in a community with them, and we knew we were in a simulation, that would probably be OK with me. Particularly if we could still affect the “real” world if we needed/wanted to. There are lots of types of “struggle” and “accomplishment” in such a scenario that I would find valuable. But they would also have disappointment and some suffering. I think at this point we’ve gotten so far away from Orgasmium and Wireheading that to use those terms in relation to what we’re talking about is as bad as referring to Genghis Khan as a Libertarian. Both ping our “government” mental nodes, but have nothing else in common.
I also love Three Worlds Collide. 🙂 While I do think the super-happies are crippled compared to 3WC humans, at least they still interacted with the real world and real people. I wouldn’t wipe them out any more than I would wipe out a race of very happy puppies. I think it would be a good idea to have an episode on wireheading, and maybe at the same time an episode on the value (or lack thereof) of mental complexity. I value mental complexity over happiness. A race that made that trade fully, exchanging all mental complexity for unparalleled bliss (see again kabbalistic!NeilArmstrong), would have zero value (in my view).
So I think it’s possible we just have different terminal values? If you wanted to be on Orgasmium forever, well, I would consider that a loss. 🙁 But just like I find it very hard to prohibit a suicide if it’s a reasoned decision that an agent has come to, I would find it very hard to prohibit becoming an Orgasmist.
I already posted this in the subreddit, but I think the post should be here as well! Eneasz has already answered me (see his response to Jules in this comment thread) although far from convincingly. Okay, here it goes:
Fearing that my anger would overtake any appearance of objectivity, I asked to not be involved in this episode. Originally, I wasn’t going to listen to it either, but I’m happy to have changed my mind. Mostly because it was really entertaining! You guys don’t need me after all.
To both Eneasz and Jason, who, in a way, turned out to be different sides of the same coin: Life is A.) mutable and B.) complex. You can WORK with life. Jason, the point was already well made that many aspects of human well-being are improving. For example, even with rapid human population growth, the number of people facing food insecurity is decreasing (http://www.fao.org/news/story/en/item/288229/icode/). Eneasz, the orgasmium creatures of low consciousness and high pleasure have all of their existence ahead of them. Who knows what the future could bring? The answer, in both cases, is NOTHING if they are completely destroyed (a fate Eneasz said that he was equally willing to impose on orgasmium). Both of you, are you so confident that you know the inner workings/interactions/whole experience of a creature, and the universe of possible futures for them, that you would limit them to a single future of non-existence?
Okay, onto points that are more specific to parts of the podcast.
Regarding extinguishing life as it exists now, here are some points that Steven and Eneasz didn’t bring up, but Jason has heard from me:
1.) First, this is an example of why a general AI MUST NOT be programmed or designed by someone like Jason. Unfortunately, there are others who think like Jason.
2.) Jason talked about going to the extremes to test the soundness of a theory or equation. In that case, shouldn’t the result “yes, press a button that plunges earth into the sun” give one pause about the soundness of one’s philosophy?
3.) The reasoning “It’s the only thing everyone can agree on” for using pain and suffering as the sole factor in decision-making is so poor as to be nearly dismissible – EVEN IF the statement weren’t false. My co-hosts are correct: most people agree that good things are good, too.
4.) Jason said, “Pain and suffering of being tortured approaches negative infinity,” and I call. Please defend that statement.
5.) Why isn’t anyone bringing up the strong preference of lifeforms to continue doing things associated with being alive?!
——————————————–
Now for points specific to Eneasz:
1.) Eneasz asked, how is one maximally happy person different from another maximally happy person? The answer is: your different experiences, your different preferences, etc. (Note: Eneasz clarified that he was talking about a very specific organism of limited consciousness and maximum pleasure that apparently is quite similar from one to the next. My response is: even if they start out very similar, organisms become different over time as they experience different stimuli in the world.)
2.) That said, conformity to a desirable level of being doesn’t seem that terrible to me. With perfect information, perfect rationalists should reach the same conclusions, right? What if (thought experiment) you could have complete access to all the information held by others and could make yourself close to perfectly rational? Your own experiences and feelings would be dwarfed compared to the experiences and feelings of everyone, and you would start to become identical to other perfect info/perfect rationalist beings. (Note: Some people pointed out that I’m talking about a single being by the end. That’s the point, right?)
Katrina, well, that is such a positive, sort of Hermione-in-HPMOR thing to say – to tell Harry ‘I told you so’.
I agree with Eneasz on the point of orgasmium, but that doesn’t mean I wouldn’t rather think as you do, if I could and if I were a different person.
Rather, I would consider myself a less wise version of Godric Gryffindor to your Fred and George. Orgasmium is something I would never ever choose for myself, but maybe for my heir of Gryffindor.
If I were thinking of what I would want for my hypothetical children, that’s when my priorities about life in general change drastically.
If there is no afterlife after the button-pushing thing (if there is an afterlife, I imagine it as a place where you always have some kind of changing, interesting purpose that fulfills you, not a Christian orgasmic afterlife), I think I would rather my hypothetical kids take the Orgasmium version of reality that Katrina proposes,
although, of all the possibilities, I vote for the 6-to-10 happiness version by Steven and Eneasz.
P.S. Maybe everyone will get to choose for themselves in the end, only themselves and no one else (not voting as a democracy). That would be perfect, to not decide anyone else’s fate, only your own – I can’t imagine how that could possibly work, but…
The opposite of depression is a diagnosable mental disorder: it is called mania, and like its counterpart it looks very different in real life than it does in movies. One would think a life free of sadness would be great, but the fact is that it trends into risky behavior, instability, inability to maintain normal relationships, and violence.
That’s true. We had a discussion on the ramifications of lifelong mania, but it got way off track and didn’t make it to the final episode.
We speculated that there may be some people whose average happiness might hover between 9 and 10, but with none of the major drawbacks that come with mania. People with such minds are, I think, certainly possible.
In my experience a 9-10 level of happiness is not that valuable. After trying dozens of different drugs I found that the most important experiences were painful and disturbing rather than orgasmic (dmt+salvia >>> 2-cb+mdma). I do believe that meaning can still be achieved while one is in a blissful state, but I don’t think happiness is as important a goal as meaning and the journey towards it.
How are virtue ethics and/or stoicism regarded in the rationalist world?
Depends on who you ask, but they are not viewed unfavorably. There is a general consensus that while utilitarian consequentialism is the gold standard, it is not a type of morality that can be successfully executed by humans (due to both failings of knowledge and running on corrupted hardware). More rules-based schemes (but with well-thought-out foundational rules), such as virtue ethics, are considered safe for humans.
Jason: Torture is not -INF. See http://lesswrong.com/lw/kn/torture_vs_dust_specks/ – I hope (and think) you’re wrong, not evil.
Actually, the torture vs. dust speck thought experiment could support the thesis that torture is -INF. I for one would prefer an almost-infinite number of people suffering the dust speck instead of one person being tortured.
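To make the two positions precise, here’s a rough sketch (the notation is my own, not from either comment; u denotes disutility, so larger is worse):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% The dust-speck argument assumes all disutilities are finite:
% some (astronomically large) number of specks then outweighs one torture.
\[
  u_{\mathrm{speck}} > 0 ,\; u_{\mathrm{torture}} < \infty
  \;\Longrightarrow\;
  \exists\, N < \infty : \; N \cdot u_{\mathrm{speck}} > u_{\mathrm{torture}} .
\]
% The reply above instead treats torture as unboundedly bad:
% no finite number of specks can ever outweigh it.
\[
  u_{\mathrm{torture}} = \infty
  \;\Longrightarrow\;
  N \cdot u_{\mathrm{speck}} < u_{\mathrm{torture}}
  \quad \text{for every finite } N .
\]
\end{document}
```

Under the first assumption Yudkowsky’s conclusion (choose the torture) follows; under the second, the preference above (choose the specks) follows – so the whole disagreement reduces to whether torture’s disutility is finite.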
I’m a bit surprised by this episode, because its premise strikes me as what rationality is not and what philosophy frequently devolves into: a thought experiment that is unrealistic both in its setting and in the way it is discussed, in the sense that realistic human psychology and cognitive biases don’t seem to be really taken into consideration.
For example, although Jason might be sincere in his thought process, I really doubt he would push the button if given the chance. Even with his intellectual background, and knowing typical human psychology, I think there is a pretty low probability that he would actually push it.
You don’t know Jason. You should take his word on this one.
Well, coming from you, I guess that warrants a pretty big update of my perceived probability of Jason pushing the button. Still, extraordinary claims require extraordinary evidence, so I would have trouble believing it’s above 30% (coming from around 3% beforehand).
I didn’t get Jason’s full name – does he have publications online that I could dig into?
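For what it’s worth, the update described above pins down a concrete Bayes factor. A quick sketch using only the 3% and 30% figures from the comment (odds form: posterior odds = prior odds × Bayes factor):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Odds-form Bayes' rule with the commenter's own numbers:
% prior 3%, posterior 30%; solve for the Bayes factor B.
\[
  \frac{0.03}{0.97} \times B = \frac{0.30}{0.70}
  \quad\Longrightarrow\quad
  B = \frac{0.30/0.70}{0.03/0.97} \approx \frac{0.429}{0.031} \approx 14 .
\]
\end{document}
```

So moving from roughly 3% to 30% treats Eneasz’s “take his word on this one” as about 14-to-1 evidence.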