
Comment author: wallowinmaya 16 April 2017 04:33:20PM 1 point

Here is another question that would be very interesting, IMO:

"For what value of X would you be indifferent between A) creating a utopia that lasts for one hundred years and whose X inhabitants are all extremely happy, cultured, intelligent, fair, just, benevolent, etc., and lead rich, meaningful lives, and B) preventing one average human from being horribly tortured for one month?"

Comment author: wallowinmaya 14 April 2017 09:51:17AM * 1 point

I think it's great that you're doing this survey!

I would like to suggest two possible questions about acausal thinking/superrationality:

1)

Newcomb’s problem: one box or two boxes?

  • Accept: two boxes
  • Lean toward: two boxes
  • Accept: one box
  • Lean toward: one box
  • Other

(This is the formulation used in the famous PhilPapers survey.)

2)

Would you cooperate with or defect against other community members in a one-shot Prisoner's Dilemma?

  • Definitely cooperate
  • Leaning toward: cooperate
  • Leaning toward: defect
  • Definitely defect
  • Other

I think that these questions are not only interesting in and of themselves, but also highly important for further research I'd like to conduct. (I can go into more detail if necessary.)
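For reference, here is a minimal sketch of the payoff structure that question 2 presupposes. The numbers are the usual textbook Prisoner's Dilemma values and are purely illustrative; the survey question itself does not specify any.

    # Standard one-shot Prisoner's Dilemma payoffs: (row player, column player).
    # The numbers are the usual textbook values and are purely illustrative.
    PAYOFFS = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (0, 5),
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),
    }
    # For a purely causal reasoner, defection strictly dominates. Superrational/acausal
    # reasoning instead treats the choices of relevantly similar players as correlated,
    # which is exactly the disposition question 2 is probing.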

Comment author: wallowinmaya 09 April 2017 07:42:18AM * 14 points

First of all, I don't think that morality is objective, as I'm a proponent of moral anti-realism. That means I don't believe there is such a thing as "objective utility" that you could objectively measure.

But, to use your terms, I also believe that there currently exists more "disutility" than "utility" in the world. I'd formulate it this way: I think there exists more suffering (disutility, disvalue, etc.) than happiness (utility, value, etc.) in the world today. Note that this is just a consequence of my own personal values, in particular my "exchange rate" or "trade ratio" between happiness and suffering: I'm (roughly) utilitarian, but I give more weight to suffering than to happiness. This doesn't mean, however, that there is "objectively" more disutility than utility in the world.

For example, I would not push a button that creates a city with 1000 extremely happy beings but where 10 people are being tortured. A utilitarian whose trade ratio leans more toward happiness, however, might want to push the button because, by her lights, the happiness of the 1000 outweighs the suffering of the 10. Although we might disagree, neither of us is "wrong".
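To make the trade-ratio point concrete, here is a minimal numerical sketch. The per-person intensities and the weights k are made-up illustrations, not figures anyone has endorsed:

    # Aggregate value of pressing the button, with suffering weighted k times as heavily as happiness.
    # All numbers below are made up purely for illustration.
    def button_value(n_happy, happiness_each, n_tortured, suffering_each, k):
        return n_happy * happiness_each - k * n_tortured * suffering_each

    # 1000 extremely happy beings vs. 10 tortured people (torture assumed 50x as intense per person).
    for k in (1, 10, 100):
        v = button_value(1000, 1, 10, 50, k)
        print(f"k={k:>3}: aggregate value = {v:>7} -> {'press' if v > 0 else 'do not press'}")

With these made-up numbers, a symmetric utilitarian (k = 1) presses the button, while anyone weighting suffering ten times as much or more does not; the disagreement lives entirely in k, not in the facts.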

Similar reasoning applies with regard to the "expected value" of the future, or, to use a less confusing term, the ratio of expected happiness to expected suffering in the future. Crucially, this question has both an empirical and a normative component: the expected value (EV) of the future for a person depends both on her normative trade ratio and on her empirical beliefs about the future.
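Schematically (this is just a restatement of the sentence above, not a proposal for how to actually measure anything):

    EV_future = E[happiness] − k · E[suffering]

where the expectations encode a person's empirical beliefs about the future and k her normative trade ratio; changing either one can flip the sign.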

I want to emphasize, however, that even if one thinks the EV of the future is negative, one should not try to destroy the world! There are many reasons for this, so I'll just pick a few: First of all, it's extremely unlikely that you would succeed, and you would probably only cause more suffering in the process. Secondly, planetary biocide is one of the worst possible things one can do according to many value systems. I think it's extremely important to be nice to other value systems and to promote cooperation among their proponents. If you attempted planetary biocide, you would cause distrust, probably violence, and a breakdown of cooperation, which would only increase future suffering, hurting everyone in expectation.

Below, I list several more relevant essays that expand on what I've written here and that I can highly recommend. Most of them link to the Foundational Research Institute (FRI), which is no coincidence, as FRI's mission is to identify cooperative and effective strategies to reduce future suffering.

I. Regarding the empirical side of future suffering

II. On the benefits of cooperation

III. On ethics

Comment author: wallowinmaya 08 April 2017 05:18:11PM 0 points

Great list!

IMO, one should add Prescriptions, Paradoxes, and Perversities to the list, perhaps under the section "Medicine, Therapy, and Human Enhancement".

[Link] Decision Theory and the Irrelevance of Impossible Outcomes

2 wallowinmaya 28 January 2017 10:16AM

[Link] Why Altruists Should Focus on Artificial Intelligence

1 wallowinmaya 16 December 2016 11:48AM
In response to Seven Apocalypses
Comment author: wallowinmaya 29 September 2016 04:30:29PM * 3 points

I don't understand why you exclude risks of astronomical suffering ("hell apocalypses").

Below you claim that those risks are "Pascalian", but this seems wrong.

[Link] How the Simulation Argument Dampens Future Fanaticism

6 wallowinmaya 09 September 2016 01:17PM

A very comprehensive analysis by Brian Tomasik of whether (and to what extent) the simulation argument should change our altruistic priorities. He concludes that the possibility of ancestor simulations somewhat increases the comparative importance of short-term helping relative to focusing on shaping the "far future".

Another important takeaway: 

[...] rather than answering the question “Do I live in a simulation or not?,” a perhaps better way to think about it (in line with Stuart Armstrong's anthropic decision theory) is “Given that I’m deciding for all subjectively indistinguishable copies of myself, what fraction of my copies lives in a simulation and how many total copies are there?"
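One (hedged) way to read that reframing as a back-of-the-envelope calculation; the copy counts and per-copy impact numbers below are made-up placeholders, not figures from the article:

    # Decide as if choosing for all subjectively indistinguishable copies at once.
    def total_impact(per_copy_impacts):
        """Sum an action's impact over every copy, simulated or not (placeholder numbers)."""
        return sum(per_copy_impacts)

    # Made-up split: 99 simulated copies, 1 basement-level copy.
    short_term_helping = total_impact([1] * 99 + [1])     # helps roughly equally wherever the copy lives
    far_future_shaping = total_impact([0] * 99 + [1000])  # only the basement copy can shape the far future
    print(short_term_helping, far_future_shaping)

Whether far-future work still wins then depends on how large the basement copy's leverage is relative to the fraction of copies sitting in simulations, which is the sense in which the argument dampens rather than eliminates future fanaticism.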

 

[Link] Suffering-focused AI safety: Why “fail-safe” measures might be particularly promising

9 wallowinmaya 21 July 2016 08:22PM

The Foundational Research Institute just published a new paper: "Suffering-focused AI safety: Why “fail-safe” measures might be our top intervention". 

It is important to consider that [AI outcomes] can go wrong to very different degrees. For value systems that place primary importance on the prevention of suffering, this aspect is crucial: the best way to avoid bad-case scenarios specifically may not be to try and get everything right. Instead, it makes sense to focus on the worst outcomes (in terms of the suffering they would contain) and on tractable methods to avert them. As others are trying to shoot for a best-case outcome (and hopefully they will succeed!), it is important that some people also take care of addressing the biggest risks. This perspective to AI safety is especially promising both because it is currently neglected and because it is easier to avoid a subset of outcomes rather than to shoot for one highly specific outcome. Finally, it is something that people with many different value systems could get behind.

Comment author: wallowinmaya 24 October 2015 09:39:46AM * 0 points

Cool that you are doing this!

Is there also a Facebook event?
