Comment author: Mac 15 October 2016 01:41:14PM *  1 point [-]

Is a unit of suffering less complex than a unit of happiness, and, therefore, more likely to occur in the universe, all else equal? I realize this is an insanely difficult question, but would be interested in current opinions and any related evidence.

Comment author: DataPacRat 26 September 2016 01:45:17PM 3 points [-]

Music to be resurrected to?

Assume that you are going to die, and some years later, be brought back to life. You have the opportunity to request, ahead of time, some of the details of the environment you will wake up in. What criteria would you use to select those details; and which particular details would meet those criteria?

For example, you might wish a piece of music to be played that is highly unlikely to be played in your hearing in any other circumstances, and is extremely recognizable, allowing you the opportunity to start psychologically dealing with your new circumstances before you even open your eyes. Or you may just want a favourite playlist going, to help reassure you. Or you may want to try to increase the odds that a particular piece survives until then. Or you may wish to lay the foundation for a practical joke, or a really irresistible one-liner.

Make your choice!

Comment author: Mac 27 September 2016 02:02:00PM 0 points [-]

"Everything in Its Right Place" by Radiohead would capture the moment well; it's soothing yet disorienting, and a tad ominous.

Comment author: Mac 18 January 2016 12:53:31PM 4 points [-]

Interested if anyone has thoughts/research on this question:

Are chickens affected by the hedonic treadmill? If so, are they equally, more, or less susceptible to it? What about pigs?

Comment author: Mac 30 July 2016 04:03:27PM *  0 points [-]

I have yet to find good research on this. However, if anyone out there believes that farm animals are affected by the hedonic treadmill, and that farm animal suffering causes great disvalue, prioritizing donations to the Humane Slaughter Association (HSA) might be a good idea. Part of HSA's mission is to reduce the suffering of animals during slaughter, and I find it unlikely that farm animals hedonically adapt during their short and often intensely painful deaths. It seems more likely that a chicken hedonically adapts during its time in a battery cage.

Brian Tomasik has a good piece on HSA here.

Comment author: Mac 29 July 2016 02:33:31PM *  0 points [-]
  • If you are not a Boltzmann Brain, then sentience produced by evolution or simulation is likely more common than sentience produced by random quantum fluctuations.

  • If sentience produced by evolution or simulation is more common than sentience produced by random quantum fluctuations, and given an enormous universe available as simulation or alien resources, then the number of sentient aliens or simulations is high.

  • Therefore, P(Sentient Aliens or Simulation) and P(You are a Boltzmann Brain) move in opposite directions when updated with new evidence. As SETI continues to come up empty, it becomes more likely, all else equal, that you are a floating brain in space.

Are these statements logical? Criticism and suggestions welcome.

Comment author: Manfred 21 July 2016 09:48:54PM 12 points [-]

Oh my gosh, the negative utilitarians are getting into AI safety. Everyone play it cool and try not to look like you're suffering.

Comment author: Mac 22 July 2016 01:41:50PM *  6 points [-]

Foundational Research Institute promotes compromise with other value systems. See their work here, here, here, and quoted section in the OP.

Rest easy, negative utilitarians aren't coming for you.

Comment author: Gunnar_Zarncke 19 July 2016 10:47:41PM 1 point [-]

Depends on what you mean by "infinite", precisely. Consider, for example, that the (in some sense) finite interval (0, 1) can be transformed into the interval (0, inf) via e.g. x -> 1/(1-x) - 1. Or consider whether the (in some sense) infinite size of the universe can be described by some finite process (like writing a finite representation 'inf' for something infinite).
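A quick sketch of the map mentioned above, x -> 1/(1-x) - 1, which sends the bounded interval [0, 1) to the unbounded interval [0, inf) without losing information (the function names are just illustrative):

```python
def to_unbounded(x: float) -> float:
    """Map x in [0, 1) to [0, inf) via 1/(1-x) - 1."""
    return 1.0 / (1.0 - x) - 1.0

def to_bounded(y: float) -> float:
    """Inverse map: y in [0, inf) back to [0, 1), via y/(y+1)."""
    return y / (y + 1.0)

# The two maps are inverses, so the "finite" and "infinite" descriptions
# carry exactly the same information.
for x in [0.0, 0.5, 0.9, 0.99]:
    assert abs(to_bounded(to_unbounded(x)) - x) < 1e-12
```

The point being that "bounded" and "infinite extent" can be two coordinate systems for the same thing, which is why "what you mean by infinite" matters.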

Comment author: Mac 20 July 2016 02:08:16PM 0 points [-]

Given an infinite universe, an infinite number of Macs are stubbing their toes. Is it possible to alter/transform the universe such that only a finite number of Macs are stubbing their toes? If so, how?

Comment author: Mac 19 July 2016 03:31:22PM 0 points [-]

Assuming the universe is infinite, is it theoretically possible to transform it into something finite?

Reason for asking: if this is possible, it would have important implications on infinite ethics and the value and potential trajectories of the far future.

Comment author: gjm 29 December 2015 03:43:51PM 6 points [-]

I'm puzzled by most of your links.

  • "Drifting from rationality": What's your problem with the post you link to? It seems to me it's simply pointing out that not everyone is a utilitarian, and that whether someone is a utilitarian is a matter of values as well as rationality. What's wrong with that?
  • "Closed-minded": the reaction to that post looks pretty positive to me. (And the post is pretty strange. It proposes creating rat farms filled with very happy rats as a utility-generating tool, and researching insecticides that kill insects in nicer ways.)
  • "Overly-optimistic": that post predicts a 5% chance that within 20 years the whole EA movement might be as big as the Gates Foundation. Do you really find that unreasonable?

I do agree about the fourth link -- but I don't think it's representative, and if you look at reactions on LW to the same author's posts here, you'll see that you're far from the only person who dislikes his style.

Comment author: Mac 29 December 2015 04:47:54PM -1 points [-]
  • Drifting from rationality

From the post, "This means that we need to start by spreading our values, before talking about implementation." Splitting the difficult "effective" part from the easy "altruism" part this early in the movement is troubling. The path of least resistance is tempting.

  • Closed-minded

Karma for the post is relatively low, and a lot of comments, including the top-rated one, can be summarized as "Fun idea, but too crazy to even consider."

  • Overly-optimistic

The post glosses over the time value of money/charitable donations and the GWWC member quit rate, so I think it's reasonable to say that the Gates Foundation will almost certainly have moved more time-value-adjusted money than GWWC's members over the next twenty years. Therefore, speculating that GWWC could be a "big deal" comparable to the Gates Foundation in this time frame is overly optimistic. Still disagree? Let's settle on a discount rate and structure a hypothetical bet; I'll give you better than 20-1 odds.
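To make the discounting point concrete, here is a minimal sketch; the dollar amount, horizon, and 5% rate are made-up illustrations, not actual GWWC or Gates Foundation figures:

```python
def present_value(amount: float, year: int, rate: float) -> float:
    """Discount money moved in a future year back to today's value."""
    return amount / (1.0 + rate) ** year

# A hypothetical $1M donated 20 years from now, at a 5% discount rate,
# is worth well under half its face value in today's terms.
pv = present_value(1_000_000, year=20, rate=0.05)
```

This is why ignoring the time value of donations inflates how a slowly growing movement compares with money being moved today.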

  • Self-congratulatory

I don't actually believe this is a big problem in itself, but if the other problems exist it seems like this would exacerbate them.

Comment author: Mac 29 December 2015 01:49:41PM *  2 points [-]

FWIW, I am “meh” on EA right now, and I suspect other LW’ers are on the fence as well. After spending some time on the Effective Altruism Forum, here are some worrying trends I’ve seen in the EA movement.

Drifting from rationality (this post), Closed-minded (reaction to this post), Overly-optimistic (this post), Self-congratulatory (this post)

I am especially disappointed that EA seems to be drifting from its rationalist roots so early in its development.

Maybe I am too demanding; any group will occasionally show flaws and the Effective Altruism Forum may not be representative of the entire EA movement. Nevertheless, I am tipping toward pessimism.

I will continue to search for and donate to effective charities, but due to my concerns I am wary of promoting myself as part of the current EA movement, or of donating to organizations like EAO. I think other LW'ers have similar reservations.