Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: Kaj_Sotala 18 September 2017 10:53:31AM *  2 points [-]

I met Denkenberger at the same ALLFED workshop that Hanson participated in (as a part of the GoCAS research program on existential risk); I also thought his work was quite impressive and important.

Comment author: ignoranceprior 18 September 2017 05:38:41PM 0 points [-]

I thought you were a negative utilitarian, in which case disaster recovery seems plausibly net-negative. Am I wrong about your values?

Comment author: tukabel 10 September 2017 09:01:17PM 0 points [-]

Oh boy, really? Suffering? Wait till some neomarxist SJWs discover this and they will show you who's THE expert on suffering... especially in identifying who could be susceptible to being persuaded that they are victims (and why not some superintelligent virtual agents?).

Maybe someone could write a piece on SS (SocialistSuperintelligence). Possibilities are endless for superintelligent parasites, victimizators, guilt throwers, equal whateverizators, even new genders and races can be invented to have goals to fight for.

Comment author: ignoranceprior 11 September 2017 01:57:27AM 6 points [-]

Could you please try to keep discussion on topic and avoid making everything about politics? Your comment does not contribute to the discussion in any way.

In response to comment by Elo on Is Feedback Suffering?
Comment author: fortyeridania 11 September 2017 01:46:59AM 1 point [-]

I know this is Betteridge's law of headlines, but do you happen to know if it's accurate?

Comment author: ignoranceprior 11 September 2017 01:50:57AM 2 points [-]

According to this study, the law appears to be inaccurate for academic articles.

Comment author: Dagon 30 August 2017 02:43:38PM 1 point [-]

This has already happened, you're reliving your 498,776th life and will be asked again in a few weeks (and you'll choose 2 again, because you did the first time).

However, as the saying goes "past performance does not necessarily predict future results". Whether your past is worth re-living and whether your unknown future is worth living are two very different questions, which could easily have different answers.

And finally, everyone who answers (1), can you identify the point when your past turned from nonnegative to negative? If not, you probably have a skewed memory and the sum of your experiences at those points in time is probably higher value than your aggregated memory at this point in time.

Comment author: ignoranceprior 30 August 2017 08:03:51PM *  1 point [-]

And finally, everyone who answers (1), can you identify the point when your past turned from nonnegative to negative? If not, you probably have a skewed memory and the sum of your experiences at those points in time is probably higher value than your aggregated memory at this point in time.

Personally, I've always had very high levels of anxiety and neuroticism, and (due to social anxiety/autism) I lack the social enjoyment or other happiness to make up for it, so I'm not sure that even my early childhood was positive (I'm 18 now). But I can definitely pinpoint a time when I contracted other medical issues, and I'm fairly confident that my life after that point has been more negative than my life before it was positive (assuming it even was positive).

Also, you could flip this the other way: "everyone who answers (2), can you identify the point when your past turned from nonpositive to positive? If not, you probably have a skewed memory and the sum of your experiences at those points in time is probably lower value than your aggregated memory at this point in time." It's good to avoid pessimism bias, but let's not fall prey to Pollyanna/optimism bias either.

Comment author: ignoranceprior 30 August 2017 01:31:28PM *  3 points [-]

Probably in the minority here, but I'd choose not to relive my life. I don't think my life is worth living. Partly because I have a lot of medical issues which cause significant suffering, and partly because the strongest intensities of suffering I have experienced are much worse than the strongest intensities of happiness are good. However, I do think it's plausible that most lives in the developed world are worth living.

(I am implicitly using a classical utilitarian definition of a life worth living.)

Comment author: Pablo_Stafforini 12 August 2014 07:53:37PM *  0 points [-]

Here's the link (both links above are dead).

Comment author: ignoranceprior 27 August 2017 12:06:42AM *  1 point [-]

Here's the latest working link (all three links above are dead).

Also, here's an archive in case that one ever breaks!

Comment author: Lumifer 11 July 2017 06:19:53PM *  1 point [-]

a plausible scenario ... [vs] ... very low prior probabilities

Aren't we talking about picking which absurd ideas to engage with?

You are doing some motte and bailey juggling:

Motte: This is an absurd idea which we engage with because it's worth engaging with absurd ideas.

Bailey: This is an important plausible scenario which we need to be concerned about.

Comment author: ignoranceprior 11 July 2017 06:30:47PM *  0 points [-]

I believe I already told you that I don't consider "spreading wild animal suffering" to be absurd; it's a plausible scenario. What may be intuitively absurd is the claim that "destroying nature is a good thing" -- which is not necessarily the same as the claim that "spreading wild animal suffering to new realms is bad, or ought to be minimized". (And there are possible interventions to reduce non-human suffering conditional on spreading non-human life. E.g. "value spreading" is often discussed in the EA community.)

Anyway, I'm done with this conversation for now as I believe other activities have higher EV.

Comment author: Lumifer 11 July 2017 06:03:13PM 2 points [-]

Ideally, you would prioritize exploring ideas that are decision-relevant and where further research has high Value of Information.

And does the exploration of the consequences of spreading non-human life throughout the galaxy qualify? Doesn't look like that to me, seems like you'll be better off figuring out whether living on intersections of ley lines is beneficial, or maybe whether ghosts have many secrets to tell you...

Comment author: ignoranceprior 11 July 2017 06:09:29PM *  1 point [-]

Yes, I think it does because it's a plausible scenario and most plausible (IMO) ethical views say that causing non-human suffering is bad. Further exploration of the probability of such scenarios could influence my EA cause priorities, donation targets, and/or general worldview of the future.

seems like you'll be better off figuring out whether living on intersections of ley lines is beneficial, or maybe whether ghosts have many secrets to tell you...

Those have very low prior probabilities and low decision-relevance to me.

Comment author: Lumifer 11 July 2017 05:39:55PM 2 points [-]

I don't see much in the way of empirical claims here (these would require a hard definition of "suffering" and falsifiability to start with), so I guess I'm talking about counterintuitive normative claims.

I think the idea that we should reduce the chance of spreading extreme involuntary suffering throughout the universe is much less counterintuitive

The claim is a bit different: that we should not spread (non-human) life through the galaxy. This is counterintuitive.

we should probably spend significant time engaging with ideas that seem intuitively absurd

So how do you pick absurd ideas to engage with? There are a LOT of them.

Comment author: ignoranceprior 11 July 2017 05:57:23PM *  1 point [-]

I don't see much in the way of empirical claims here (these would require a hard definition of "suffering" and falsifiability to start with), so I guess I'm talking about counterintuitive normative claims.

Fair point. This is one problem I have had with moral realist utilitarianism. That said, I think it may still be the case that sentience and suffering are objective, just not (currently) measurable. Regardless, I don't think the claim of net suffering in nature is all that absurd.

The claim is a bit different: that we should not spread (non-human) life through the galaxy. This is counterintuitive.

The claim I made is that spreading non-human life throughout the galaxy constitutes an s-risk, i.e. it could drastically increase the total amount of suffering. Any plausible moral view holds that s-risks are generally bad, but it is not necessarily the case that suffering can never be outweighed by positive value. E.g., if one is not something like a negative utilitarian, it could still be permissible to spread non-human life throughout the galaxy, as long as you take action to ensure that the benefits outweigh the harms, however you define those: perhaps by genetically altering animals to reduce infant mortality rates or their capacity to experience suffering, or by establishing a singleton to prevent suffering from re-emerging through Darwinian processes, etc.

So how do you pick absurd ideas to engage with? There are a LOT of them.

This is a hard problem in practice, and I don't claim to know the solution. Ideally, you would prioritize exploring ideas that are decision-relevant and where further research has high Value of Information. Then you would probably transition from an exploration stage to an exploitation stage (see the "multi-armed bandit" problem).
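The exploration-to-exploitation transition mentioned above can be sketched with a toy epsilon-greedy multi-armed bandit. This is purely illustrative: the "arms" (ideas one might engage with) and their Gaussian payoffs are invented for the example, not drawn from anything in the discussion.

```python
import random

def epsilon_greedy(true_means, steps=10000, epsilon=0.1, seed=0):
    """Toy epsilon-greedy bandit: explore a random arm with probability
    epsilon, otherwise exploit the arm with the best reward estimate."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms          # times each arm has been tried
    estimates = [0.0] * n_arms     # running mean reward per arm
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                        # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit
        reward = rng.gauss(true_means[arm], 1.0)   # noisy payoff (made up)
        counts[arm] += 1
        # Incremental update of the running mean for this arm.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return estimates, total_reward

# Three hypothetical "ideas" with hidden values 0.1, 0.5, 0.9.
estimates, total = epsilon_greedy([0.1, 0.5, 0.9])
print(estimates)
```

With enough steps, the highest estimate settles on the arm with the highest true mean, which is the "multi-armed bandit" intuition being gestured at: spend some effort sampling many ideas, then concentrate effort on the ones that look most decision-relevant.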

Comment author: Lumifer 11 July 2017 05:21:47PM 1 point [-]

Well then, how many resources (e.g. time and mental energy) do you feel we should spend entertaining absurd (note: no quotes) notions?

Comment author: ignoranceprior 11 July 2017 05:32:10PM *  2 points [-]

Are you referring to empirical or normative claims? I don't consider the idea that wild animals experience net suffering absurd, although the idea that habitat destruction is morally beneficial is counterintuitive to most people. I think the idea that we should reduce the chance of spreading extreme involuntary suffering, including wild-animal suffering, throughout the universe is much less counterintuitive, and is consistent with a wide range of moral views.

Since I give significant (but not 100%) weight to "the overwhelming importance of the far future" (Nick Beckstead), and the future is always absurd, we should probably spend significant time engaging with ideas that seem intuitively absurd. I don't think opposition to spreading wild-animal suffering is one of these, although things like suffering subroutines and some of the ideas mentioned in the OP (e.g., quantum immortality, multiverses) might be. Some people consider the intelligence explosion absurd, but I still think it has some non-negligible plausibility.
