
Comment author: entirelyuseless 11 July 2017 02:11:59PM 0 points [-]

"I suspect it’s a type error to think of an ontology as correct or wrong."

Indeed. I mentioned that recently.

"This isn’t a challenge to reductionism."

Try harder, and you can make it into a pretty good one.

Comment author: RomeoStevens 12 July 2017 06:22:21PM 0 points [-]

Lossy compression isn't telos free though.

Comment author: RomeoStevens 12 July 2017 06:21:00PM 2 points [-]

You can play with this right now and simultaneously dissolve some negative judgements. Think about the function of psychics/fortune tellers in poor communities. What do you think is going on there phenomenologically when you turn off your epistemic rigor goggles? Also try it with prayer. What might you conclude about prayer if you were a detached alien? Confession is a pretty interesting one too. What game theoretic purpose might it be serving in a community of 150 people? I've found these types of exercises pretty valuable. Especially the less condescending I manage to be.

Comment author: paulfchristiano 20 June 2017 04:11:08PM *  19 points [-]

I don't buy the "million times worse," at least not if we talk about the relevant E(s-risk moral value) / E(x-risk moral value) rather than the irrelevant E(s-risk moral value / x-risk moral value). See this post by Carl and this post by Brian. I think that responsible use of moral uncertainty will tend to push you away from this kind of fanatical view.
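
A toy numerical sketch of why those two quantities can come apart (all numbers are made up, purely to illustrate the arithmetic):

    # X = moral disvalue of the s-risk outcome, Y = moral disvalue of the x-risk outcome.
    # Two equally likely "worlds"; the numbers are arbitrary placeholders.
    scenarios = [
        {"p": 0.5, "x": 1.0, "y": 1000.0},   # world where the x-risk outcome is very bad
        {"p": 0.5, "x": 1.0, "y": 0.001},    # world where the x-risk outcome is nearly harmless
    ]

    E_x = sum(s["p"] * s["x"] for s in scenarios)
    E_y = sum(s["p"] * s["y"] for s in scenarios)
    E_ratio = sum(s["p"] * s["x"] / s["y"] for s in scenarios)

    print(E_x / E_y)   # ratio of expectations: ~0.002
    print(E_ratio)     # expectation of the ratio: ~500

The expectation of the ratio is dominated by the world where the denominator happens to be tiny, which is why a huge E(s-risk value / x-risk value) can coexist with a modest E(s-risk value) / E(x-risk value).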

I agree that if you are at million-to-1 then you should be predominantly concerned with s-risk; I think s-risks are somewhat improbable/intractable, but not that improbable+intractable. I'd guess the probability is ~100x lower, and the available object-level interventions are perhaps 10x less effective. The particular scenarios discussed here seem unlikely to lead to optimized suffering; only "conflict" and "???" really make any sense to me. Even on the negative utilitarian view, it seems like you shouldn't care about anything other than optimized suffering.

The best object-level intervention I can think of is reducing our civilization's expected vulnerability to extortion, which seems poorly-leveraged relative to alignment because it is much less time-sensitive (unless we fail at alignment and so end up committing to a particular and probably mistaken decision-theoretic perspective). From the perspective of s-riskers, it's possible that spreading strong emotional commitments to extortion-resistance (e.g. along the lines of UDT or this heuristic) looks somewhat better than spreading concern for suffering.
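
To make the extortion logic concrete, here is a toy expected-value sketch of why credible extortion-resistance can remove the incentive to make threats at all (the payoffs and probabilities are invented for illustration only):

    # A would-be extorter threatens only if the expected payout beats the expected
    # cost of having to carry out the threat. All numbers are invented.
    payout_if_victim_pays = 10.0
    cost_of_carrying_out_threat = 1.0

    def extorter_value_of_threatening(p_victim_pays):
        return (p_victim_pays * payout_if_victim_pays
                - (1 - p_victim_pays) * cost_of_carrying_out_threat)

    print(extorter_value_of_threatening(0.9))   # pliable victim: +8.9, so threats get made
    print(extorter_value_of_threatening(0.0))   # credibly resistant victim: -1.0, so they don't

The point of spreading credible extortion-resistance is to push that second number negative in advance, so that the threats (and the suffering needed to carry them out) never get instantiated.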

The meta-level intervention of "think about s-risk and understand it better / look for new interventions" seems much more attractive than any object-level interventions we yet know, and probably worth investing some resources in even if you take a more normal suffering vs. pleasure tradeoff.

If this is the best intervention and is much more likely to be implemented by people who endorse suffering-focused ethical views, it may be the strongest incentive to spread suffering-focused views. I think that higher adoption of suffering-focused views is relatively bad for people with a more traditional suffering vs. pleasure tradeoff, so this is something I'd like to avoid (especially given that suffering-focused ethics seems to somehow be connected with distrust of philosophical deliberation). Ironically, that gives some extra reason for conventional EAs to think about s-risk, so that the suffering-focused EAs have less incentive to focus on value-spreading.

This also seems like an attractive compromise more broadly: we all spend a bit of time thinking about s-risk reduction and taking the low-hanging fruit, and suffering-focused EAs do less stuff that tends to lead to the destruction of the world. (Though here the non-s-riskers should also err on the side of extortion-resistance, e.g. trading with the position of rational non-extorting s-riskers rather than whatever views/plans the s-riskers happen to have.)

An obvious first question is whether the existence of suffering-hating civilizations on balance increases s-risk (mostly by introducing game-theoretic incentives) or decreases s-risk (by exerting their influence to prevent suffering, esp. via acausal trade). If the latter, then x-risk and s-risk reduction may end up being aligned. If the former, then at best the s-riskers are indifferent to survival and need to resort to more speculative interventions. Interestingly, in that case it may also be counterproductive for s-riskers to expand their influence or acquire resources. My guess is that mature suffering-hating civilizations reduce s-risk, since immature suffering-hating civilizations probably provide a significant part of the game-theoretic incentive yet have almost no influence, and sane suffering-hating civilizations will provide minimal additional incentives to create suffering. But I haven't thought about this issue very much.

Comment author: RomeoStevens 22 June 2017 02:26:09AM 2 points [-]

and suffering-focused EAs do less stuff that tends to lead to the destruction of the world.

In support of this, my system 1 reports that if it sees more intelligent people taking S-risk seriously, it is less likely to nuke the planet if it gets the chance. (I'm not sure I endorse nuking the planet; just reporting an emotional reaction.)

Comment author: cousin_it 20 June 2017 01:46:47PM *  10 points [-]

Wow!

Many thanks for posting that link. It's clearly the most important thing I've read on LW in a long time; I'd upvote it ten times if I could.

It seems like an s-risk outcome (even one that keeps some people happy) could be more than a million times worse than an x-risk outcome, while not being a million times more improbable, so focusing on s-risks is correct. The argument wasn't as clear to me before. Does anyone have good counterarguments? Why shouldn't we all focus on s-risk from now on?

(Unsong had a plot point where Peter Singer declared that the most important task for effective altruists was to destroy Hell. Big props to Scott for seeing it before the rest of us.)

Comment author: RomeoStevens 22 June 2017 02:23:09AM *  1 point [-]

X-risk is still plausibly worse in that we need to survive to reach as much of the universe as possible and eliminate suffering in other places.

Edit: Brian talks about this here: https://foundational-research.org/risks-of-astronomical-future-suffering/#Spread_of_wild_animals-2

Comment author: RomeoStevens 22 June 2017 02:21:41AM 3 points [-]

Related: perverse ontological lock-in. Building things on top of ontological categories tends to cement them, since we think we need them to continue getting value from the thing. But if the folk ontology doesn't carve reality at the joints, there will be friction in all the stories/predictions/expectations built out of those ontological pieces, along with an unwillingness to drop the folk ontology out of the belief that you will lose all the value of what you've built on top of it. One way to view the punctuated equilibrium model of psychological development is as periodic rebasing operations.

Comment author: tristanm 17 June 2017 03:19:08PM *  6 points [-]

I have a few thoughts about this.

First, I believe there is always likely to be a much higher ratio of critique to content creation going on. This is not a problem in and of itself. But, as has been mentioned (and as motivated my post on the norm one principle), heavy amounts of negative feedback are likely to discourage content creation. If the incentives to produce content are outweighed by the likelihood of punishment for bad contributions, there will be very little productive activity going on, and we will be filtering out not just noise but also potentially useful stuff. So I am still strongly in favor of establishing norms that regulate this kind of thing.

Secondly, it seems that the very best content creators spend some time writing and making information freely available, detailing their goals and so on, and then eventually go off to pursue those goals more concretely, and the content creation on the site goes down. This is sort of what happened with the original creators of this site. This is not something to prevent, simply something we should expect to happen periodically. Ideally we would like people to still engage with each other even if primary content producers leave.

It's hard to figure out what the "consensus" is on specific ideas, or whether or not they should be pursued or discussed further, or whether people even care about them still. Currently the way content is produced is more like a stream of consciousness of the community as a whole. It goes in somewhat random directions, and it's hard to predict where people will want to go with their ideas or when engagement will suddenly stop. I would like some way of knowing what the top most important issues are and who is currently thinking about them, so I know who to talk to if I have ideas.

This is related to my earlier point about content creators leaving. We only occasionally get filtered-down information about what they are working on. If I wanted to help them, I wouldn't know who to contact about that, or what the proper protocols are for trying to become involved in those projects. I think the standard way these projects happen is that a handful of people who are really interested simply start working on one, but they are essentially radio silent until they either finish or feel they can't proceed further. This seems less than ideal to me.

A lot of these problems seem difficult to me, and so far my suggestions have mostly been around discourse norms. But again this is why we need more engagement. Speak up, and even if your ideas suck, I'll try to be nice and help you improve on them.

By the way, I think it's important to mention that even asking questions is actually really helpful. I can't count the number of times someone has asked me to clarify a point I made, and in the process of clarifying I discovered new issues or important details that I had previously missed, and updated as a result. So even if you don't think you can offer much insight, just asking about things can be helpful, and you shouldn't feel discouraged from doing this.

Comment author: RomeoStevens 20 June 2017 02:26:41AM 1 point [-]

Agree about the creation:critique ratio. Generativity/creativity training is the rationalist community's current bottleneck IMO.

Comment author: Pimgd 19 June 2017 10:27:21AM *  1 point [-]

One piece of obvious advice I've heard a lot is that you should exercise more.

I have a lot of ... probably weak ... counterarguments to this. They seem to be rationalizations; e.g. "I don't want to do this because ...".

For example, I'll list a few.

  • Why should I exercise if I'm already at a good weight?
  • Why should I exercise if my daily life (programming) does not require significant physical skill?
  • Why should I exercise if I already go on a short (15 min) daily walk - is more really needed?
  • I don't want to feel tired, so exercising doesn't feel rewarding to me at all
  • Exercising takes up time; I'd rather not spend this time exercising
  • If you live a longer life because of exercising, how do you know you're not running a red queen's race (you have to keep exercising just to stay in place; stop, and you get a heart attack 6 months later and die anyway)?

Rather than looking for cutting-edge ideas on how to be more productive, I'm looking for a cutting-edge explanation of why this obvious advice would work / would be given.

Possibly I should make a reddit account and post on changemyview or something. I just don't see why I should exercise at the moment, given that I have the weight I want and the fitness to do what I need to do, and don't have any health issues related to fitness (dental issues, but that's a separate point and due to a filling that seems to have been placed improperly).

Then again, I sometimes feel as if I'm one-eyed, saying "I understand how having two eyes would be better, but is it really necessary? Operating is hard, it costs money, it takes time, I'd have to go to the hospital, it'd be a huge thing, and I can already see right now, so I don't see why you'd want two eyes. Yeah, okay, the redundancy would be nice, that you're not blinded if your one eye gets dirty or develops issues, but is all the hassle really worth a second eye?" And I'd feel that the answer that would convince me is actually seeing out of two eyes and realizing that hey, you can sort of see in 3D now and estimate distance and you get depth perception and a wider field of vision and it's easier to read or skim text and blah blah blah blah - but you wouldn't know that, because you only have one eye.

What's the two-eyed benefit of exercising?

Comment author: RomeoStevens 20 June 2017 02:24:30AM 1 point [-]

Meta: if something has tons of evidence and you can't bring yourself to try it for a month, ask yourself, TDT-wise, what your life looks like with and without the skill of 'try seemingly good ideas for a month.'

Comment author: Viliam 13 June 2017 11:29:54PM *  7 points [-]

Reading the Sequences made me feel, on the gut level, things like: "reality already exists, and your clever words are not going to change it retroactively". After reading the Sequences, most of the online debates, including many that previously seemed interesting and that I was participating in, now feel like watching retarded people making the same elementary mistakes over and over again. Before this, I didn't fully realise how much even the typical smart person is incapable of distinguishing between the map and the territory (i.e. their own thoughts and social consensus vs. the actual reality). Now it seems like people try to magically change reality by yelling at it loudly enough; and the smart ones keep doing the same thing as the stupid ones, only yelling more sophisticated words.

Comment author: RomeoStevens 15 June 2017 07:17:52AM 1 point [-]

Babbler reality has a strong pull because it doles out tasty treats.

Comment author: tristanm 14 June 2017 09:45:34PM 1 point [-]

The way that I choose to evaluate my overall experience is generally through the perception of my own feelings. Therefore, I assume this simulated world will be evaluated in a similar way: I perceive the various occurrences within it and rate them according to my preferences. I assume the AI will receive this information and be able to update the simulated world accordingly. The main difference, then, appears to be that the AI will not have access to my nervous system: if my avatar is what is represented in this world, and that is all the AI has access to, it is prevented from wire-heading by simply manipulating my brain however it wants. Likewise it would not have access to its own internal hardware or be able to model it (since that would require knowledge of actual physics). It could in theory interact with buttons and knobs in the simulated world that were connected to its hardware in the real world.

I think this is basically the correct approach and it actually is being considered by AI researchers (take Paul's recent paper for example, human yes-or-no feedback on actions in a simulated environment). The main difficulty then becomes domain transfer, when the AI is "released" into the physical world - it now has access to both its own hardware and human "hardware", and I don't see how to predict its actions once it learns these additional facts. I don't think we have much theory for what happens then, but the approach is probably very suitable for narrow AI and for training robots that will eventually take actions in the real world.
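
A minimal toy sketch of that general pattern (human yes/no feedback on actions proposed in a simulated environment); the SimulatedWorld class, the human_approves stub, and the update rule below are hypothetical stand-ins, not the setup from the paper:

    import random
    from collections import defaultdict

    class SimulatedWorld:
        """Stand-in for the simulated environment the AI acts in."""
        def reset(self):
            return 0                      # a single dummy state

    def human_approves(state, action):
        """Stand-in for a human giving yes/no feedback on a proposed action."""
        return action == "help"           # this toy human approves only of "help"

    ACTIONS = ["help", "harm"]
    approval = defaultdict(float)         # learned estimate of approval per action

    world = SimulatedWorld()
    for episode in range(200):
        state = world.reset()
        # epsilon-greedy: usually propose the action with the highest estimated approval
        if random.random() < 0.1:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: approval[a])
        feedback = 1.0 if human_approves(state, action) else 0.0
        approval[action] += 0.1 * (feedback - approval[action])   # running average

    print(dict(approval))   # "help" ends up with a much higher approval estimate than "harm"

The domain-transfer worry is then about what happens once the learner's actions are no longer confined to something like SimulatedWorld.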

Comment author: RomeoStevens 15 June 2017 07:13:26AM 0 points [-]

It does have access to your nervous system since your nervous system can be rewired via backdriving inputs from your perceptions.

Comment author: RainbowSpacedancer 02 June 2017 04:42:43PM *  0 points [-]

Books on leadership. The psychology + social dynamics of leadership and the traits of successful leaders. There are so many books I don't know where to start.

Comment author: RomeoStevens 04 June 2017 10:34:37PM *  1 point [-]

Olivia Cabane's books are where I'd start. Then Kegan's Immunity to Change.
