Recent Comments

Well, fuck.

by siIver on The curse of identity | 0 points

As is, every level is only useful insofar as it helps with lower levels. But Level 1 still isn't the ultimate goal. You don't live to do the dishes, and not necessarily to work either. I think this model should be extended with Level 0 actions, which are things that directly cause happiness (or, alternatively, whatever else your ultimate goal in life is). Level 1 is useful, I think, solely to provide you (or others) with more opportunities to do Level 0. Level 2 is then useful to help you with Level 1, etc., so everything else stays the same. Your thoughts about how people do too few / too many actions on a certain level are also directly applicable to Level 0. What is different is that all Level n actions now also have a Level 0 component, but I think that's useful to have, since it corresponds to a real thing in the world that has previously not been covered. As an example, if you can do a combined Level 2 & 0 action (such as reading up on computer science, which you enjoy) instead of a pure Level 0 action, then that should always be a good idea, even if there is a risk of low connectivity back to Levels 1 and 0.
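As a toy illustration of that last claim (my own framing and numbers, not anything from the post, treating "connectivity" as a per-level probability that an action's output actually cascades down to Level 0):

```python
# Toy model of the extended hierarchy: an action's value is its direct
# Level-0 payoff plus its higher-level output, discounted once per level
# by "connectivity" -- the chance each level's output actually feeds the
# level below it.

def action_value(direct_payoff: float,
                 level_output: float,
                 level: int,
                 connectivity: float = 0.8) -> float:
    """Value of an action with a Level-0 component `direct_payoff` and a
    Level-`level` component `level_output`, where each step down the
    hierarchy succeeds with probability `connectivity`."""
    return direct_payoff + level_output * connectivity ** level

# Reading up on computer science you enjoy: a combined Level 2 & 0 action.
combined = action_value(direct_payoff=5.0, level_output=10.0, level=2)

# Pure leisure: the same direct enjoyment, no higher-level output.
pure_level_0 = action_value(direct_payoff=5.0, level_output=0.0, level=0)

# Even with imperfect connectivity, the combined action dominates.
assert combined > pure_level_0
```

With connectivity 0.8, the Level 2 output is discounted to 0.64 of its face value, yet the combined action still beats the pure Level 0 one; it only loses when the higher-level output or the connectivity becomes negligible.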

by siIver on Levels of Action | 0 points

I'm not sure this has the best visibility here in Main. I only noticed it just now because I haven't looked at Main in ages. And it wasn't featured in Discussion, or was it?

by Gunnar_Zarncke on MIRI's 2016 Fundraiser | 0 points

Some people think that any value, if it is the only value, naturally tries to consume all available resources. Even if you explicitly make a satisficing, non-maximizing value (e.g. "make 1000 paperclips", not just "make paperclips"), a rational agent pursuing that value may consume infinite resources making more paperclips, just in case it's somehow wrong about already having made 1000 of them, or in case some of the ones it has made are destroyed. On this view, all values need to be able to trade off against one another (which implies a common quantitative utility measure). Even if it seems obvious that the chance you're wrong about having made 1000 paperclips is very small, and that you shouldn't invest more resources in checking instead of working on your next value, this needs to be explicit and quantified. In this case, since all values inherently conflict with one another, all decisions (between actions that would serve different values) are moral decisions in your terms. I think this is a good intuition pump for why some people think all actions and all decisions are necessarily moral.
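A toy calculation (illustrative numbers of my own, not part of the original argument) of why the single-valued satisficer never stops verifying, and why an explicit quantified tradeoff stops it:

```python
# Toy expected-utility comparison: a satisficer whose only value is
# "1000 paperclips exist" always prefers re-verifying the count, because
# verification has positive expected value while nothing else it could
# do has any value at all.

P_WRONG = 1e-9            # chance it miscounted or some clips were destroyed
VALUE_OF_GOAL = 1.0       # utility of "1000 paperclips actually exist"

# With a single value, the only comparison is verify vs. do nothing:
ev_verify_again = P_WRONG * VALUE_OF_GOAL    # tiny, but positive
ev_anything_else = 0.0                       # no other action serves the value
assert ev_verify_again > ev_anything_else    # so it verifies forever

# With a second value explicitly on the same utility scale,
# the tiny gain from re-verifying finally loses:
VALUE_OF_NEXT_GOAL = 1.0
assert VALUE_OF_NEXT_GOAL > ev_verify_again  # now it moves on
```

The point is structural: as long as "verify again" has any positive expected value and no competing value is quantified on the same scale, it wins every comparison, no matter how small P_WRONG is.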

by DanArmak on Open thread, October 2011 | 0 points

Clearly I should have asked about actions rather than people. But the Babyeaters were *not* ignorant that they were causing great pain and emotional distress. They may not have known how long it continued, but none of the human characters, IIRC, suggested this information might change their minds. That's because those aliens had a genetic tendency towards non-human preferences, and the (working) society they built strongly reinforced this.

by hairyfigment on Open thread, October 2011 | 2 points