Comment author: siIver 15 October 2016 04:18:51PM 0 points [-]

Well, fuck.

Comment author: Mac 15 October 2016 01:41:14PM *  1 point [-]

Is a unit of suffering less complex than a unit of happiness, and, therefore, more likely to occur in the universe, all else equal? I realize this is an insanely difficult question, but would be interested in current opinions and any related evidence.


Comment author: siIver 15 October 2016 03:47:47PM 0 points [-]

Maybe I misunderstand something, but why would less complexity imply higher frequency when those being capable of experiencing either will generally strive for happiness?

In response to Levels of Action
Comment author: siIver 15 October 2016 07:49:57AM *  0 points [-]

As is, every level is only useful insofar as it helps with lower levels. But Level 1 still isn't the ultimate goal. You don't live to do the dishes, and not – at least not necessarily – to work. I think this model should be extended by Level 0 actions, which are things that directly cause happiness (or, alternatively, whatever else your ultimate goal in life is). Level 1 is, I think, useful solely to provide you (or others) with more opportunities to do Level 0. Level 2 is then useful to help you with Level 1, etc., so everything stays the same. Your thoughts about how people do too few / too many actions on a certain level are also directly applicable to Level 0.

What is different is that all Level n actions now also have a Level 0 component, but I think that's useful to have since it corresponds to a real thing in the world that has previously not been covered. As an example, if you can do a Level 2 & 0 action (such as reading up on computer science which you enjoy doing) instead of a pure Level 0 action, then that should always be a good idea, even if there is a risk of low connectivity back to Levels 1 and 0.

Comment author: username2 11 October 2016 07:28:09PM *  1 point [-]

Nonsense. I believe my life and the lives of people close to me are more important than someone starving in a place whose name I can't pronounce. I just don't assign the same weight to all people. That is perfectly consistent with utilitarianism.

Comment author: siIver 11 October 2016 07:40:04PM *  0 points [-]

Er... no. Utilitarianism prohibits that exact thing by design. That's one of its most important aspects.

Read the definition. This is unambiguous.

Comment author: MrMind 11 October 2016 01:06:33PM 0 points [-]

Is there a good rebuttal to why we don't donate 100% of our income to charity? I mean, as explanations, tribality and near/far thinking are OK, but is there a good post-hoc justification?

Comment author: siIver 11 October 2016 03:50:10PM *  0 points [-]

100% doesn't work because then you starve. If I re-formulate your question as "is there any rebuttal to why we don't donate way more to charity than we currently do?", then the answer depends on your belief system. If you are utilitarian, the answer is a definitive no: you should spend way more on charity.

Comment author: username2 06 October 2016 09:31:39PM 1 point [-]

I think this article is something that people outside of this community really ought to read.

Interesting. Why people outside of this community? I find it is actually the LW and EA communities that place an exorbitant amount of emphasis on empathy. Most of those I know outside of the rationalist community understand the healthy tradeoff between charitable action and looking out for oneself.

Comment author: siIver 07 October 2016 12:19:11AM 0 points [-]

My observation is that people who are smart generally try to live more ethically, but usually have skewed priorities; e.g. they'll try to support the artists they like and to be decent in earning their money, when they'd fare better just worrying less about all that and donating a bit to the right place every month. Quantitative utility arguments are usually met with rejection.

LWers, on the other hand, seem to be leaning in that direction anyway. Though I'm fairly new to the community, so I could be wrong.

I wouldn't show it to people who lack a "solid" moral base in the first place. They probably fare better keeping every shred of empathy they have (considering how much discrimination still exists today).

Comment author: siIver 06 October 2016 02:35:05PM *  3 points [-]

I think this is the first article in a long time that straight up changed my opinion in a significant way. I always considered empathy a universally good thing – in all forms. In fact I held it as one of the highest values. But the logic of the article is hard to argue with.

I still tentatively disagree that it [emotional empathy] is inherently bad. Following what I read, I'd say it's harmful because it's overvalued/misunderstood. The solution would be to recognize that it's an egoistical thing – as I'm writing this, I can confirm that I now think this – whereas cognitive empathy is the selfless thing.

Doing more self-analysis, I think I already understood this on some level, but I was holding the concept of empathy in such high regard that I wasn't able to consciously criticize it.

I think this article is something that people outside of this community really ought to read.

Comment author: Houshalter 05 October 2016 08:43:00PM 2 points [-]

That's not really surprising. Google employs by far the most AI researchers, and they have general AI as an actual goal. DeepMind in particular has been pushing for reinforcement learning and general game playing, which is the first step towards building AI agents that optimize utility functions in complex real-world environments, instead of just classifying images or text.

What specific corporation is winning at the moment isn't that relevant. Facebook isn't far behind and has more of a focus on language learning, memory, and reasoning, which are possibly the critical pieces to reaching general intelligence. Microsoft just made headlines for founding a new AI division. Amazon just announced a big competition for the best conversational AIs. Almost every major tech company is trying to get in on this game.

I don't think we are that far away from AGI.

Comment author: siIver 06 October 2016 12:06:05AM 2 points [-]

Is there a relevant difference in how much the eventual winner will incorporate AI safety measures? Or do you think it is merely an issue of actually solving the [friendly AI] problem, and once it is solved, it will surely be used?

Comment author: siIver 05 October 2016 08:50:48AM *  0 points [-]

I actually don't quite agree (this is the first time I found something new to criticize on one of the sequence posts).

To me, it seems like humility as discussed here is inherently a distortion that, when applied, shifts a conclusion in some way. The reason it can be a good thing is simply that, if a conclusion is flawed, it can shift it into a better place – sort of a counter-measure to existing biases. It is as if I do a bunch of physical measurements and realize that the value I observe is usually a bit too small, so I just add a certain value to my number every time, hoping to move it closer to the correct one.

However, once I fix my measurement tools, that distortion then becomes negative. Similarly, once I actually get my rationality correct, humility will become negative. In this case, there also seems to be a general tool to get your conclusion fixed, which is to use the outside view rather than the inside view. Applying that to the engineer example:

What about the engineer who humbly designs fail-safe mechanisms into machinery, even though he's damn sure the machinery won't fail? This seems like a good kind of humility to me.

If the engineer used the outside view, he would know that humans are fallible and already conclude that he should spend an appropriate amount of time on fail-safe mechanisms. If he then applied humility on top of that, thus downplaying his efforts despite having used the outside view, it would lead him to worry/work on it more than necessary.

Of course, you could reason that in my example, applying the outside view is itself a form of applying humility. My point is simply that even proper humility doesn't seem to cover any new ground. It's not "part of rationality," so to speak. It's simply a useful tool, practically speaking, to apply when you haven't conquered your biases yet. In that sense, I would argue that, ultimately, the correct way to use humility is not at all / automatically without doing anything.

Comment author: Lumifer 03 October 2016 07:08:41PM 2 points [-]

'username2' is a community pseudonymous account that exists to be used by anyone who knows how to access it. You should expect that posts with this username come from different people.

Comment author: siIver 03 October 2016 08:19:42PM 1 point [-]

Ah, I see. Thanks.
