Comment author: username2 11 October 2016 07:28:09PM *  1 point [-]

Nonsense. I believe my life and the lives of people close to me are more important than someone starving in a place whose name I can't pronounce. I just don't assign the same weight to all people. That is perfectly consistent with utilitarianism.

Comment author: siIver 11 October 2016 07:40:04PM *  0 points [-]

Er... no. Utilitarianism prohibits that exact thing by design. That's one of its most important aspects.

Read the definition. This is unambiguous.

Comment author: MrMind 11 October 2016 01:06:33PM 1 point [-]

Is there a good rebuttal to why we don't donate 100% of our income to charity? I mean, as an explanation, tribalism / near-far thinking is okay, but is there a good post-hoc justification?

Comment author: siIver 11 October 2016 03:50:10PM *  0 points [-]

100% doesn't work because then you starve. If I reformulate your question as "is there any rebuttal to why we don't donate way more to charity than we currently do," then the answer depends on your belief system. If you are a utilitarian, the answer is a definitive no. You should spend way more on charity.

Comment author: username2 06 October 2016 09:31:39PM 1 point [-]

I think this article is something that people outside of this community really ought to read.

Interesting. Why people outside of this community? I find it is actually the LW and EA communities that place an exorbitant amount of emphasis on empathy. Most of those I know outside of the rationalist community understand the healthy tradeoff between charitable action and looking out for oneself.

Comment author: siIver 07 October 2016 12:19:11AM 0 points [-]

My observation is that smart people generally try to live more ethically, but usually have skewed priorities; e.g. they'll try to support the artists they like and to earn their money decently, when they'd fare better worrying less about all that and just donating a bit to the right place every month. Quantitative utility arguments are usually met with rejection.

LWers, on the other hand, seem to be leaning in that direction anyway. Though I'm fairly new to the community, so I could be wrong.

I wouldn't show it to people who lack a "solid" moral base in the first place. They probably fare better keeping every shred of empathy they have (thinking of how much discrimination still exists today).

Comment author: siIver 06 October 2016 02:35:05PM *  3 points [-]

I think this is the first article in a long time that straight up changed my opinion in a significant way. I always considered empathy a universally good thing – in all forms. In fact I held it as one of the highest values. But the logic of the article is hard to argue with.

I still tentatively disagree that it [emotional empathy] is inherently bad. Following what I read, I'd say it's harmful because it's overvalued/misunderstood. The solution would be to recognize that it's an egoistical thing – as I'm writing this, I can confirm that I think this now – whereas cognitive empathy is the selfless thing.

Doing more self-analysis, I think I already understood this on some level, but I was holding the concept of empathy in such high regard that I wasn't able to consciously criticize it.

I think this article is something that people outside of this community really ought to read.

Comment author: Houshalter 05 October 2016 08:43:00PM 2 points [-]

That's not really surprising. Google employs by far the most AI researchers, and they have general AI as an actual goal. DeepMind in particular has been pushing reinforcement learning and general game playing, which is the first step towards building AI agents that optimize utility functions in complex real-world environments, instead of just classifying images or text.

Which specific corporation is winning at the moment isn't that relevant. Facebook isn't far behind and has more of a focus on language learning, memory, and reasoning, which are possibly the critical pieces for reaching general intelligence. Microsoft just made headlines for founding a new AI division. Amazon just announced a big competition for the best conversational AIs. Almost every major tech company is trying to get in on this game.

I don't think we are that far away from AGI.

Comment author: siIver 06 October 2016 12:06:05AM 2 points [-]

Is there a relevant difference in how much the eventual winner will incorporate AI safety measures? Or do you think it is merely an issue of actually solving the [friendly AI] problem, and that once it is solved, it will surely be used?

Comment author: siIver 05 October 2016 08:50:48AM *  0 points [-]

I actually don't quite agree (this is the first time I found something new to criticize on one of the sequence posts).

To me, it seems like humility as discussed here is inherently a distortion that, when applied, shifts a conclusion in some way. The reason it can be a good thing is simply that, if a conclusion is flawed, the shift can move it to a better place – a sort of counter-measure to existing biases. It is as if I did a bunch of physical measurements and realized that the value I observe is usually a bit too small, so I add a certain value to my number every time, hoping to move it closer to the correct one.

However, once I fix my measurement tools, that distortion becomes a negative. Similarly, once I actually get my rationality right, humility becomes a negative. In this case, there also seems to be a general tool for fixing your conclusion, which is to use the outside view rather than the inside view. Applying that to the engineer example:

What about the engineer who humbly designs fail-safe mechanisms into machinery, even though he's damn sure the machinery won't fail? This seems like a good kind of humility to me.

If the engineer used the outside view, he would know that humans are fallible and would already conclude that he should spend an appropriate amount of time on fail-safe mechanisms. If he then applied humility on top of that, thus downplaying his efforts despite having used the outside view, it would lead him to worry/work on it more than necessary.

Of course, you could reason that in my example, applying the outside view is itself a form of applying humility. My point is simply that even proper humility doesn't seem to cover any new ground. It's not "part of rationality," so to speak. It's simply a useful tool, practically speaking, to apply while you haven't yet conquered your biases. In that sense, I would argue that, ultimately, the correct way to apply humility is not at all / automatically, without doing anything.

Comment author: Lumifer 03 October 2016 07:08:41PM 2 points [-]

'username2' is a community pseudonymous account that exists to be used by anyone who knows how to access it. You should expect that posts with this username come from different people.

Comment author: siIver 03 October 2016 08:19:42PM 1 point [-]

Ah, I see. Thanks.

Comment author: username2 03 October 2016 12:08:16PM 4 points [-]

How do you deal with embarrassment of having to learn as an adult things that most people learn in their childhood? I'm talking about things that you can't learn alone in private, such as swimming, riding a bicycle and things like that.

Comment author: siIver 03 October 2016 08:16:16PM *  2 points [-]

To also offer help; this might seem incredibly obvious, but a lot of people still don't do it: be conscious about the problem and actively make plans addressing it.

E.g. if you know ahead of time that a situation will come up where you'd feel embarrassed, calculate beforehand what you'd have to do to avoid it entirely. If you decide that you have to go through with it, maybe have a plan to minimize the embarrassment somehow (this depends on the context). None of that will solve the issue, but actively looking for loopholes and such rather than going into situations blindly could reduce the harm.

You could also consider ways to solve some instances of the problem permanently while dodging the embarrassment, e.g. actively try to learn how to ride a bike, either on your own or with a person who is willing and with whom you'd feel comfortable, if such a person exists.

Comment author: username2 03 October 2016 12:27:40PM 0 points [-]

Please forgive the snarky response but... Don't be embarrassed. Embarrassment is in your head only.

Comment author: siIver 03 October 2016 06:47:33PM 1 point [-]

Every emotion is in your head only, so that's not useful advice. The same argument could be made for virtually every form of social insecurity.

If I may ask -- you are the same registered user who made the initial comment. Why reply to yourself? Or are you multiple people using the same account?

Comment author: DataPacRat 19 September 2016 06:35:24PM 10 points [-]

As a cryonicist, I'm drafting out a text describing my revival preferences and requests, to be stored along with my other paperwork. (Oddly enough, this isn't a standard practice.) The current draft is here. I'm currently seeking suggestions for improvement, and a lot of the people around here seem to have good heads on their shoulders, so I thought I'd ask for comments here. Any thoughts?

Comment author: siIver 20 September 2016 03:49:27PM 1 point [-]

Great idea. I will probably do a similar thing myself at some point, and it will probably look similar to yours.

The only thing I see that might be missing is advice for a scenario in which the odds of revival go down over time, creating pressure to revive you sooner rather than later. In that case your wishes may contradict each other (since later revival could still increase the odds of living indefinitely). That seems far-fetched but not entirely impossible.

Other than that, I'd say be more specific to avoid any possible misinterpretation. You never know how much bureaucracy will be involved in the process when it finally happens.
