Vladimir_Nesov comments on Best career models for doing research? - Less Wrong

27 Post author: Kaj_Sotala 07 December 2010 04:25PM




Comment author: waitingforgodel 08 December 2010 06:40:49AM -2 points [-]

I think you/we're fine -- just alternate between two tabs when replying, and paste it to RationalWiki if it gets deleted.

Don't let EY chill your free speech -- this is supposed to be a community blog devoted to rationality... not a SIAI blog where comments are deleted whenever convenient.

Besides, it's looking like after the Roko thing they've decided to cut back on such silliness.

Comment author: Vladimir_Nesov 08 December 2010 11:22:39AM *  9 points [-]

Don't let EY chill your free speech -- this is supposed to be a community blog devoted to rationality... not a SIAI blog where comments are deleted whenever convenient.

You are compartmentalizing. What you should be asking yourself is whether the decision is correct (has better expected consequences than the available alternatives), not whether it conflicts with freedom of speech. That the decision conflicts with freedom of speech doesn't necessarily mean that it's incorrect, and if the correct decision conflicts with freedom of speech, or has you kill a thousand children (estimation of its correctness must of course take this consequence into account), it's still correct and should be taken.

(There is only one proper criterion for anyone's actions, goodness of consequences, and if any normally useful heuristic gets in the way, it has to be put down, not because one is opposed to that heuristic, but because in the given situation, it doesn't yield the correct decision.)

(This is a note about a problem in your argument, not an argument for correctness of EY's decision. My argument for correctness of EY's decision is here and here.)

Comment author: wedrifid 08 December 2010 11:52:53AM *  4 points [-]

You are compartmentalizing.

This is possible but by no means assured. It is also possible that he simply didn't choose to write a full evaluation of consequences in this particular comment.

Comment author: xamdam 08 December 2010 08:37:17PM *  2 points [-]

whether the decision is correct (has better expected consequences than the available alternatives), not whether it conflicts with freedom of speech.

Sounds like a good argument for the WikiLeaks dilemma (which is of course complicated by the possibility that the government is lying their asses off about potential harm).

Comment author: Vladimir_Nesov 08 December 2010 08:43:34PM *  0 points [-]

The question with WikiLeaks is about long-term consequences. As I understand it, the (sane) arguments in favor can be summarized as stating that the expected long-term good outweighs the expected short-term harm. It's difficult (for me) to estimate whether that is so.

Comment author: xamdam 09 December 2010 04:05:40PM *  0 points [-]

I suspect it's also difficult for Julian (or pretty much anybody else) to estimate these things; I guess intelligent people will just have to make their best guesses about this type of thing. In this specific case a rationalist would be very wary of "having an agenda", since there is significant opportunity to do harm either way.

Comment author: Vladimir_Golovin 08 December 2010 12:01:04PM 2 points [-]

What you should be asking yourself is whether the decision is correct (has better expected consequences than the available alternatives), not whether it conflicts with freedom of speech.

Upvoted. This just helped me get unstuck on a problem I've been procrastinating on.

Comment author: waitingforgodel 08 December 2010 11:56:28AM 2 points [-]

(There is only one proper criterion for anyone's actions, goodness of consequences, and if any normally useful heuristic gets in the way, it has to be put down, not because one is opposed to that heuristic, but because in the given situation, it doesn't yield the correct decision.)

Very much agree btw

Comment author: red75 08 December 2010 03:21:12PM -1 points [-]

Shouldn't AI researchers precommit to not building AI capable of this kind of acausal self-creation? This would lower the chances of disaster both causally and acausally.

And please, explain how you tell moral heuristics and moral values apart. E.g., which is "don't change the moral values of humans by wireheading"?