Wei_Dai comments on Thoughts on the Singularity Institute (SI) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I find it unfortunate that none of the SIAI research associates have engaged very deeply in this debate, even LessWrong regulars like Nesov and cousin_it. This is part of the reason why I was reluctant to accept (and ultimately declined) when SI invited me to become a research associate: I would feel less free to speak up, both in support of SI and in criticism of it.
I don't think this is SI's fault, but perhaps there are things it could do to lessen this downside of the research associate program. For example, it could explicitly encourage research associates to publicly criticize SI and to disagree with its official positions, and make it clear that no associate will be blamed if someone mistakes their statements for official SI positions or sees them as reflecting badly on SI in general. I also write this comment because just being consciously aware of this bias (in favor of staying silent) may help to counteract it.
Not sure about the others, but as for me, at some point this spring I realized that talking about saving the world makes me really upset and I'm better off avoiding the whole topic.
Would it upset you to talk about why talking about saving the world makes you upset?
It would appear that cousin_it believes we're screwed. It's tempting to argue that this would, overall, be an argument against the effectiveness of the SI program. However, that's probably not true, because we could be 99% screwed and the remaining 1% could depend on SI; this would be a depressing fact, yet still justify supporting the SI.
(Personally, I agree with the poster about the problems with SI, but I'm just laying it out. Responding to Wei_Dai rather than cousin_it because I don't want to upset the latter unnecessarily.)
we could be 99.9% screwed and the remaining 0.1% could be lost by donating to SI, if doing so discourages some other avenue to survival.
Actually, the way I see it, the starkest symptom of SI being diseased is its certainty in intuitions, even though there is no mechanism by which those intuitions could be based on some subconscious but valid reasoning, and despite the abundance of biases affecting them. There's nothing rational about summarizing a list of biases and then proclaiming that they no longer apply to you and that you can trust your intuitions.
Yes.
It's because talking about the singularity and the end of the world in near mode for long stretches of time makes you alieve that it's going to happen, in the same way that it actually happening would make you alieve it, whereas talking about it once, believing it, and then never thinking about it explicitly again wouldn't.
Probably not wise to categorically tell someone the reasons behind their feelings when you're underinformed, and probably not kind to ruminate on the subject when you can expect it to be unpleasant.
Neither wise nor epistemically sound practice.
It is perfectly acceptable to make a reply to a publicly made comment that was itself freely volunteered. If the subject of there being subjects which are unpleasant to discuss is itself terribly unpleasant to discuss then it is cousin_it's prerogative to not bring up the subject on a forum where analysis of the subject is both relevant and potentially useful for others.
I disagree that it is in general unacceptable to post information that you would not like to discuss beyond a certain point.
Without further clarification one could reasonably assume that cousin_it was okay with discussing the subject at one remove, as you suggest; but as it happens, several days before the great-grandparent, cousin_it explicitly stated that it would be upsetting to discuss this topic.
I would not make (and haven't made) the claim as you have stated it.
When that is the case - and if I happened to see it before making a contribution - I would refrain from making any direct reply to the user or from citing him as an example when discussing the subject (all else being equal). I would still discuss the subject itself using the same criteria for posting that I always use. Mind you, I would probably already have refrained from directly discussing the user due to the aforementioned epistemic absurdity and presumptuousness.
What you claimed was that "It is perfectly acceptable to make a reply to a publicly made comment that was itself freely volunteered", and that if someone didn't want to discuss something then they shouldn't have brought it up. In context, however, this was a reply to me saying it was probably unkind to belabor a subject to someone who'd expressed that they find the subject upsetting, which you now seem to be saying you agree with. So what are you taking issue with? I certainly didn't mean to imply that if someone finds a subject uncomfortable to discuss, personally, then that means that others should stop discussing it at all, but this point isn't raised in your great-grandparent comment, and I hope my meaning was clear from the context.
ETA: I have not voted on your comments here.
I have not voted here either. As of now the conversation is all at "0" which is how I would prefer it.
Just wanted to clarify, as at the time your posts had both been downvoted.
I have personally felt the same feelings and I think I have pinned down the reason. I welcome alternative theories, in the spirit of rational debate rather than polite silence.
That you may have discovered the reason you felt this way does not mean that you have discovered the reason another specific person felt a similar way. In fact, they may well already be aware of the causes of their feelings.
Sure. That's why I said: "I welcome alternative theories" (including theories about there being multiple different reasons which may apply to different extents to different people). Do you have one?
Missed the point. Do you understand that you shouldn't have been confident you knew why cousin_it felt a particular way? Beyond that, personally I'm not all that interested in theorizing about the reasons, but if you really want to know you could just ask.
Sorry, I wasn't implying very strong confidence. I would give a probability of, say, 65% that my reason is the principal cause of cousin_it's feelings.
I don't usually engage in potentially protracted debates lately. A very short summary of my disagreement with the object-level argument of Holden's post: (1) I don't see how the idea of a powerful Tool AI can usefully differ from that of an Oracle AI, and it seems that the connotations of "Tool AI" that distinguish it from "Oracle AI" follow from an implicit sense of it not having too much optimization power, so it might be impossible for a Tool AI to be both powerful and to have the characteristics suggested in the post; (1a) the description of Tool AI denies it goals/intentionality and the like, but I don't see what those words mean apart from optimization power, so I don't know how to use them to characterize Tool AI; (2) the potential danger of having a powerful Tool/Oracle AI around is such that aiming at its development doesn't seem like a good idea; (3) I don't see how a Tool/Oracle AI could be helpful enough to break the philosophical part of the FAI problem, since we don't even know which questions to ask.
Since Holden stated that he's probably not going to (interactively) engage with the comments on this post, and writing this up in a self-contained way is a lot of work, I'm going to leave this task to the people who usually write up SingInst outreach papers.