Perplexed comments on Best career models for doing research? - Less Wrong

Post author: Kaj_Sotala 07 December 2010 04:25PM

Comment author: Perplexed 08 December 2010 06:39:52PM 3 points [-]

OK by me. It is pretty obvious by this point that there is no evil conspiracy involved here. But I think the lesson remains: if you delete something, even if it is just because you regret posting it, you create more confusion than you remove.

Comment author: waitingforgodel 09 December 2010 04:18:40AM 2 points [-]

I think the question you should be asking is less about evil conspiracies, and more about what kind of organization SIAI is -- what would they tell you about, and what would they lie to you about.

Comment author: XiXiDu 09 December 2010 10:54:12AM *  4 points [-]

If the forbidden topic were made public (and people believed it), it would result in a steep rise in donations to the SIAI. That alone is enough to conclude that the SIAI is not trying to hold back something that would discredit it as an organisation concerned with charitable objectives. The censoring of the information was in accordance with their goal of trying to prevent unfriendly artificial intelligence. Making the subject matter public has already harmed some people and could harm people in the future.

Comment author: David_Gerard 09 December 2010 11:04:29AM *  7 points [-]

But the forbidden topic is already public. All the effects that would follow from it being public would already follow. THE HORSE HAS BOLTED. It's entirely unclear to me what pretending it hasn't does for the problem or the credibility of the SIAI.

Comment author: XiXiDu 09 December 2010 11:14:11AM 3 points [-]

It is not as public as you think. If it were, people like waitingforgodel wouldn't ask about it.

I'm just trying to figure out how to behave without being able to talk about it directly. It's also really interesting on many levels.

Comment author: wedrifid 09 December 2010 11:33:07AM *  8 points [-]

It is not as public as you think.

Rather more public than a long forgotten counterfactual discussion collecting dust in the blog's history books would be. :P

Comment author: David_Gerard 09 December 2010 02:01:21PM *  3 points [-]

It is not as public as you think.

Rather more public than a long forgotten counterfactual discussion collecting dust in the blog's history books would be. :P

Precisely. The place to hide a needle is in a large stack of needles.

The choice here was between "bad" and "worse" - a trolley problem, a lose-lose hypothetical - and they appear to have chosen "worse".

Comment author: wedrifid 09 December 2010 03:32:38PM *  6 points [-]

Precisely. The place to hide a needle is in a large stack of needles.

I prefer to outsource my needle-keeping security to Clippy in exchange for allowing certain 'bending' liberties from time to time. :)

Comment author: David_Gerard 09 December 2010 03:38:09PM *  4 points [-]

Upvoted for LOL value. We'll tell Clippy the terrible, no good, very bad idea with reasons as to why this would hamper the production of paperclips.

"Hi! I see you've accidentally the whole uFAI! Would you like help turning it into paperclips?"

Comment author: wedrifid 09 December 2010 03:44:19PM 3 points [-]

"Hi! I see you've accidentally the whole uFAI! Would you like help turning it into paperclips?"

Brilliant.

Comment author: TheOtherDave 09 December 2010 04:20:29PM 2 points [-]

Of course, if Clippy were clever he would then offer to sell SIAI a commitment to never release the UFAI in exchange for a commitment to produce a fixed number of paperclips per year, in perpetuity.

Admittedly, his mastery of human signaling probably isn't nuanced enough to prevent that from sounding like blackmail.

Comment deleted 09 December 2010 11:19:57AM *  [-]
Comment author: XiXiDu 09 December 2010 11:26:37AM 2 points [-]

Yeah, I thought about that as well. Trying to suppress it made it much more popular and gave it a lot of credibility. If they decided to act in such a way deliberately, that would be fascinating. But that sounds like one crazy conspiracy theory to me.

Comment author: David_Gerard 09 December 2010 11:33:36AM *  7 points [-]

I don't think it gave it a lot of credibility. Everyone I can think of who isn't an AI researcher or LW regular who's read it has immediately thought "that's ridiculous. You're seriously concerned about this as a likely consequence? Have you even heard of the Old Testament, or Harlan Ellison? Do you think your AI will avoid reading either?" Note, not the idea itself, but that SIAI took the idea so seriously it suppressed it and keeps trying to. This does not make SIAI look more credible, but less, because it looks strange.

These are the people running a site about refining the art of rationality; that makes discussion of this apparent spectacular multi-level failure directly on-topic. It's also become a defining moment in the history of LessWrong and will be in every history of the site forever. Perhaps there's some Xanatos retcon by which this can be made to work.

Comment author: XiXiDu 09 December 2010 11:44:30AM *  3 points [-]

I just have a hard time believing that people who write essays like this could be so wrong. That's why I allow for the possibility that they are right and that I simply do not understand the issue. Can you rule out that possibility? And if that were the case, what would it mean to spread it even further? You see, that's my problem.

Comment author: TheOtherDave 09 December 2010 02:47:28PM 3 points [-]

There is no problem.

If you observe an action (A) that you judge so absurd that it casts doubt on the agent's (G) rationality, then your confidence (C1) in G's rationality should decrease. If C1 was previously high, then your confidence (C2) in your judgment of A's absurdity should decrease.

So if someone you strongly trust to be rational does something you strongly suspect to be absurd, the end result ought to be that your trust and your suspicions are both weakened. Then you can ask yourself whether, after that modification, you still trust G's rationality enough to believe that there exist good reasons for A.
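
To make that concrete, here is a toy numeric sketch in Python (all the numbers are invented; only the direction of the update matters):

    # Toy joint update: "rational" = G is rational, "absurd" = A is absurd.
    # Priors (C1, C2) and likelihoods below are purely illustrative.
    p_rational, p_absurd = 0.9, 0.9

    # Assumed P(G performs A | rationality, absurdity):
    likelihood = {
        (True, True): 0.05,   # a rational agent rarely does a truly absurd thing
        (True, False): 0.9,
        (False, True): 0.5,   # an irrational agent does it either way
        (False, False): 0.5,
    }

    posterior = {}
    for rational in (True, False):
        for absurd in (True, False):
            prior = ((p_rational if rational else 1 - p_rational)
                     * (p_absurd if absurd else 1 - p_absurd))
            posterior[(rational, absurd)] = prior * likelihood[(rational, absurd)]

    total = sum(posterior.values())
    p_rational_after = sum(v for (r, a), v in posterior.items() if r) / total
    p_absurd_after = sum(v for (r, a), v in posterior.items() if a) / total
    print(p_rational_after, p_absurd_after)   # ~0.71 and ~0.50: both drop from 0.9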

The only reason it feels like a problem is that human brains aren't good at this. It sometimes helps to write it all down on paper, but mostly it's just something to practice until it gets easier.

In the meantime, what I would recommend is giving some careful thought to why you trust G, and why you think A is absurd, independent of each other. That is: what's your evidence? Are C1 and C2 at all calibrated to observed events?

If you conclude at the end of it that one or the other is unjustified, your problem dissolves and you know which way to jump. No problem.

If you conclude that they are both justified, then your best bet is probably to assume the existence of either evidence or arguments that you're unaware of (more or less as you're doing now)... not because "you can't rule out the possibility" but because it seems more likely than the alternatives. Again, no problem.

And the fact that other people don't end up in the same place simply reflects the fact that their prior confidence was different, presumably because their experiences were different and they don't have perfect trust in everyone's perfect Bayesianness. Again, no problem... you simply disagree.

Working out where you stand can be a useful exercise. In my own experience, I find it significantly diminishes my impulse to argue the point past where anything new is being said, which generally makes me happier.

This comment is also relevant.

Comment author: David_Gerard 09 December 2010 11:49:17AM *  6 points [-]

Indeed. On the other hand, humans frequently use intelligence to do much stupider things than they could have done without that degree of intelligence. Previous brilliance means that future strange ideas should be taken seriously, but not that the future ideas must be even more brilliant because they look so stupid. Ray Kurzweil is an excellent example - an undoubted genius of real achievements, but also now undoubtedly completely off the rails and well into pseudoscience. (Alkaline water!)

Comment author: timtyler 09 December 2010 06:55:59PM *  0 points [-]

What issue? The forbidden one? You are not even supposed to be thinking about that! For penance, go and say 30 "Hail Yudkowskys"!

Comment author: shokwave 09 December 2010 11:57:39AM *  1 point [-]

Everyone I can think of who isn't an AI researcher or LW regular who's read it has immediately thought "that's ridiculous. You're seriously concerned about this as a likely consequence?"

You could make a similar comment about cryonics. "Everyone I can think of who isn't a cryonics project member or LW regular who's read [hypothetical cryonics proposal] has immediately thought "that's ridiculous. You're seriously considering this possibility?". "People think it's ridiculous" is not always a good argument against it.

Consider that whoever made the decision probably made it according to consequentialist ethics; the consequences of people taking the idea seriously would be worse than the consequences of censorship. As many consequentialist decisions tend to, it failed to take into account the full consequences of breaking with deontological ethics ("no censorship" is a pretty strong injunction). But LessWrong is maybe the one place on the internet you could expect not to suffer for breaking from deontological ethics.

This does not make SIAI look more credible, but less, because it looks strange.

Again, strange from a deontologist's perspective. If you're a deontologist, okay, your objection to the practice has been noted.

The perfect Bayesian consequentialist, however, would look at the decision, estimate the chances of the decision-maker being irrational (their credibility), and promptly revise their probability estimate of 'bad idea is actually dangerous' upwards, enough to approve of censorship. Nothing strange there. You appear to be downgrading SIAI's credibility because it takes an idea seriously that you don't - I don't think you have enough evidence to conclude that they are reasoning imperfectly.

Comment author: David_Gerard 09 December 2010 02:03:47PM *  8 points [-]

I'm speaking of convincing people who don't already agree with them. SIAI and LW look silly now in ways they didn't before.

There may be, as you posit, a good and convincing explanation for the apparently really stupid behaviour. However, to convince said outsiders (who are the ones with the currencies of money and attention), the explanation has to actually be made to said outsiders in an examinable step-by-step fashion. Otherwise they're well within their rights, in reasonable discussion, not to be convinced. There are a lot of cranks vying for attention and money, and an organisation has to clearly show itself as better than that to avoid losing.

Comment author: Vaniver 09 December 2010 04:41:54PM 2 points [-]

The perfect Bayesian consequentialist, however, would look at the decision, estimate the chances of the decision-maker being irrational (their credibility), and promptly revise their probability estimate of 'bad idea is actually dangerous' upwards, enough to approve of censorship.

There are two things going on here, and you're missing the other, important one. When a Bayesian consequentialist sees someone break a rule, they perform two operations: reduce the credibility of the person breaking the rule by the damage done, and increase the probability that the rule-breaking was justified by the credibility of the rule-breaker. It's generally a good idea to do the credibility-reduction first.
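
A rough sketch of why the order matters (every number here is made up, just to illustrate the direction):

    # Toy model: was the rule-break justified, given the breaker's credibility?
    def p_justified(credibility, prior=0.2):
        # Assume a justified break happens readily, while an unjustified one
        # is rarer the more credible (rule-abiding) the agent is.
        p_break_if_justified = 0.9
        p_break_if_unjustified = 0.9 - 0.8 * credibility
        num = p_break_if_justified * prior
        return num / (num + p_break_if_unjustified * (1 - prior))

    credibility, damage = 0.9, 0.3

    # Credibility-reduction first: dock credibility by the damage done,
    # then ask how likely the break was justified.
    reduced = credibility * (1 - damage)   # 0.63
    print(p_justified(reduced))            # ~0.36
    # Skipping the reduction and judging with the undocked credibility
    # gives ~0.56 -- it overweights "they must have had a good reason".
    print(p_justified(credibility))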

Keep in mind that credibility is constructed out of actions (and, to a lesser extent, words), and that people make mistakes. This sounds like captainitis, not wisdom.

Comment author: timtyler 09 December 2010 06:53:44PM 2 points [-]

It was left up for ages before the censorship. The Streisand effect is well known. Yes, this is a crazy kind of marketing stunt - but also one that shows Yu'El's compassion for the tender and unprotected minds of his flock - his power over the other participants - and one that adds to the community folklore.

Comment author: David_Gerard 09 December 2010 11:44:28AM *  5 points [-]

If the forbidden topic were made public (and people believed it), it would result in a steep rise in donations to the SIAI.

I really don't see how that follows. Will more of the public take it seriously? As I have noted, so far the reaction from people outside SIAI/LW has been "They did WHAT? Are they IDIOTS?"

The censoring of the information was in accordance with their goal of trying to prevent unfriendly artificial intelligence.

That doesn't make it not stupid or not counterproductive. Sincere stupidity is not less stupid than insincere stupidity. Indeed, sincere stupidity is more problematic in my experience as the sincere are less likely to back down, whereas the insincere will more quickly hop to a different idea.

Making the subject matter public has already harmed some people

Citation needed.

and could harm people in the future.

Citation needed.

Comment author: XiXiDu 09 December 2010 01:07:21PM 5 points [-]

Citation needed.

I sent you another PM.

Comment author: David_Gerard 09 December 2010 01:49:36PM *  4 points [-]

Hmm, okay. But that, I suggest, appears to have been a case of reasoning oneself stupid.

It does, of course, account for SIAI continuing to attempt to secure the stable doors after the horse has been dancing around in a field for several months taunting them with "COME ON IF YOU THINK YOU'RE HARD ENOUGH."

(I upvoted XiXiDu's comment here because he did actually supply a substantive response in PM, well deserving of a vote, and I felt this should be encouraged by reward.)