FAWS comments on Best career models for doing research? - Less Wrong

27 Post author: Kaj_Sotala 07 December 2010 04:25PM


Comment author: Nick_Tarleton 08 December 2010 04:54:34PM *  6 points [-]

I don't think Roko should have been requested to delete his comment. I don't think Roko should have conceded to deleting his comment.

Roko was not requested to delete his comment. See this parallel thread. (I would appreciate it if you would edit your comment to note this, so readers who miss this comment don't have a false belief reinforced.) (ETA: thanks)

The correct reaction when someone posts something scandalous like

I was once criticized by a senior singinst member for not being prepared to be tortured or raped for the cause

is not to attempt to erase it, even if that was possible, but to reveal the context.... Now SIAI should save face not by asking a moderator to delete wfg's reposts....

Agreed (and I think the chance of wfg's reposts being deleted is very low, because most people get this). Unfortunately, I know nothing about the alleged event (Roko may be misdescribing it, as he misdescribed my message to him) or its context.

Comment author: Bongo 08 December 2010 05:28:22PM *  1 point [-]

Roko said he was asked. You didn't ask him but maybe someone else did?

Comment author: Nick_Tarleton 08 December 2010 05:59:11PM *  4 points [-]

Roko's reply to me strongly suggested that he interpreted my message as requesting deletion, and that I was the cause of him deleting it. I doubt anyone at SIAI would have explicitly requested deletion.

Comment author: FormallyknownasRoko 08 December 2010 06:09:05PM 5 points [-]

I can confirm that I was not asked to delete the comment but did so voluntarily.

Comment author: Vladimir_Nesov 08 December 2010 07:47:02PM 7 points [-]

I think you are too trigger-happy.

Comment author: Perplexed 08 December 2010 06:10:43PM 2 points [-]

I'm wondering whether you, Nick, have learned anything from this experience - something perhaps about how attempting to hide something is almost always counterproductive?

Of course, Roko contributed here by deleting the message, you didn't create this mess by yourself. But you sure have helped. :)

Comment author: FormallyknownasRoko 08 December 2010 06:12:53PM *  9 points [-]

Well, look, I deleted it of my own accord, but only after being prompted that it was a bad thing to have posted. Can we just drop this? It makes me look like even more of a troublemaker than I already look like, and all I really want to do is finish the efficient charity competition then get on with life outside teh intenetz.

Comment author: XiXiDu 09 December 2010 01:59:05PM 5 points [-]

It makes me look like even more of a troublemaker...

How so? I've just reread some of your comments on your now-deleted post. It looks like you honestly tried to get the SIAI to put safeguards into CEV. Given that the idea has spread to many people by now, don't you think it would be acceptable to discuss the matter before one or more people take it seriously or even consider implementing it deliberately?

Comment author: FormallyknownasRoko 09 December 2010 06:30:21PM 0 points [-]

I don't think it is a good idea to discuss it. I think that the costs outweigh the benefits. The costs are very big. Benefits marginal.

Comment author: XiXiDu 09 December 2010 02:03:26PM 7 points [-]

Will you at least publicly state that you precommit, on behalf of CEV, to not apply negative incentives in this case? (Roko, Jul 24, 2010 1:37 PM)

This is very important. If the SIAI is the organisation to solve the friendly AI problem and implement CEV then it should be subject to public examination, especially if they ask for money.

Comment author: David_Gerard 09 December 2010 02:32:05PM *  6 points [-]

The current evidence that anyone anywhere can implement CEV is two papers in six years that talk about it a bit. There appears to have been nothing else from SIAI and no-one else in philosophy appears interested.

If that's all there is for CEV in six years, and AI is on the order of thirty years away, then (approximately) we're dead.

This is rather disappointing, as if CEV is possible then a non-artificial general intelligence should be able to implement it, at least partially. And we have those. The reason for CEV is (as I understand it) the danger of the AI going FOOM before it cares about humans. However, human general intelligences don't go FOOM but should be able to do the work for CEV. If they know what that work is.

Addendum: I see others have been asking "but what do you actually mean?" for a couple of years now.

Comment author: Nick_Tarleton 09 December 2010 05:34:45PM *  7 points [-]

The current evidence that anyone anywhere can implement CEV is two papers in six years that talk about it a bit. There appears to have been nothing else from SIAI and no-one else in philosophy appears interested.

If that's all there is for CEV in six years, and AI is on the order of thirty years away, then (approximately) we're dead.

This strikes me as a demand for particular proof. SIAI is small (and was much smaller until the last year or two), the set of people engaged in FAI research is smaller, Eliezer has chosen to focus on writing about rationality over research for nearly four years, and FAI is a huge problem, in which any specific subproblem should be expected to be underdeveloped at this early stage. And while I and others expect work to speed up in the near future with Eliezer's attention and better organization, yes, we probably are dead.

The reason for CEV is (as I understand it) the danger of the AI going FOOM before it cares about humans.

Somewhat nitpickingly, this is a reason for FAI in general. CEV is attractive mostly for moving as much work from the designers to the FAI as possible, reducing the potential for uncorrectable error, and being fairer than letting the designers lay out an object-level goal system.

This is rather disappointing, as if CEV is possible then a non-artificial general intelligence should be able to implement it, at least partially.... However, human general intelligences don't go FOOM but should be able to do the work for CEV. If they know what that work is.

This sounds interesting; do you think you could expand?

Comment author: David_Gerard 09 December 2010 05:40:33PM *  1 point [-]

This strikes me as a demand for particular proof.

It wasn't intended to be - more incredulity. I thought this was a really important piece of the puzzle, so expected there'd be something at all by now. I appreciate your point: that this is a ridiculously huge problem and SIAI is ridiculously small.

However, human general intelligences don't go FOOM but should be able to do the work for CEV. If they know what that work is.

This sounds interesting; do you think you could expand?

I meant that, as I understand it, CEV is what is fed to the seed AI. Or the AI does the work to ascertain the CEV. It requires an intelligence to ascertain the CEV, but I'd think the ascertaining process would be reasonably set out once we had an intelligence on hand, artificial or no. Or the process to get to the ascertaining process.

I thought we needed the CEV before the AI goes FOOM, because it's too late after. That implies it doesn't take a superintelligence to work it out.

Thus: CEV would have to be a process that mere human-level intelligences could apply. That would be a useful process to have, and doesn't require first creating an AI.

I must point out that my statements on the subject are based in curiosity, ignorance and extrapolation from what little I do know, and I'm asking (probably annoyingly) for more to work with.

Comment author: Nick_Tarleton 09 December 2010 05:47:30PM *  4 points [-]

"CEV" can (unfortunately) refer to either CEV the process of determining what humans would want if we knew more etc., or the volition of humanity output by running that process. It sounds to me like you're conflating these. The process is part of the seed AI and is needed before it goes FOOM, but the output naturally is neither, and there's no guarantee or demand that the process be capable of being executed by humans.

Comment author: FormallyknownasRoko 09 December 2010 06:19:31PM *  5 points [-]

I have received assurances that SIAI will go to significant efforts not to do nasty things, and I believe them. Private assurances given sincerely are, in my opinion, the best we can hope for, and better than we are likely to get from any other entity involved in this.

Besides, I think that XiXiDu, et al are complaining about the difference between cotton and silk, when what is actually likely to happen is more like a big kick in the teeth from reality. SIAI is imperfect. Yes. Well done. Nothing is perfect. At least cut them a bit of slack.

Comment author: timtyler 09 December 2010 06:32:19PM *  2 points [-]

I have received assurances that SIAI will go to significant efforts not to do nasty things, and I believe them. Private assurances given sincerely are the best we can hope for, and better than we are likely to get from any other entity involved in this.

What?!? Open source code - under a permissive license - is the traditional way to signal that you are not going to run off into the sunset with the fruits of a programming effort. Private assurances are usually worth diddly-squat by comparison.

Comment author: FormallyknownasRoko 09 December 2010 06:34:02PM *  1 point [-]

I think that you don't realize just how bad the situation is. You want that silken sheet. Rude awakening methinks. Also open-source not necessarily good for FAI in any case.

Comment author: XiXiDu 09 December 2010 07:26:58PM 4 points [-]

I think that you don't realize just how bad the situation is.

I don't think that you realize how bad it is. I'd rather have the universe be paperclipped than support the SIAI if that means that I might be tortured for the rest of infinity!

Comment author: timtyler 09 December 2010 08:09:39PM *  -2 points [-]

Also open-source not necessarily good for FAI in any case.

You can have your private assurances - and I will have my open-source software.

Gollum gave his private assurances to Frodo - and we all know how that turned out.

If someone solicits for you to "trust in me", alarm bells should start ringing immediately. If you really think that is "the best we can hope for", then perhaps revisit that.

Comment author: Perplexed 08 December 2010 06:39:52PM 3 points [-]

Ok by me. It is pretty obvious by this point that there is no evil conspiracy involved here. But I think the lesson remains: if you delete something, even if it is just because you regret posting it, you create more confusion than you remove.

Comment author: waitingforgodel 09 December 2010 04:18:40AM 2 points [-]

I think the question you should be asking is less about evil conspiracies, and more about what kind of organization SIAI is -- what would they tell you about, and what would they lie to you about.

Comment author: XiXiDu 09 December 2010 10:54:12AM *  4 points [-]

If the forbidden topic were made public (and people believed it), it would result in a steep rise in donations to the SIAI. That alone is enough to conclude that the SIAI is not trying to hold back something that would discredit it as an organisation concerned with charitable objectives. The censoring of the information was in accordance with their goal of trying to prevent unfriendly artificial intelligence. Making the subject matter public has already harmed some people and could harm people in the future.

Comment author: David_Gerard 09 December 2010 11:04:29AM *  7 points [-]

But the forbidden topic is already public. All the effects that would follow from it being public would already follow. THE HORSE HAS BOLTED. It's entirely unclear to me what pretending it hasn't does for the problem or the credibility of the SIAI.

Comment author: XiXiDu 09 December 2010 11:14:11AM 3 points [-]

It is not as public as you think. If it were, then people like waitingforgodel wouldn't ask about it.

I'm just trying to figure out how to behave without being able to talk about it directly. It's also really interesting on many levels.

Comment author: David_Gerard 09 December 2010 11:44:28AM *  5 points [-]

If the forbidden topic were made public (and people believed it), it would result in a steep rise in donations to the SIAI.

I really don't see how that follows. Will more of the public take it seriously? As I have noted, so far the reaction from people outside SIAI/LW has been "They did WHAT? Are they IDIOTS?"

The censoring of the information was in accordance with their goal of trying to prevent unfriendly artificial intelligence.

That doesn't make it not stupid or not counterproductive. Sincere stupidity is not less stupid than insincere stupidity. Indeed, sincere stupidity is more problematic in my experience as the sincere are less likely to back down, whereas the insincere will more quickly hop to a different idea.

Making the subject matter public has already harmed some people

Citation needed.

and could harm people in the future.

Citation needed.

Comment author: XiXiDu 09 December 2010 01:07:21PM 5 points [-]

Citation needed.

I sent you another PM.