Nick_Tarleton comments on Best career models for doing research? - Less Wrong

27 Post author: Kaj_Sotala 07 December 2010 04:25PM


Comment author: Nick_Tarleton 08 December 2010 05:59:11PM *  4 points [-]

Roko's reply to me strongly suggested that he interpreted my message as requesting deletion, and that I was the cause of him deleting it. I doubt anyone at SIAI would have explicitly requested deletion.

Comment author: FormallyknownasRoko 08 December 2010 06:09:05PM 5 points [-]

I can confirm that I was not asked to delete the comment but did so voluntarily.

Comment author: Vladimir_Nesov 08 December 2010 07:47:02PM 7 points [-]

I think you are too trigger-happy.

Comment author: Perplexed 08 December 2010 06:10:43PM 2 points [-]

I'm wondering whether you, Nick, have learned anything from this experience - something perhaps about how attempting to hide something is almost always counterproductive?

Of course, Roko contributed here by deleting the message, you didn't create this mess by yourself. But you sure have helped. :)

Comment author: FormallyknownasRoko 08 December 2010 06:12:53PM *  9 points [-]

Well, look, I deleted it of my own accord, but only after being prompted that it was a bad thing to have posted. Can we just drop this? It makes me look like even more of a troublemaker than I already look like, and all I really want to do is finish the efficient charity competition then get on with life outside teh intenetz.

Comment author: XiXiDu 09 December 2010 01:59:05PM 5 points [-]

It makes me look like even more of a troublemaker...

How so? I've just reread some of your comments on your now-deleted post. It looks like you honestly tried to get the SIAI to put safeguards into CEV. Given that the idea has spread to many people by now, don't you think it would be acceptable to discuss the matter before one or more people take it seriously or even consider implementing it deliberately?

Comment author: FormallyknownasRoko 09 December 2010 06:30:21PM 0 points [-]

I don't think it is a good idea to discuss it. I think that the costs outweigh the benefits. The costs are very big; the benefits marginal.

Comment author: XiXiDu 09 December 2010 02:03:26PM 7 points [-]

Will you at least publicly state that you precommit, on behalf of CEV, to not apply negative incentives in this case? (Roko, Jul 24, 2010 1:37 PM)

This is very important. If the SIAI is the organisation to solve the friendly AI problem and implement CEV then it should be subject to public examination, especially if they ask for money.

Comment author: David_Gerard 09 December 2010 02:32:05PM *  6 points [-]

The current evidence that anyone anywhere can implement CEV is two papers in six years that talk about it a bit. There appears to have been nothing else from SIAI and no-one else in philosophy appears interested.

If that's all there is for CEV in six years, and AI is on the order of thirty years away, then (approximately) we're dead.

This is rather disappointing: if CEV is possible, then a non-artificial general intelligence should be able to implement it, at least partially. And we have those. The reason for CEV is (as I understand it) the danger of the AI going FOOM before it cares about humans. However, human general intelligences don't go FOOM but should be able to do the work for CEV. If they know what that work is.

Addendum: I see others have been asking "but what do you actually mean?" for a couple of years now.

Comment author: Nick_Tarleton 09 December 2010 05:34:45PM *  7 points [-]

The current evidence that anyone anywhere can implement CEV is two papers in six years that talk about it a bit. There appears to have been nothing else from SIAI and no-one else in philosophy appears interested.

If that's all there is for CEV in six years, and AI is on the order of thirty years away, then (approximately) we're dead.

This strikes me as a demand for particular proof. SIAI is small (and was much smaller until the last year or two), the set of people engaged in FAI research is smaller, Eliezer has chosen to focus on writing about rationality over research for nearly four years, and FAI is a huge problem, in which any specific subproblem should be expected to be underdeveloped at this early stage. And while I and others expect work to speed up in the near future with Eliezer's attention and better organization, yes, we probably are dead.

The reason for CEV is (as I understand it) the danger of the AI going FOOM before it cares about humans.

Somewhat nitpickingly, this is a reason for FAI in general. CEV is attractive mostly for moving as much work from the designers to the FAI as possible, reducing the potential for uncorrectable error, and being fairer than letting the designers lay out an object-level goal system.

This is rather disappointing: if CEV is possible, then a non-artificial general intelligence should be able to implement it, at least partially.... However, human general intelligences don't go FOOM but should be able to do the work for CEV. If they know what that work is.

This sounds interesting; do you think you could expand?

Comment author: David_Gerard 09 December 2010 05:40:33PM *  1 point [-]

This strikes me as a demand for particular proof.

It wasn't intended to be - more incredulity. I thought this was a really important piece of the puzzle, so expected there'd be something at all by now. I appreciate your point: that this is a ridiculously huge problem and SIAI is ridiculously small.

However, human general intelligences don't go FOOM but should be able to do the work for CEV. If they know what that work is.

This sounds interesting; do you think you could expand?

I meant that, as I understand it, CEV is what is fed to the seed AI. Or the AI does the work to ascertain the CEV. It requires an intelligence to ascertain the CEV, but I'd think the ascertaining process would be reasonably set out once we had an intelligence on hand, artificial or no. Or the process to get to the ascertaining process.

I thought we needed the CEV before the AI goes FOOM, because it's too late after. That implies it doesn't take a superintelligence to work it out.

Thus: CEV would have to be a process that mere human-level intelligences could apply. That would be a useful process to have, and doesn't require first creating an AI.

I must point out that my statements on the subject are based in curiosity, ignorance and extrapolation from what little I do know, and I'm asking (probably annoyingly) for more to work with.

Comment author: Nick_Tarleton 09 December 2010 05:47:30PM *  4 points [-]

"CEV" can (unfortunately) refer to either CEV the process of determining what humans would want if we knew more etc., or the volition of humanity output by running that process. It sounds to me like you're conflating these. The process is part of the seed AI and is needed before it goes FOOM, but the output naturally is neither, and there's no guarantee or demand that the process be capable of being executed by humans.

Comment author: David_Gerard 09 December 2010 06:00:08PM *  1 point [-]

OK. I still don't understand it, but I now feel my lack of understanding more clearly. Thank you!

(I suppose "what do people really want?" is a large philosophical question, not just undefined but subtle in its lack of definition.)

Comment author: FormallyknownasRoko 09 December 2010 06:19:31PM *  5 points [-]

I have received assurances that SIAI will go to significant efforts not to do nasty things, and I believe them. Private assurances given sincerely are, in my opinion, the best we can hope for, and better than we are likely to get from any other entity involved in this.

Besides, I think that XiXiDu, et al are complaining about the difference between cotton and silk, when what is actually likely to happen is more like a big kick in the teeth from reality. SIAI is imperfect. Yes. Well done. Nothing is perfect. At least cut them a bit of slack.

Comment author: timtyler 09 December 2010 06:32:19PM *  2 points [-]

I have received assurances that SIAI will go to significant efforts not to do nasty things, and I believe them. Private assurances given sincerely are the best we can hope for, and better than we are likely to get from any other entity involved in this.

What?!? Open source code - under a permissive license - is the traditional way to signal that you are not going to run off into the sunset with the fruits of a programming effort. Private assurances are usually worth diddly-squat by comparison.

Comment author: FormallyknownasRoko 09 December 2010 06:34:02PM *  1 point [-]

I think that you don't realize just how bad the situation is. You want that silken sheet. Rude awakening methinks. Also, open source is not necessarily good for FAI in any case.

Comment author: XiXiDu 09 December 2010 07:26:58PM 4 points [-]

I think that you don't realize just how bad the situation is.

I don't think that you realize how bad it is. I'd rather have the universe be paperclipped than support the SIAI if that means that I might be tortured for the rest of infinity!

Comment author: Eliezer_Yudkowsky 09 December 2010 07:44:55PM 15 points [-]

To the best of my knowledge, SIAI has not planned to do anything, under any circumstances, which would increase the probability of you or anyone else being tortured for the rest of infinity.

Supporting SIAI should not, to the best of my knowledge, increase the probability of you or anyone else being tortured for the rest of infinity.

Thank you.

Comment author: XiXiDu 09 December 2010 07:52:43PM *  5 points [-]

But imagine there was a person a level above yours who went to create some safeguards for an AGI. That person would tell you that you can be sure the safeguards they plan to implement will benefit everyone. Are you just going to believe that? Wouldn't you be worried and demand that their project be supervised?

You are in a really powerful position because you are working for an organisation that might influence the future of the universe. Is it really weird to be skeptical and ask for reassurance of their objectives?

Comment author: [deleted] 10 December 2010 08:43:08PM 2 points [-]

Currently, there are no entities in physical existence which, to my knowledge, have the ability to torture anyone for the rest of eternity.

You intend to build an entity which would have that ability (or if not for infinity, for a googolplex of subjective years).

You intend to give it a morality based on the massed wishes of humanity - and I have noticed that other people don't always have my best interests at heart. It is possible - though unlikely - that I might so irritate the rest of humanity that they wish me to be tortured forever.

Therefore, you are, by your own statements, raising the risk of my infinite torture from zero to a tiny non-zero probability. It may well be that you are also raising my expected reward enough for that to be more than counterbalanced, but that's not what you're saying - any support for SIAI will, unless I'm completely misunderstanding, raise the probability of infinite torture for some individuals.

Comment author: timtyler 09 December 2010 08:09:39PM *  -2 points [-]

Also, open source is not necessarily good for FAI in any case.

You can have your private assurances - and I will have my open-source software.

Gollum gave his private assurances to Frodo - and we all know how that turned out.

If someone solicits for you to "trust in me", alarm bells should start ringing immediately. If you really think that is "the best we can hope for", then perhaps revisit that.

Comment author: wedrifid 09 December 2010 08:40:58PM 9 points [-]

Gollum gave his private assurances to Frodo - and we all know how that turned out.

Well I'm convinced. Frodo should definitely have worked out a way to clone the ring and made sure the information was available to all of Middle Earth. You can never have too many potential Ring-Wraiths.

Comment author: [deleted] 09 December 2010 08:44:47PM 2 points [-]

Suddenly I have a mental image of "The Lord of the Rings: The Methods of Rationality."

Comment author: jimrandomh 09 December 2010 08:29:13PM 4 points [-]

Open source AGI is not a good thing. In fact, it would be a disastrously bad thing. Giving people the source code doesn't just let them inspect it for errors, it also lets them launch it themselves. If you get an AGI close to ready for launch, then sharing its source code means that instead of having one party to decide whether there are enough safety measures ready to launch, you have many parties individually deciding whether to launch it themselves, possibly modifying its utility function to suit their own whim, and the hastiest party's AGI wins.

Ideally, you'd want to let people study the code, but only trustworthy people, and in a controlled environment where they can't take the source code with them. But even that is risky, since revealing that you have an AGI makes you a target for espionage and attack by parties who shouldn't be trusted with humanity's future.

Comment author: timtyler 09 December 2010 08:38:44PM *  0 points [-]

Actually, it reduces the chance of any party drawing massively ahead of the rest. It acts as an equalising force, by power-sharing. One of the main things we want to avoid is a disreputable organisation using machine intelligence to gain an advantage and sustaining it over a long period of time; using open-source software helps to defend against that possibility.

Machine intelligence will be a race - but it will be a race, whether participants share code or not.

Having said all that, machine intelligence protected by patents with secret source code on a server somewhere does seem like a reasonably probable outcome.

Comment author: Perplexed 08 December 2010 06:39:52PM 3 points [-]

Ok by me. It is pretty obvious by this point that there is no evil conspiracy involved here. But I think the lesson remains: if you delete something, even if it is just because you regret posting it, you create more confusion than you remove.

Comment author: waitingforgodel 09 December 2010 04:18:40AM 2 points [-]

I think the question you should be asking is less about evil conspiracies, and more about what kind of organization SIAI is -- what would they tell you about, and what would they lie to you about.

Comment author: XiXiDu 09 December 2010 10:54:12AM *  4 points [-]

If the forbidden topic were made public (and people believed it), it would result in a steep rise in donations to the SIAI. That alone is enough to conclude that the SIAI is not trying to hold back something that would discredit it as an organisation concerned with charitable objectives. The censoring of the information was in accordance with their goal of trying to prevent unfriendly artificial intelligence. Making the subject matter public has already harmed some people and could harm people in future.

Comment author: David_Gerard 09 December 2010 11:04:29AM *  7 points [-]

But the forbidden topic is already public. All the effects that would follow from it being public would already follow. THE HORSE HAS BOLTED. It's entirely unclear to me what pretending it hasn't does for the problem or the credibility of the SIAI.

Comment author: XiXiDu 09 December 2010 11:14:11AM 3 points [-]

It is not as public as you think. If it was then people like waitingforgodel wouldn't ask about it.

I'm just trying to figure out how to behave without being able to talk about it directly. It's also really interesting on many levels.

Comment author: wedrifid 09 December 2010 11:33:07AM *  8 points [-]

It is not as public as you think.

Rather more public than a long forgotten counterfactual discussion collecting dust in the blog's history books would be. :P

Comment author: David_Gerard 09 December 2010 02:01:21PM *  3 points [-]

It is not as public as you think.

Rather more public than a long forgotten counterfactual discussion collecting dust in the blog's history books would be. :P

Precisely. The place to hide a needle is in a large stack of needles.

The choice here was between "bad" and "worse" - a trolley problem, a lose-lose hypothetical - and they appear to have chosen "worse".

Comment deleted 09 December 2010 11:19:57AM *  [-]
Comment author: XiXiDu 09 December 2010 11:26:37AM 2 points [-]

Yeah, I thought about that as well. Trying to suppress it made it much more popular and gave it a lot of credibility. If they decided to act in such a way deliberately, that would be fascinating. But that sounds like one crazy conspiracy theory to me.

Comment author: David_Gerard 09 December 2010 11:44:28AM *  5 points [-]

If the forbidden topic were made public (and people believed it), it would result in a steep rise in donations to the SIAI.

I really don't see how that follows. Will more of the public take it seriously? As I have noted, so far the reaction from people outside SIAI/LW has been "They did WHAT? Are they IDIOTS?"

The censoring of the information was in accordance with their goal of trying to prevent unfriendly artificial intelligence.

That doesn't make it not stupid or not counterproductive. Sincere stupidity is not less stupid than insincere stupidity. Indeed, sincere stupidity is more problematic in my experience as the sincere are less likely to back down, whereas the insincere will more quickly hop to a different idea.

Making the subject matter public has already harmed some people

Citation needed.

and could harm people in future.

Citation needed.

Comment author: XiXiDu 09 December 2010 01:07:21PM 5 points [-]

Citation needed.

I sent you another PM.

Comment author: David_Gerard 09 December 2010 01:49:36PM *  4 points [-]

Hmm, okay. But that, I suggest, appears to have been a case of reasoning oneself stupid.

It does, of course, account for SIAI continuing to attempt to secure the stable doors after the horse has been dancing around in a field for several months taunting them with "COME ON IF YOU THINK YOU'RE HARD ENOUGH."

(I upvoted XiXiDu's comment here because he did actually supply a substantive response in PM, well deserving of a vote, and I felt this should be encouraged by reward.)