Kaj_Sotala comments on Leaving LessWrong for a more rational life - Less Wrong Discussion

33 [deleted] 21 May 2015 07:24PM


Comment author: Kaj_Sotala 23 May 2015 09:11:36PM 2 points [-]

The work MIRI is choosing for itself is self-isolating

AFAIK, part of why the technical agenda contains the questions it does is that they're problems that are of interest to mathematicians and logicians even if those people aren't interested in AI risk. (Though of course, that doesn't mean that AI researchers would be interested in that work, but it's at least still more connected with the academic community than "self-isolating" would imply.)

Comment author: jacob_cannell 24 May 2015 06:26:54AM 1 point [-]

AFAIK, part of why the technical agenda contains the questions it does is that they're problems that are of interest to mathematicians and logicians even if those people aren't interested in AI risk.

This is concerning if true - the goal of the technical agenda should be to solve AI risk, not to appeal to mathematicians and logicians (by, say, making them feel important).

Comment author: Kaj_Sotala 24 May 2015 10:03:15AM 3 points [-]

That sounds like an odd position to me. IMO, getting as many academics from other fields as possible working on the problems is essential if one wants to make maximal progress on them.

Comment author: [deleted] 24 May 2015 05:42:28PM *  1 point [-]

The academic field which is most conspicuously missing is artificial intelligence. I agree with Jacob that it is and should be concerning that the machine intelligence research institute has adopted a technical agenda which is non-inclusive of machine intelligence researchers.

Comment author: Kaj_Sotala 24 May 2015 11:18:20PM *  2 points [-]

I agree with Jacob that it is and should be concerning

That depends on whether you believe that machine intelligence researchers are the people who are currently the most likely to produce valuable progress on the relevant research questions.

One can reasonably disagree with MIRI's current choices about their research program, but I certainly don't think that their choices are concerning in the sense of suggesting irrationality on their part. (Rather, the choices only suggest differing empirical beliefs which are arguable, but still well within the range of non-insane beliefs.)

Comment author: [deleted] 26 May 2015 06:23:12PM 3 points [-]

On the contrary, my core thesis is that AI risk advocates are being irrational. It's implied in the title of the post ;)

Specifically, I think they are arriving at their beliefs via philosophical arguments about the nature of intelligence which are severely lacking in empirical data, and then further shooting themselves in the foot by rationalizing reasons not to pursue empirical tests. Adopting a belief without evidence, and then refusing to test that belief empirically - I'm willing to call a spade a spade: that is most certainly irrational.

Comment author: jacob_cannell 27 May 2015 03:38:07PM 0 points [-]

That's a good summary of your post.

I largely agree, but to be fair we should consider that MIRI started working on AI safety theory long before the technology required for practical experimentation with human-level AGI existed - to do that kind of experimentation, you need to be close to AGI in the first place.

Now that we are getting closer, the argument for prioritizing experiments over theory becomes stronger.

Comment author: jacob_cannell 27 May 2015 03:48:03PM 0 points [-]

There are many types of academics - does your argument extend to French literature experts?

Clearly, if there is a goal behind the technical agenda, then changing the technical agenda to appeal to certain groups detracts from that goal. You could argue that enlisting the help of mathematicians and logicians is so important that it justifies changing the agenda ... but I doubt there is much historical support for such a strategy.

I suspect part of the problem is that the types of researchers/academics which could most help (machine learning, statistics, comp sci types) are far too valuable to industry and thus are too expensive for non-profits such as MIRI.

Comment author: Kaj_Sotala 02 June 2015 01:03:38PM 0 points [-]

There are many types of academics - does your argument extend to French literature experts?

Well, if MIRI happened to know of technical problems they thought were relevant for AI safety and which they thought French literature experts could usefully contribute to, sure.

I'm not suggesting that they would have taken otherwise uninteresting problems and written those up simply because they might be of interest to mathematicians. Rather, my understanding is that they had a set of problems that seemed about equally important, and then from that set, used "which ones could we best recruit outsiders to help with" as an additional criterion. (Though I wasn't there, so anything I say about this is at best a combination of hearsay and informed speculation.)