Moss_Piglet comments on Open Thread, September 30 - October 6, 2013 - Less Wrong Discussion

4 Post author: Coscott 30 September 2013 05:18AM

Comments (295)

Comment author: Moss_Piglet 06 October 2013 06:30:29PM 3 points [-]

What heuristic should someone building a new AI use to decide whether it's essential to talk with MIRI about it?

Why would they talk to MIRI about it at all?

The researchers are the ones with the actual AI expertise, having built the damn thing in the first place, and they have the most to lose from any collaboration (the source code of a commercial- or military-grade AI is a very valuable secret). Furthermore, it's far from clear that there is any consensus in the AI community about the likelihood of a technological singularity (especially the subset of scenarios in which an AI "FOOMs") and its associated risks. From their perspective, there's no reason to pay MIRI any attention at all, much less bring them in as consultants.

If you think that MIRI ought to be involved in those decisions, maybe first articulate what benefit the AI researchers would gain from collaboration in terms that would be reasonable to someone who doesn't already accept any of the site dogmas or hold EY in any particular regard.

Comment author: ChristianKl 07 October 2013 01:48:42PM 0 points [-]

If you think that MIRI ought to be involved in those decisions

As far as I understand, it is MIRI's position that they ought to be involved when dangerous things might happen.

maybe first articulate what benefit the AI researchers would gain from collaboration in terms that would be reasonable to someone who doesn't already accept any of the site dogmas or hold EY in any particular regard.

But what goes for someone who does accept the site dogma's in principle but still does some work in AI.

Comment author: Moss_Piglet 07 October 2013 02:39:16PM 1 point [-]

But what goes for someone who does accept the site dogma's in principle but still does some work in AI.

I'm sorry, I didn't get much sleep last night, but I can't parse this sentence at all. Could you rephrase it for me?