
Punoxysm comments on Steelmanning MIRI critics - Less Wrong Discussion

Post author: fowlertm 19 August 2014 03:14AM




Comment author: Punoxysm 19 August 2014 06:32:13PM *  4 points [-]

All good points.

I'd make #4 the primary point. Devising theoretical safety measures far ahead of the development of the technology to be made safe is very difficult and has no real precedent in previous engineering efforts. In addition, MIRI's specific program isn't heading in a clear direction and hasn't yet gotten much traction in the mainstream AI research community.

Edit: Also, hacks and heuristics are so vital to human cognition in every domain that it seems clear that general computation models like AIXI don't show the roadmap to AI, despite their theoretical niceness.

Comment author: Benito 20 August 2014 02:45:23PM 1 point [-]

For a great if imprecise response to #4, you can just read aloud the single-page story at the beginning of Bostrom's book 'Superintelligence'. For a more precise response, you can make the analogy explicit.

Comment author: whpearson 23 August 2014 01:27:43PM 1 point [-]

And if they come back with a snake egg instead? Surely we need some idea of the nature of AI, and thus of how exactly it is dangerous.

Comment author: Punoxysm 20 August 2014 11:10:35PM 1 point [-]

Can you summarize what you mean or link to the excerpt?

And more precisely: imagine if Roentgen had tried to come up with safety protocols for nuclear energy. He would simply have been far too early to do so. Similarly, we are far too early in the development of AI to meaningfully make it safer, and MIRI's program as it exists doesn't convince me otherwise.

Comment author: Nornagest 21 August 2014 12:00:23AM *  4 points [-]

From the Wikipedia article on Roentgen:

It is not believed his carcinoma was a result of his work with ionizing radiation because of the brief time he spent on those investigations, and because he was one of the few pioneers in the field who used protective lead shields routinely.

Sounds like he was doing something right.

Comment author: Benito 21 August 2014 11:30:55AM 1 point [-]

My apologies for not being clear on two counts. Here is the relevant passage. And the analogy referred to in my previous comment was the one between Bostrom's story and AI.