Manfred comments on Thoughts on the Singularity Institute (SI) - Less Wrong

Post author: HoldenKarnofsky, 11 May 2012 04:31AM (256 points)


Comment author: Manfred 11 May 2012 09:29:07AM  1 point

Developing a new decision theory sounds like it requires more mathematical talent than the SI yet has available.

On what measure of difficulty are you basing this? We have some guys around here doing a pretty good job.

Comment author: hairyfigment 11 May 2012 06:25:41PM  1 point

I phrased that with too much certainty. While I have little if any reason to see a fully reflective decision theory as an easier task than a self-consistent theory of infinite sets, I also have no clear reason to think the contrary.

But I'm trying to identify the worst-case scenario we could plan for. I can think of two broad ways that Eliezer's current plan could be horribly misguided:

  1. if it works well enough to help someone produce an uFAI, but not well enough to stop that in time;
  2. if some part of it -- such as a fully reflective decision theory that humans can understand -- is mathematically impossible, and SI never realizes this.

Now SI does seem aware of both problems, technically speaking. The fact that Eliezer went out of his way to help critics understand Löb's Theorem, and that he keeps mentioning it, seems like a good sign. But should I believe that SI is doing enough to address #2? Why?
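
For readers who don't have the reference handy, here is a rough statement of Löb's Theorem, given informally for Peano Arithmetic with its standard provability predicate Prov:

\[ \text{If } \mathrm{PA} \vdash \mathrm{Prov}(\ulcorner P \urcorner) \rightarrow P, \text{ then } \mathrm{PA} \vdash P. \]

Informally: if the system can prove "if P is provable, then P" for some sentence P, it can already prove P outright. So a reasoner cannot adopt a blanket "whatever I prove is true" step, which is, roughly, the self-trust obstacle relevant to a fully reflective decision theory.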