
gedymin comments on Superintelligence 15: Oracles, genies and sovereigns - Less Wrong Discussion

6 Post author: KatjaGrace 23 December 2014 02:01AM


Comment author: gedymin 23 December 2014 08:30:02PM 1 point

It's ok, as long as the talking is done in a sufficiently rigorous manner. By analogy, many discoveries in theoretical physics were made long before they could be experimentally supported. Theoretical CS also has a good track record here: for example, the first notable quantum algorithms were discovered long before the first notable quantum computers were built. Furthermore, the theory of computability mostly talks about the uncomputable (computations that cannot be realized and devices that cannot be built in this universe), so it has next to no direct practical applications. It just so happened that many of the ideas and methods developed for computability theory also turned out to be useful for its younger sister, the theory of algorithmic complexity, which has enormous practical importance.
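The claim that computability theory reasons rigorously about things that cannot be built can be illustrated with the classic diagonalization construction behind the halting problem. The sketch below is a toy illustration only: `naive_halts`, `make_adversary`, and `adversary` are hypothetical names, and the decider shown is deliberately wrong, since no correct one can exist.

```python
# Diagonalization made concrete: given ANY candidate halting
# decider, we can build a program that it misclassifies.

def naive_halts(f, x):
    # A (necessarily wrong) candidate decider: claims every
    # program halts on every input.
    return True

def make_adversary(halts):
    # Build a program that does the opposite of what the
    # decider predicts about the program itself.
    def adversary():
        if halts(adversary, None):
            while True:   # predicted to halt -> loop forever
                pass
        return "halted"   # predicted to loop -> halt
    return adversary

adv = make_adversary(naive_halts)
# naive_halts predicts adv halts, but actually running adv()
# would loop forever -- so this decider is wrong on adv. The
# same construction defeats every possible decider.
print(naive_halts(adv, None))  # True
```

The same two lines of reasoning work against any replacement for `naive_halts`, which is the whole point: the argument is rigorous even though no halting decider can ever be realized.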

In short, I feel that the quality of an academic inquiry is more dependent on its methods and results than on its topic.

Comment author: William_S 24 December 2014 03:37:05AM 2 points

To have a rigorous discussion, one thing we need is a clear model of the thing we are talking about (i.e., for computability, we can talk about Turing machines, or specific models of quantum computers). The level of discussion in Superintelligence isn't yet at the point where the mental models are fully specified, which might be where the disagreement in this discussion is coming from. I think for my mental model I'm using something like the classic tree-search-based chess-playing AI, but with a bunch of unspecified optimizations that let it do useful search in a large space of possible actions (and the ability to reason about and modify its own source code). But it's hard to be sure that I'm not sneaking some anthropomorphism into my model, which in this case is likely to quickly lead one astray.
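The tree-search mental model can be made concrete. A minimal sketch of minimax with alpha-beta pruning, which is the core of classic chess engines; here a "state" is just a number (a leaf evaluation) or a list of child states, standing in for real move generation and a real evaluation function:

```python
# Minimax with alpha-beta pruning over a toy game tree.
# Real chess engines add move generation, a handcrafted or
# learned evaluation function, and many search heuristics;
# the search skeleton is the same.

def alphabeta(state, maximizing=True,
              alpha=float("-inf"), beta=float("inf")):
    if isinstance(state, (int, float)):   # leaf: static evaluation
        return state
    if maximizing:
        value = float("-inf")
        for child in state:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:   # prune: opponent avoids this branch
                break
        return value
    else:
        value = float("inf")
        for child in state:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Depth-2 toy tree: the maximizer picks the branch whose
# minimizing opponent leaves the best outcome.
tree = [[3, 5], [2, 9], [0, 1]]
print(alphabeta(tree))  # 3
```

The "unspecified optimizations" in the comment above are exactly what would replace the brute enumeration of `child in state` here: move ordering, transposition tables, and so on, which prune the search space without changing what the search computes.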