
Sebastian_Hagen comments on Superintelligence 5: Forms of Superintelligence - Less Wrong Discussion

Post author: KatjaGrace, 14 October 2014 01:00AM




Comment author: Sebastian_Hagen, 14 October 2014 08:55:49PM, 3 points

> […] Turing machine (very slowly), which we know/suspect can do everything computable. It would seem then that a quality superintelligence is just radically faster than a human at these problems.

I think that statement is misleading here. To solve a real-world problem on a TM, you still need to figure out an algorithm that solves your problem. If a Dark Lord showed up and handed me a UTM (let's say one ridiculously fast compared to any computer realizable under what looks like our physics) - and I then gave that UTM to a monkey - the monkey might have a fairly good idea of what it wants (unlimited bananas! unlimited high-desirability sex partners!), but it would have no idea of how to use the UTM to get it.
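The gap between raw computing power and having an algorithm can be sketched concretely. In the toy search below (purely illustrative; the alphabet, the goal, and the helper names are my own inventions, with Python's `eval` standing in for the UTM), unlimited compute makes the enumeration loop cheap, but a human still has to supply the goal checker - that is, formalize what counts as success:

```python
from itertools import product

def solves(program, goal_checker):
    """Run a candidate program on the 'UTM' (here, Python's eval)
    and ask the checker whether the result achieves the goal."""
    try:
        return goal_checker(eval(program))  # toy stand-in for executing a program
    except Exception:
        return False  # ill-formed candidate programs just fail

def brute_force_search(goal_checker, alphabet, max_len):
    """Enumerate candidate programs shortest-first. A fast UTM makes
    this loop affordable; it cannot write goal_checker for you."""
    for n in range(1, max_len + 1):
        for chars in product(alphabet, repeat=n):
            program = "".join(chars)
            if solves(program, goal_checker):
                return program
    return None

# An easy-to-formalize goal: find an expression that evaluates to 42.
found = brute_force_search(lambda x: x == 42, alphabet="42+", max_len=4)
print(found)  # prints 42
```

The hard step is the lambda: for "an expression equal to 42" it is one line, but the monkey's "unlimited bananas" has no such formalization, and that - not compute - is the bottleneck.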

If I tried to use that UTM myself, my chances would probably be better - I can think of some interesting and fairly safe uses for a powerful computer - but it still wouldn't easily let me change everything I'd want changed in this world, or even give me an easy way to come up with a really good strategy for doing so. In the end, my mental limits on devising algorithms for specific real-world problems would still be very relevant.