
XiXiDu comments on Open Thread, March 16-31, 2012 - Less Wrong Discussion

2 Post author: OpenThreadGuy 16 March 2012 04:53AM



Comment author: XiXiDu 16 March 2012 11:01:47AM 1 point [-]

In Engelbart As UberTool? Robin Hanson talks about a guy who actually tried to apply recursive self-improvement to his company. He is still trying (wow!).

It seems humans, even groups of humans, are not capable of fast recursive self-improvement. That they didn't take over the world might be partly due to strong competition from other companies that are constantly trying to do the same.

What is it that is missing that doesn't allow one of them to prevail?

Robin Hanson further asks what would have been a reasonable probability estimate to assign to the possibility of a company taking over the world at that time.

I have no idea how I could possibly assign a number to that. I would just have said that it is unlikely enough to be ignored. Or that there is not enough data to make a reasonable guess either way. I don't have the resources to take every idea seriously and assign a probability estimate to it. Some things just get discounted by my intuitive judgment.

Comment author: Viliam_Bur 16 March 2012 04:48:59PM 1 point [-]

It seems humans, even groups of humans, are not capable of fast recursive self-improvement. What is it that is missing that doesn't allow one of them to prevail?

I would guess that the reason is that people don't work with exact numbers, only with approximations. If you make a very long chain of inferences, the noise kills the signal. In mathematics, if you know "A = B" and "B = C" and "C = D", you can conclude that "A = D". In real life your knowledge is more like "so far it seems to me that under usual conditions A is very similar to B". A hypothetical perfect Bayesian could perhaps assign some probability and work with it, but even our estimates of probabilities are noisy. Also, the world is complex, so things do not add to each other linearly.

I suspect that when one tries to generalize, one gets a lot of general rules with maybe 90% probabilities. Try to chain a dozen of them together, and the result is pathetic. It is like saying "give me a fixed point and a lever and I will move the world" only to realize that your lever is too floppy and you can't move anything that is too far away or too heavy.
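A quick back-of-the-envelope check of that point (assuming, for simplicity, that the dozen rules are independent, which real-world generalizations rarely are):

```python
# Chaining a dozen rules, each holding with 90% probability.
# Under the (simplifying) independence assumption, the chance that
# the whole chain holds is just the product of the individual odds.
p_single = 0.9
n_rules = 12
p_chain = p_single ** n_rules
print(round(p_chain, 2))  # roughly 0.28
```

So even with fairly reliable individual rules, the conclusion at the end of the chain holds barely a quarter of the time, which is why long chains of plausible-sounding steps are so fragile.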