
XiXiDu comments on OPERA Confirms: Neutrinos Travel Faster Than Light - Less Wrong Discussion

Post author: XiXiDu, 18 November 2011 09:58AM, 10 points


Comments (63)


Comment author: XiXiDu, 18 November 2011 03:09:13PM, 3 points

...recursive self-improvement doesn't in any obvious way require changing our understanding of the laws of physics.

Some people think that complexity issues are even more fundamental than the laws of physics. On what basis do people believe that recursive self-improvement would be uncontrollably fast? It is easy to believe simply because the concept is vague and none of those people have studied the relevant math. The same isn't true for FTL phenomena, because many people are aware of how unlikely that possibility is.

The same people who are very skeptical in the case of faster-than-light neutrinos just make up completely unfounded probability estimates about the risks associated with recursive self-improvement, because it is easy to do so when there is no evidence either way.

Comment author: JoshuaZ, 18 November 2011 03:18:22PM, 5 points

Some people think that complexity issues are even more fundamental than the laws of physics.

Sure. And I'm probably one of the people here who is most vocal about computational complexity issues limiting what recursive self-improvement can do. But even then, I don't see them as necessarily in the same category. Keep in mind that {L, P, NP, co-NP, PSPACE, EXP} all being distinct is a conjectural claim. We can't even prove that L != NP at this point. And in order for complexity to produce barriers to recursive self-improvement, one would likely need even stronger claims.
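To make the complexity point concrete, here is a minimal sketch (illustrative only; the clause encoding is my own choice, not from the comment): a brute-force SAT decider. SAT is the canonical NP-complete problem, and absent a proof that P = NP, no known general-purpose algorithm escapes worst-case exponential blowup like this one's.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Decide satisfiability of a CNF formula by trying all 2^n assignments.

    Exponential time in n_vars. clauses is a list of clauses; each clause
    is a list of ints, where +i means variable i is true and -i means
    variable i is false.
    """
    for assignment in product([False, True], repeat=n_vars):
        def lit(l):
            value = assignment[abs(l) - 1]
            return value if l > 0 else not value
        if all(any(lit(l) for l in clause) for clause in clauses):
            return True
    return False

# (x1 OR x2) AND (NOT x1 OR x2): satisfiable, e.g. with x2 = True
print(brute_force_sat([[1, 2], [-1, 2]], 2))  # True
# x1 AND NOT x1: unsatisfiable
print(brute_force_sat([[1], [-1]], 1))        # False
```

The point of the sketch is only that the conjectured separations are what stand between "search the whole assignment space" and something fundamentally faster.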

The same people who are very skeptical in the case of faster-than-light neutrinos just make up completely unfounded probability estimates about the risks associated with recursive self-improvement, because it is easy to do so when there is no evidence either way.

Well, but that's not an unreasonable position. If I don't have strong evidence either way on a question, I should move my estimates close to 50%. That's in contrast to the FTL issue, where we have about a hundred years' worth of evidence all going in one direction, and that evidence includes other observations involving neutrinos.
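The contrast can be sketched as a toy odds-form Bayesian update (all numbers are hypothetical, chosen only to illustrate the asymmetry): with no discriminating evidence the estimate stays at the prior, while a long run of one-sided evidence drives it far from 50%.

```python
def posterior(prior, likelihood_ratios):
    """Posterior probability via odds: posterior odds equal the prior
    odds multiplied by each observation's likelihood ratio
    P(observation | hypothesis) / P(observation | not hypothesis)."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# No evidence either way: every observation is equally likely under
# both hypotheses (ratio 1), so the estimate stays at the prior.
print(posterior(0.5, [1.0] * 100))  # 0.5

# A century of experiments, each (hypothetically) favoring relativity
# 2:1 over FTL, leaves almost no probability on FTL.
print(posterior(0.5, [0.5] * 100))
```

This is JoshuaZ's distinction in miniature: "no evidence either way" and "a hundred years of evidence in one direction" lead to very different places even from the same 50% prior.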

Comment author: Cyan, 20 November 2011 02:54:49AM, 3 points

SN 1987A shows that neutrinos travel at the speed of light almost all of the time, but it does not rule out that they might have velocities that exceed that of light very briefly at the moment they're generated. See here for more. Note that I, like the author of the post I've linked, do not believe that this finding will stand up. It's just that if it does stand up, it will be because the constant-velocity assumption is wrong.
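The SN 1987A constraint is easy to reproduce as back-of-the-envelope arithmetic (figures are approximate: the supernova was roughly 168,000 light-years away, and its neutrinos arrived within a few hours of the light):

```python
# Fractional velocity bound from SN 1987A timing.
distance_years = 168_000  # travel time at c, in years (approximate)
lead_hours = 3            # neutrino arrival lead over photons, in hours

travel_hours = distance_years * 365.25 * 24
fractional_excess = lead_hours / travel_hours
print(f"|v - c| / c  <  ~{fractional_excess:.1e}")  # about 2e-09

# OPERA's claimed excess (60 ns over 730 km) was about 2.5e-5,
# roughly four orders of magnitude larger than this bound.
print(2.5e-5 / fractional_excess)
```

This gap is exactly why the bound above only applies if neutrino velocity is constant over the whole trip, which is the assumption Cyan is flagging.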

Comment author: XiXiDu, 18 November 2011 03:35:10PM, 1 point

If I don't have strong evidence either way on a question I should move my estimates close to 50%...

That would be more than enough to justify devoting a big chunk of the world's resources to friendly AI research, given the associated utility. But you can't just make up completely unfounded conjectures, claim that we don't have evidence either way, and then argue that the utility associated with a negative outcome is so huge that we should take it seriously. That reasoning will ultimately make you privilege random high-utility outcomes over theories based on empirical evidence.
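The failure mode being described can be shown with a toy expected-utility comparison (every number here is made up purely for illustration):

```python
def expected_utility(p, u):
    """Expected utility of an outcome with probability p and utility u."""
    return p * u

# A made-up conjecture: probability pulled out of thin air, stakes set
# astronomically high (both values are hypothetical).
conjecture = expected_utility(1e-9, 1e18)

# A well-evidenced intervention with modest, measured stakes
# (again, hypothetical numbers).
evidence_based = expected_utility(0.9, 1e6)

# Naive expected-utility maximization privileges the unfounded conjecture.
print(conjecture > evidence_based)  # True
```

The arithmetic itself is trivial; the objection in the comment is about where the probability comes from, since an invented 1e-9 attached to astronomical stakes can outweigh any empirically grounded option.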

Comment author: shin_getter, 18 November 2011 03:49:38PM, 0 points

Throwing out a theory as powerful and successful as relativity would require very powerful evidence, and at this point the evidence doesn't fall that way at all.

On the other hand, the threshold for GAI becoming a very serious problem is very low. Simply dropping the price of peak human intelligence to the material and energy costs of a human (which breaks no physical laws, unless one holds that the mind is immaterial) would result in massive social displacement that would require serious planning beforehand. I don't think it is very likely that we'd see an AI that can laugh at EXPSPACE problems, but all an AI needs to be is too smart to be easily controlled in order to mess everything up.