Aiyen comments on Rationality Quotes October 2014 - Less Wrong Discussion

Post author: Tyrrell_McAllister 01 October 2014 11:02PM

Comment author: Aiyen 05 October 2014 09:02:02PM 3 points

"If we take everything into account — not only what the ancients knew, but all of what we know today that they didn't know — then I think that we must frankly admit that we do not know. But, in admitting this, we have probably found the open channel."

Richard Feynman, "The Value of Science," public address at the National Academy of Sciences (Autumn 1955); published in What Do You Care What Other People Think (1988); republished in The Pleasure of Finding Things Out: The Best Short Works of Richard P. Feynman (1999) edited by Jeffrey Robbins.

Comment author: RichardKennaway 06 October 2014 08:29:20AM 5 points

I found the "open channel" metaphor obscure from the quote alone, so I looked up some context. The open channel is a contrast to the blind alley of clinging to a single belief that may be wrong.

I noticed that later in the passage he says:

It is our responsibility to leave the men of the future with a free hand. In the impetuous youth of humanity, we can make grave errors that can stunt our growth for a long time. This we will do if we, so young and ignorant, say we have the answers now, if we suppress all discussion, all criticism, saying, 'This is it, boys! Man is saved!' Thus we can doom man for a long time to the chains of authority, confined to the limits of our present imagination. It has been done so many times before.

This doesn't sit well with dreams of making a superintelligent FAI that will be the last invention we ever need make, after which we will have attained the perfect life for everyone always.

Comment author: Vaniver 06 October 2014 05:57:59PM 4 points

This doesn't sit well with dreams of making a superintelligent FAI that will be the last invention we ever need make, after which we will have attained the perfect life for everyone always.

Indeed, but it does agree with the argument for the importance of not getting AI wrong in a way that would chain the future.

Comment author: Aiyen 06 October 2014 08:40:27PM 1 point

It sits well with FAI, but poorly with assuming that an FAI will instantly or automatically make everything perfect. The warning is against assuming that a particular theory must be true, or that a particular action must be optimal. Presumably that is good advice for the AI as well, at least while it is "growing up" (recursively self-improving).