Another month, another rationality quotes thread. The rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
I found the "open channel" metaphor obscure from just the quote, and found some context. The open channel is a contrast to the blind alley of seizing to a single belief that may be wrong.
I noticed that later in the passage, he says:
> This doesn't sit well with dreams of making a superintelligent FAI that will be the last invention we ever need make, after which we will have attained the perfect life for everyone always.
It sits well with FAI, but poorly with assuming that FAI will instantly or automatically make everything perfect. The warning is against assuming that a particular theory must be true, or that a particular action must be optimal. Presumably that is good advice for the AI as well, at least while it is "growing up" (recursively self-improving).