Another month, another rationality quotes thread. The rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
Hmmm... a Bayesian optimization model concentrates its samples where the target function looks high, so it ends up knowing a lot about the high values and remaining relatively ignorant of the very low ones. So I shouldn't trust it?
Your optimizer, whether Bayesian or not, needs to be able to recognize a low point when it hits one, or else it can't optimize at all! If every point looked the same to it, it would have nothing to optimize toward. (It may learn more about high points than low ones, but it must still learn enough about low points to know to steer away from them.) A toy sketch of this asymmetry follows.
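
A minimal sketch of the point, not from either commenter: it assumes a made-up 1-D objective with one peak and one deep valley, a hand-rolled Gaussian-process surrogate, and an expected-improvement acquisition rule (all names here, `objective`, `gp_posterior`, `expected_improvement`, and the hyperparameters, are illustrative choices, not anyone's actual model). The loop samples mostly near the peak, so the surrogate stays vaguest about the valley, yet it still assigns the valley a low predicted value and so can rank it below the peak.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical 1-D objective: a high peak near x=2, a deep valley near x=7.
def objective(x):
    return np.exp(-(x - 2.0) ** 2) - 2.0 * np.exp(-(x - 7.0) ** 2 / 0.5)

def rbf_kernel(a, b, length=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-4, length=1.0):
    """Standard GP regression: posterior mean and std dev at query points."""
    K = rbf_kernel(x_train, x_train, length) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_query, length)
    K_inv = np.linalg.inv(K)
    mu = K_s.T @ K_inv @ y_train
    var = 1.0 - np.sum((K_s.T @ K_inv) * K_s.T, axis=1)  # RBF prior variance is 1
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best_y):
    """EI acquisition: favors points likely to beat the current best value."""
    z = (mu - best_y) / sigma
    return (mu - best_y) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 10.0, 500)
x_train = rng.uniform(0.0, 10.0, 3)      # tiny random initial design
y_train = objective(x_train)

for _ in range(20):                      # Bayesian-optimization loop
    mu, sigma = gp_posterior(x_train, y_train, grid)
    ei = expected_improvement(mu, sigma, y_train.max())
    x_next = grid[np.argmax(ei)]         # evaluate where EI is largest
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, objective(x_next))

# Evaluations cluster around the peak; the valley is rarely sampled, so the
# surrogate is most uncertain there -- but its predicted value there is still
# low, which is exactly what lets the optimizer avoid it.
near_peak = np.sum(np.abs(x_train - 2.0) < 1.0)
near_valley = np.sum(np.abs(x_train - 7.0) < 1.0)
print(f"samples near the peak: {near_peak}, near the valley: {near_valley}")
```

Running it (under these assumptions) shows most of the budget spent near the peak and only a handful of points anywhere near the valley, which is the sense in which the model "remains ignorant" of low values without being unable to recognize them.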