Another month, another rationality quotes thread. The rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
In Blindsight, a "vampire" is a predatory, sociopathic genius recreated through genetic engineering. Vampires have the same brain mass as baseline humans but use it differently: the capacity we spend on self-awareness is redirected into raw processing power. The mission leader in Blindsight is a vampire precisely because he is more intelligent and can make dispassionate decisions, but how do you check whether your vampire is right, or even still on your side? Like Quirrelmort, they are always playing at least one level above you.
The synthesist quote is the first time Blindsight raises the problem of what to do once you have built a smarter-than-human AI. The vampire quote approaches the same problem from a different angle: a smarter-than-human intelligence built from biology rather than silicon. Vampires present a trade-off: they cannot rewrite their own source code, so they cannot undergo a hard takeoff, but you also know from the outset that they are anything but Friendly.
(If you know what is wrong with the above, please ROT13 your spoilers.)