Every month, Less Wrong has a thread where we post Deep Wisdom from the Masters. Since nobody had done this yet for December, I figured I could do it myself.
* Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
* "Do not quote yourself." --Tiiba
* Do not quote comments/posts on LW/OB. That's like shooting fish in a barrel. :)
* No more than 5 quotes per person per monthly thread, please.
It isn't racist; it's realistic. If an entity thinks with something that we don't even call a brain, we shouldn't trust it, because we have no way of knowing its motivations.
Clippy is a perfect example. How can I trust that it is a paperclip maximizer, rather than an entity that merely claims to be one? (Over 50% of LessWrong members, I estimate, do not.) If Clippy were human, I could easily assess whether it is telling the truth (in this particular instance, the answer would probably be "no," because most humans I know do not make very good paperclip maximizers). Since Clippy is not human, I have no way to judge which points in mindspace make its actions most likely.
Clippy
Or a suggestion to generalize the concept of a "brain" for non-biological intelligences, such as paperclip maximizers.