I of course agree that the thought of being able to read the AI's mind is ridiculous.
It's not transparently obvious to me why this would be "ridiculous"; care to enlighten me? Building an AI at all seems ridiculous to many people, but only because they've never actually thought about the issue. It seems far more ridiculous to me that we shouldn't even try to read the AI's mind when there's so much at stake.
AIs aren't gods; with time, care, and plenty of preparation, reading their thoughts should be doable. If you disagree with that statement, please explain why. Rushing things here seems like the worst possible idea, and I really think it would be worth the resource investment.
AIs aren't gods; with time, care, and plenty of preparation, reading their thoughts should be doable.
Humans reading computer code aren't gods either. How long would it take to catch an uFAI if it did stuff like this?