betterthanwell comments on A Nightmare for Eliezer - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Hoax. There are no "AIs trying to be Friendly" with clueless creators. FAI is hard, and value is fragile: http://lesswrong.com/lw/y3/value_is_fragile/.
Added: To arrive in an epistemic state where you are uncertain about your own utility function, yet have some idea of which queries you need to perform against reality to resolve that uncertainty, and moreover believe that these queries involve talking to Eliezer Yudkowsky, requires a quite specific and extraordinary initial state - one that meddling dabblers would be rather hard-pressed to accidentally infuse into their poorly designed AI.
So, would you hang up on BRAGI?
For what purpose (or circumstance) did you devise such a test?
Would you hang up if "BRAGI" passed your one-sentence test?
I assume that you must have devised the test before you arrived at this insight?
No. I'm not dumb, but I'm not stupid either.