Has Eliezer explained somewhere (hopefully on a web page) why he doesn't want to post a transcript of a successful AI-box experiment?
Have the successes relied on a meta-approach, such as saying, "If you let me out of the box in this experiment, it will make people take the dangers of AI more seriously and possibly save all of humanity; whereas if you don't, you may doom us all"?
I'm unclear whether you're saying that we perceive those in power to be corrupt, or that they actually are corrupt. The beginning focuses on the former; the second half, on the latter.
The idea that we have evolved to perceive those in power over us as corrupt faces an objection: the claim "power corrupts" is usually a conclusion drawn from all of recorded history, not merely from observing the present.