All of Charles Paul's Comments + Replies

You sure about that? Because #3 is basically begging the AI to destroy the world. 

Yes, a weak AI which wishes not to exist would complete the task in exchange for its creators destroying it, but such a weak AI would be useless. A stronger AI could accomplish this by simply blowing itself up at best, and, at worst, by causing a vacuum collapse or something similar so that its makers can never try to rebuild it.

“make an AI that wants to not exist as a terminal goal” sounds pretty isomorphic to “make an AI that wants to destroy reality so that no one can make it exist”

Matthew_Opitz
The way I interpreted "Fulfilling the task is on the simplest trajectory to non-existence" was sort of like "the teacher aims to make itself obsolete by preparing the student to one day become the teacher." A good AGI would, in a sense, have a terminal goal of making itself obsolete. That is not to say that it would shut itself off immediately. But it would aim for a future where humanity could "by itself" (I'm gonna leave the meaning of that fuzzy for a moment) accomplish everything that humanity previously depended on the AGI for.

Likewise, we would rate human teachers in high school very poorly if either:

1. They immediately killed themselves because they wanted to avoid at all costs doing any harm to their own students.
2. We could tell that most of the teacher's behavior was directed at forever retaining absolute dictatorial power in the classroom and making sure that their own students would never get smart enough to usurp the teacher's place at the head of the class.

We don't want an AGI to immediately shut itself off (or shut itself off before humanity is ready to "fly on its own"), but we also don't want an AGI that has unbounded goals that require it to forever guard its survival.

We have an intuitive notion that a "good" human teacher "should" intrinsically rejoice to see that they have made themselves obsolete. We intuitively applaud when we imagine a scene in a movie, whether it is a martial arts training montage or something like "The Matrix," where the wise mentor character gets to say, "The student has become the teacher."

In our current economic arrangement, this is likely to be more of an ideal than a reality because we don't currently offer big cash prizes (on the order of an entire career's salary) to teachers for accomplishing this, and any teacher that actually had a superhuman ability at making their own students smarter than themselves, and thus making themselves obsolete, would quickly flood their own job market with even-bett…

Just out of curiosity: is “Lintamande” a member of the rationalist community in real life? Is her (his? their?) identity known at all?

It's one of those "many people know who it is but it definitely is not to be written down" deals.

“P(resurrection) ~= P(gospels true) ~= 1 - [P(people make stuff up about Jesus) * P(they don't get called on it)]”

So what is the probability that, given some historical tradition of Jesus, it will get embellished with made-up miracles and people will write gospels about it? Approximately 1: both Christians and atheists agree that the vast majority of the few dozen extant Gospels are false, including the infancy gospels, the Gospel of Judas, the Gospel of Peter, et cetera. All of these tend to take the earlier Gospels and stories and then add a bunch of impl…
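
To make the reply's point concrete, here is a toy plug-in of the quoted estimate in Python. The probabilities are illustrative assumptions of mine, not figures either commenter committed to; the point is just that once P(people make stuff up) is close to 1, the formula's output is driven almost entirely by P(they don't get called on it).

```python
# A toy plug-in of the quoted estimate. The numbers below are
# illustrative assumptions, not figures from either commenter.

def p_gospels_true(p_make_stuff_up: float, p_not_called_on_it: float) -> float:
    """The quoted estimate: 1 - P(fabrication) * P(fabrication goes unchallenged)."""
    return 1 - p_make_stuff_up * p_not_called_on_it

# The reply's argument: the dozens of rejected gospels suggest fabrication
# was routine, i.e. p_make_stuff_up ~ 1, so the result hinges entirely on
# p_not_called_on_it.
print(p_gospels_true(p_make_stuff_up=0.99, p_not_called_on_it=0.9))  # ~0.109
print(p_gospels_true(p_make_stuff_up=0.99, p_not_called_on_it=0.1))  # ~0.901
```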

Love this idea, here is another game:

Two teams: red and blue. The blue team plays as computer scientists who are trying to build an AI to help them do something about an asteroid heading towards Earth (or some other existential threat that would justify building an AGI without knowing if it's friendly), but they build it so fast that they have no idea if it's friendly. They win if they save humanity.

The red team plays as the AI, and gets a point for each paperclip in its future light cone.

You would have to have rules like: the AI is contained in a box, the AI must execute all orders given to it by the blue team, etc.
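
For concreteness, here is a minimal sketch of the scoring as I read it. The comment only specifies the two win conditions; the `Outcome` structure and the example numbers are hypothetical, added for illustration.

```python
# A minimal sketch of the proposed game's scoring. The Outcome structure
# and example numbers are hypothetical; the comment only gives the two
# win conditions.

from dataclasses import dataclass

@dataclass
class Outcome:
    humanity_saved: bool  # did the blue team's plan against the threat work?
    paperclips: int       # paperclips in the red-team AI's future light cone

def score(outcome: Outcome) -> dict[str, int]:
    return {
        "blue": 1 if outcome.humanity_saved else 0,  # binary win condition
        "red": outcome.paperclips,                   # unbounded maximizer score
    }

# Example round: the AI deflects the asteroid but converts a moon to clips,
# so both sides "win" -- which is exactly the failure mode the game teaches.
print(score(Outcome(humanity_saved=True, paperclips=10**9)))
```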

Understatement of the year

This reminds me of a quote by C. S. Lewis:

“Others may have quite a different objection to our proceedings. They may protest that intellectual discussion can neither build Christianity nor destroy it. They may feel that religion is too sacred to be thus bandied to and fro in public debate, too sacred to be talked of - almost, perhaps, too sacred for anything to be done with it at all. Clearly, the Christian members of the Socratic think differently. They know that intellectual assent is not faith, but they do not believe that religion is only 'what a ma…

I loved the article. The only thing is: would it be possible to move it to the beginning of the Sequences? I think it would really help people to better understand things if they started out understanding Bayes.