Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: ImmortalRationalist 19 August 2017 02:11:10AM 0 points [-]

On a related question, if Unfriendly Artificial Intelligence is developed, how "unfriendly" is it expected to be? The most plausible-sounding outcome may be human extinction. The worst-case scenario would be if the UAI actively tortures humanity, but I can't think of many scenarios in which that would occur.

Comment author: ImmortalRationalist 30 July 2017 10:13:09AM 0 points [-]

Eliezer Yudkowsky wrote this article a while ago, which basically states that all knowledge boils down to two premises: that "induction works" has a sufficiently large prior probability, and that there exists some single large ordinal that is well-ordered.

Comment author: Turgurth 22 July 2017 05:09:51PM 1 point [-]

I saw this same query in the last open thread. I suspect you aren't getting any responses because the answer is long and involved. I don't have time to give you the answer in full either, so I'll give you the quick version:

I am in the process of signing up with Alcor, because after ten years of both observing cryonics organizations myself and reading what other people say about them, Alcor has given a series of cues that they are the more professional cryonics organization.

So, the standard advice is: if you are young and healthy with a long life expectancy, and are not wealthy, choose C.I., because it is less expensive. If those criteria do not apply to you, choose Alcor, as it appears to be the more serious, professional organization.

In other words: choose C.I. as the type of death insurance you want to have, but probably won't use, or choose Alcor as the type of death insurance you probably will use.

Comment author: ImmortalRationalist 24 July 2017 02:10:58PM 1 point [-]

If you are young, healthy, and have a long life expectancy, why should you choose CI? In the event that you die young, would it not be better to go with the one that will give you the best chance of revival?

Comment author: AlexMennen 20 July 2017 12:14:20AM 0 points [-]

Can anyone point me to any good arguments for, or at least redeeming qualities of, Integrated Information Theory?

Comment author: ImmortalRationalist 21 July 2017 03:34:36AM 0 points [-]

Not sure how relevant this is to your question, but Eliezer wrote this article on why philosophical zombies probably don't exist.

Comment author: drethelin 20 July 2017 05:13:12PM 1 point [-]

You can justify a belief in "Induction works" by induction over your own life.

Comment author: ImmortalRationalist 21 July 2017 03:32:30AM 0 points [-]

Explain. Are you saying that since induction appears to work in your everyday life, this is Bayesian evidence that the statement "Induction works" is true? This has a few problems. The first problem is that if you make the prior probability sufficiently small, it cancels out any evidence you have for the statement being true. To show that "Induction works" has at least a 50% chance of being true, you would need to either show that the prior probability is sufficiently large, or come up with a new method of calculating probabilities that does not depend on priors. The second problem is that you also need to justify that your memories are reliable. This could be done using induction together with a sufficiently large prior probability that memory works, but this has the same problems mentioned previously.
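The first problem can be made concrete with a quick numerical sketch (the odds here are purely illustrative assumptions, not values anyone in the thread proposed): for any fixed likelihood ratio supplied by the evidence, a small enough prior keeps the posterior arbitrarily far below 50%.

```python
def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Suppose (hypothetically) a lifetime of induction working gives
# 1000:1 evidence in favor of "Induction works".
lr = 1000.0
print(posterior(0.01, lr))   # modest prior: posterior ~ 0.91
print(posterior(1e-9, lr))   # tiny prior: posterior ~ 1e-6, still near zero
```

The same fixed evidence pushes a 1% prior above 90%, yet leaves a 10^-9 prior essentially at zero, which is exactly why the argument turns on how the prior is set rather than on the evidence itself.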

Comment author: ImmortalRationalist 20 July 2017 10:42:38AM 3 points [-]

For those in this thread signed up for cryonics, are you signed up with Alcor or the Cryonics Institute? And why did you choose that organization and not the other?

Comment author: ImmortalRationalist 20 July 2017 10:39:25AM 1 point [-]

Eliezer Yudkowsky wrote this article about the two things that rationalists need faith to believe in: that the statement "Induction works" has a sufficiently large prior probability, and that some single large ordinal that is well-ordered exists. Are there any ways yet to justify belief in either of these two things that do not require faith?

Comment author: ImmortalRationalist 14 July 2017 08:30:36PM 1 point [-]

Eliezer wrote this article a few years ago, about the two things that rationalists need faith to believe in. Has any progress been made in finding justifications for either of these things that do not require faith?

Comment author: b4yes 12 July 2017 09:02:19PM 0 points [-]

Can't provide a number, because some of us don't know. And there is a lot of noise in IQ tests at the higher values. Some of us have math backgrounds, with degrees and other accomplishments in that area. We guess we are around the LW average.

Comment author: ImmortalRationalist 14 July 2017 08:27:09PM 0 points [-]

We guess we are around the LW average.

What would you estimate to be the LW average?

Comment author: ImmortalRationalist 14 July 2017 07:57:24PM 0 points [-]

Although a sufficiently advanced artificial superintelligence could probably prevent something like the scenario discussed in this article from occurring.
