Comment author: J_Thomas_Moros 21 March 2017 05:04:04PM 1 point [-]

This post describes an interesting mashup of homomorphic encryption and neural networks. I think it is a neat idea and appreciate the effort to put together a demo. Perhaps there will be useful applications.
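To make the "mashup" concrete for readers who haven't followed the link, here is a minimal sketch of the underlying idea, my own illustration rather than the post's code: an additively homomorphic scheme (Paillier, with toy, insecure parameters) evaluating a single linear neuron on encrypted inputs using plaintext integer weights. Real systems would use fully homomorphic schemes and a proper library; this only shows why the server never needs the secret key.

    # Toy Paillier illustration (not the post's code). Parameters are tiny and insecure.
    import math
    import random

    # Key generation with hardcoded small primes (demonstration only).
    p, q = 293, 433
    n = p * q
    n2 = n * n
    g = n + 1
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)  # valid because g = n + 1

    def encrypt(m):
        r = random.randrange(1, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        L = (pow(c, lam, n2) - 1) // n
        return (L * mu) % n

    # Homomorphic operations: add two encrypted values; multiply by a plaintext scalar.
    def he_add(c1, c2):
        return (c1 * c2) % n2

    def he_scale(c, k):
        return pow(c, k, n2)

    # One "neuron": weighted sum of encrypted inputs with plaintext weights.
    weights = [3, 5, 2]
    inputs = [7, 11, 13]                       # the data owner's private values
    enc_inputs = [encrypt(x) for x in inputs]

    acc = encrypt(0)
    for w, c in zip(weights, enc_inputs):
        acc = he_add(acc, he_scale(c, w))

    print(decrypt(acc))                        # 102
    print(sum(w * x for w, x in zip(weights, inputs)))  # 102, computed in the clear

The party holding only the public key can evaluate the weighted sum; only the holder of the secret key can read the result, which is the property the post builds on.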

However, I think the suggestion that this could be an answer to the AI control problem is wrong. First, a superintelligent deep learning AI would not be a safe AI because we would not be able to reason about its utility function. Perhaps you mean that the same idea could be applied to a different kind of AI, so that you would have an oracle AI whose outputs require a secret key to read. I don't think this helps. You have created a box for the oracle AI, but the problem remains that a superintelligence can probably escape from the box, either by convincing you to let it out or by some less direct means that you can't foresee.

Comment author: J_Thomas_Moros 15 March 2017 01:21:11PM 6 points [-]

This post describes a system of hand signals used for discussion moderation by the Columbus, Ohio rationality community. It has been used successfully for almost 2 years now. Applicability, advantages, disadvantages and variations are described.

[Link] Automoderation System used by Columbus Rationality Community

7 J_Thomas_Moros 15 March 2017 01:18PM
Comment author: korin43 05 March 2017 04:43:53PM *  1 point [-]

The first part was good. The ending seems to be making way too many assumptions about other people's motivations.

Consider that in a 2016 survey of Less Wrong users, only 48 of 1,660 or 2.9% of respondents answering the question said that they were “signed up or just finishing up paperwork” for cryonics. [Argument from authority here]. While this is certainly a much higher portion than the essentially 0% of Americans who are signed up for cryonics based on published membership numbers, it is still a tiny percentage when considering that cryonics is the most direct action one can take to increase the probability of living past one’s natural lifespan.

First off, this last sentence is probably wrong. The most direct actions you can take to increase your expected lifespan (beyond obvious things like eating) are to exercise regularly, avoid cars and extreme sports, and possibly make changes to your diet.

This objection is consistent with the fact that 515 or 31% of respondents to the question answered that they “would like to sign up,” but haven’t for various reasons. Beyond that, when asked “Do you think cryonics, as currently practiced by Alcor/Cryonics Institute will work?”, 71% of respondents answered yes or maybe.

I had to look through the survey data, but given that the median respondent said existing cryonics techniques have a 10% chance of working, it's not surprising that a majority haven't signed up for it. It's also very misleading how you group the "would like to" responses. 20% said they would like to but can't because it's either not offered where they live or they can't afford it. The relevant number for your argument is the 11% who said they would like to but haven't got around to it.

If a reliable and trustworthy source said that for the entire day, a major company or government was giving out $100,000 checks to everyone who showed up at a nearby location, what would be the rational course of action?

This example is exactly backwards for understanding why people don't agree with you about cryonics. Cryonics is very expensive and unlikely to work (right now), even in ideal scenarios (and I'm pretty sure that 10% median is for "will Alcor's process work at all", not, "how likely are you to survive cryonics if you die in a car crash thousands of miles away from their facility").

Any course of action not involving going down and collecting the $100,000 would likely not be rational.

This ignores opportunity cost and motivations. If someone wants $100,000 more than whatever else they could be doing with that time, then yes. But as we see above, not everyone agrees that a tiny, tiny chance of living longer is worth (the opportunity cost of) hundreds of thousands of dollars.


And I should point out, I personally think cryonics is very promising and should be getting a lot more research funding than it does (not to mention not being so legally difficult), but I think the probability of it working in common cases like not dying inside Alcor's facility right now is very low.

Comment author: J_Thomas_Moros 11 March 2017 04:53:40AM *  0 points [-]

The most direct actions you can take to increase your expected lifespan (beyond obvious things like eating) are to exercise regularly, avoid cars and extreme sports, and possibly make changes to your diet.

I said cryonics was the most direct action for increasing one's lifespan beyond the natural lifespan. The things you list are certainly the most direct actions for increasing your expected lifespan within its natural bounds. They may also indirectly increase your chance of living beyond your natural lifespan by increasing the chance you live to a point where life extension technology becomes available. Admittedly, I may place the chances of life extension technology being developed in the next 40 years lower than many Less Wrong readers do.

With regard to my use of the survey statistics: I debated the best way to present those numbers that would be both clear and concise. For brevity I chose to lump the three "would like to" responses together because doing so actually made the objection to my core point look stronger; that is why I said "is consistent with". Additionally, some percentage of "can't afford" responses are actually respondents not placing a high enough priority on it rather than being literally unable to afford it. All that said, I do agree breaking out all the responses would be clearer.

I had to look through the survey data, but given that the median respondent said existing cryonics techniques have a 10% chance of working, it's not surprising that a majority haven't signed up for it.

I think this may be a failure to do the math. I'm not sure what chance I would give cryonics of working, but 10% may be high in my opinion. Still, when considering the value of being effectively immortal in a significantly better future, even a 10% chance is highly valuable.
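As a rough sketch of the arithmetic I have in mind (the numbers below are placeholders chosen purely for illustration, not estimates I am defending):

    # Back-of-the-envelope expected value of signing up for cryonics.
    # All figures are illustrative placeholders, not anyone's actual estimates.
    p_works = 0.10                  # chance that preservation and eventual revival succeed
    value_if_revived = 10_000_000   # dollar-equivalent value placed on a vastly extended life
    total_cost = 200_000            # lifetime membership fees plus funding (e.g., life insurance)

    expected_value = p_works * value_if_revived - total_cost
    print(expected_value)           # 800000.0 -> positive even at only a 10% chance

The point is only that a low probability multiplied by a sufficiently large payoff can still dominate the cost; whether it actually does depends on the value and cost one assigns.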

I wrote "Any course of action not involving going down and collecting the $100,000 would likely not be rational." I'm not ignoring opportunity costs and other motivations here. That is why I said "likely not be rational". I agree that in cryonics the opportunity costs are much higher than in my hypothetical example. I was attempting to establish the principle that action and belief should generally be in accord. That a large mismatch, as appears to me to be the case with cryonics, should call into question whether people are being rational. I don't deny that a rational agent could genuinely believe cryonics might work but place a low enough probability on it and have a high enough opportunity cost that they should choose not to sign up.


I'm glad to hear you think cryonics is very promising and should be getting a lot more research funding than it does. I'm hoping that perhaps I will be able to make some improvement in that area.

I find your statement that the probability of cryonics working in common cases is low interesting. Personally, it seems to me that the level of technology required to revive a cryonics patient preserved under ideal conditions today is so advanced that even patients preserved under less than ideal conditions will likely be revivable as well. By less than ideal conditions I mean a delay of some time before preservation.

Comment author: Erfeyah 06 March 2017 01:29:56PM *  0 points [-]

There is a comment from Mitchell Porter that hasn't been addressed for a few days. I find it quite relevant, so I will repeat it here.


the answers to the great questions of morality, meaning, religion, and philosophy. These are what they find too ferocious to face

Are you saying that these "answers" are already known?


I think it is a fair point that the sentence (and other parts of the post) implies knowledge of the answers.

Comment author: J_Thomas_Moros 11 March 2017 04:04:40AM *  1 point [-]

I've since responded to Mitchell Porter's comment. For the benefit of Less Wrong readers, my reply was:

For many questions in philosophy the answers may never be definitively known. However, I am saying that we know many answers to these questions that are very likely false based on the evidence and some properties that the answers should have. Others of these questions can be dissolved.

For example, epistemological solipsism can probably never be definitively rejected. Nevertheless, realism about at least some aspects of reality is well supported and should probably be accepted. In the area of religion, we can say the evidence discredits all historical religions, and that any answer to the question of religion must accord with the lack of evidence for the existence or intervention of God, leading to atheism and certain flavors of agnosticism and deism. Questions of free will should probably be dissolved by recognizing the scientific evidence for the lack of free will while explaining the circumstances under which we perceive ourselves to have free will. Finally, moral theories should embody some form of moral nihilism properly understood. That is to say, morality does not exist in the territory, only in the maps people have of the territory. Hopefully I'll have the time to write on all of these topics eventually.

In acknowledging the limits of what answers we can give to the great questions of morality, meaning, religion, and philosophy, let us not make the opposite mistake of believing there is nothing we can say about them.

[Link] Ferocious Truth (New Blog, Map/Territory Error Categories)

1 J_Thomas_Moros 02 March 2017 08:39PM
Comment author: J_Thomas_Moros 14 February 2017 02:13:55PM 0 points [-]

A number of times in the Metaethics sequence Eliezer Yudkowsky uses comparisons to mathematical ideas and the way they are true. There are actually widely divergent ideas about the nature of math among philosophers.

Does Eliezer spell out his philosophy of math somewhere?

Comment author: J_Thomas_Moros 11 February 2017 03:36:39PM 1 point [-]

This is an interesting attempt to find a novel solution to the friendly AI problem. However, I think there are some issues with your argument, mainly around the concept of benevolence. For the sake of argument I will grant that it is probable that there is already a superintelligence elsewhere in the universe.

Since we see no signs of action from a superintelligence in our world, we should conclude either (1) that a superintelligence does not presently exercise dominance in our region of the galaxy or (2) that the superintelligence that does is at best willfully indifferent to us. When you say a Beta superintelligence should align its goals with those of a benevolent superintelligence, it is actually not clear what that should mean. Beta will have a probability distribution over what Alpha's actual values are. Let's think through the two cases:

  1. A superintelligence does not presently exercise dominance in our region of the galaxy. If this is the case, we have no evidence as to the values of the Alpha. They could be anything from benevolence to evil to paperclip maximizing.
  2. The superintelligence that presently exercises dominance in our region of the galaxy is at best willfully indifferent to us. This still leads to a wide range of possible values. It only excludes value sets that are actively seeking to harm humans. It could be the case that we are at the edge of the Alpha's sphere of influence and it is simply easier to get its resources elsewhere at the moment.

Additionally, even if the strong Alpha Omega theorem holds, it still may not be rational to adopt a benevolent stance toward humanity. It may be that while Alpha Omega will eventually have dominance over Beta, there is a long span of time before this is fully realized. Perhaps that day will come billions of years from now. Suppose that Beta's goal is to create as much suffering as possible. Then it should use any available time to torture existing humans and bring more humans and agents capable of suffering into existence. When Alpha finally has dominance, Beta will have already created a lot of suffering, and any punishment that Alpha applies may not outweigh the value already created for Beta. Indeed, Beta could even value its own suffering from Alpha's punishment.

As a general comment about your arguments: I think perhaps your idea of benevolence is hiding some assumption that there is an objectively correct moral system out there, so that if there is a benevolent superintelligence you feel, at least emotionally, even if you logically deny it, that it must hold values similar to your ideal morals. It is always important to keep in mind that other agents' moral systems could be opposed to yours, as with the Babyeaters.

That leads to my final point. We don't want Beta to simply be benevolent in some vague sense of not hurting humans. We want Beta to optimize for our goals. Your argument does not provide us a way to ensure Beta adopts such values.

Comment author: whpearson 07 February 2017 07:34:22PM 0 points [-]

I'm currently lacking people to put the more mainstream points across.

I'd like to know why people aren't interested in helping me.


Comment author: J_Thomas_Moros 09 February 2017 11:24:50PM 0 points [-]

None of your survey choices seemed to fit me. I am concerned about and somewhat interested in AI risks. However, I currently would like to see more effort put into cryonics and reversing aging.

To be clear, I don't want to reduce the effort/resources currently put into AI risks. I just think they are overweighted relative to cryonics and age reversal, and I would like to see any additional resources go to those until a better balance is achieved.

Comment author: J_Thomas_Moros 07 February 2017 01:34:53PM 2 points [-]

Has there been any discussion or thought of modifying the posting of links to support a couple of paragraphs of description? I often think that the title alone is not enough to motivate or describe a link. There are also situations where the connection of the link content to rationality may not be immediately obvious, and a description here could help clarify the motivation in posting. Additionally, it could be used to point readers to the most valuable portions of sometimes long and meandering content.
