
Comment author: eternal_neophyte 22 March 2017 10:23:43AM *  0 points [-]

In combination with an AI whose social skills are fundamentally stunted in some way, this might actually work. If the AI cannot directly interface with the world in any meaningful way without the key, and it has no power to persuade a human actor to supply it with the key, it's pretty much trapped (unless there is some way for it to break its own encryption).

Edit: notwithstanding the possibility that some human being may be stupid enough to supply it with the key without even being asked.

Comment author: korin43 23 March 2017 08:20:28PM 0 points [-]

I think being encrypted may not actually help much with the control problem. The problem isn't that we expect an AI to fully understand what we want and then be evil; it's that we worry an AI will not be optimizing for what we want. Our not knowing what the outputs actually do doesn't seem like it would help at all (except that the AI would only have the inputs we want it to have).

Comment author: korin43 21 March 2017 03:18:12PM 0 points [-]

"In this blogpost, we're going to train a neural network that is fully encrypted during training (trained on unencrypted data). The result will be a neural network with two beneficial properties. First, the neural network's intelligence is protected from those who might want to steal it, allowing valuable AIs to be trained in insecure environments without risking theft of their intelligence. Secondly, the network can only make encrypted predictions (which presumably have no impact on the outside world because the outside world cannot understand the predictions without a secret key). This creates a valuable power imbalance between a user and a superintelligence. If the AI is homomorphically encrypted, then from it's perspective, the entire outside world is also homomorphically encrypted. A human controls the secret key and has the option to either unlock the AI itself (releasing it on the world) or just individual predictions the AI makes (seems safer)."

[Link] Building Safe A.I. - A Tutorial for Encrypted Deep Learning

2 korin43 21 March 2017 03:17PM
In response to LessWrong Discord
Comment author: korin43 13 March 2017 01:33:14PM 1 point [-]

Are you aware of the LessWrong Slack? Why Discord over that?

Comment author: J_Thomas_Moros 11 March 2017 04:53:40AM *  0 points [-]

The most direct actions you can take to increase your expected lifespan (beyond obvious things like eating) are to exercise regularly, avoid cars and extreme sports, and possibly make changes to your diet.

I said cryonics was the most direct action for increasing one's lifespan beyond the natural lifespan. The things you list are certainly the most direct actions for increasing your expected lifespan within its natural bounds. They may also indirectly increase your chance of living beyond your natural lifespan, by increasing the chance you live to a point where life extension technology becomes available. Admittedly, I may place the chances of life extension technology being developed in the next 40 years lower than many Less Wrong readers do.

With regard to my use of the survey statistics: I debated the best way to present those numbers that would be both clear and concise. For brevity I chose to lump the three "would like to" responses together, because doing so actually made the objection to my core point look stronger. That is why I said "is consistent with". Additionally, some percentage of "can't afford" responses are really respondents not placing a high enough priority on it, rather than being literally unable to afford it. All that said, I do agree breaking out all the responses would be clearer.

I had to look through the survey data, but given that the median respondent said existing cryonics techniques have a 10% chance of working, it's not surprising that a majority haven't signed up for it.

I think this may be a failure to do the math. I'm not sure what chance I would give cryonics of working; 10% may even be high, in my opinion. Still, considering the value of being effectively immortal in a significantly better future, even a 10% chance is highly valuable.
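As a back-of-the-envelope illustration of that math (the 10% is the survey's median estimate; the other two numbers are assumptions made up for the example):

    # Toy expected-value calculation; the value and cost figures are assumptions.
    p_success = 0.10               # survey's median estimate that cryonics works
    value_if_revived = 10_000_000  # dollar-equivalent value placed on revival
    cost = 200_000                 # rough lifetime membership plus funding cost

    expected_value = p_success * value_if_revived - cost
    print(expected_value)          # 800000.0: positive under these assumptions

The sign of the result is driven almost entirely by the value one places on revival, which is the point: at these stakes, even a 10% chance can dominate the cost.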

I wrote "Any course of action not involving going down and collecting the $100,000 would likely not be rational." I'm not ignoring opportunity costs and other motivations here. That is why I said "likely not be rational". I agree that in cryonics the opportunity costs are much higher than in my hypothetical example. I was attempting to establish the principle that action and belief should generally be in accord. That a large mismatch, as appears to me to be the case with cryonics, should call into question whether people are being rational. I don't deny that a rational agent could genuinely believe cryonics might work but place a low enough probability on it and have a high enough opportunity cost that they should choose not to sign up.


I'm glad to hear you think cryonics is very promising and should be getting a lot more research funding than it does. I'm hoping I will be able to make some improvement in that area.

I find your statement that the probability of cryonics working in common cases is very low interesting. It seems to me that the level of technology required to revive a cryonics patient preserved under ideal conditions today is so advanced that even patients preserved under less than ideal conditions would be revivable too. By less than ideal conditions, I mean a delay of some time before preservation.

Comment author: korin43 11 March 2017 04:00:06PM *  0 points [-]

I chose actions that increase your lifespan in general, since that's strictly better than only increasing the chance that, if you live long enough for it to matter, you will live longer than your natural lifespan.

Evaluating the expected value of cryonics is hard because it runs into the same problem as Pascal's Wager: a huge value in a low-probability case. I'm not really sure how to handle that.

The reasons I don't think it's likely to work right now are:

  • Current processes may not preserve human-sized brains well, even in ideal conditions (successful cryopreservation experiments seem to involve animals much smaller than a human brain)
  • Alcor may not do the preservation perfectly
  • The technology to reconstruct our brains from frozen ones may not be possible, or might be so far off that the brain is damaged before it becomes available
  • Alternatively, you could use whole-body preservation, but then the problems in my first point are significantly worse.
  • In non-ideal conditions, your brain is dead, breaking down, and losing information permanently. A sufficiently powerful AI might be able to make reasonable guesses, but it's not clear how much the person it creates would really be you after extensive damage.
  • The leading causes of death for people aged 15-34 are injury, suicide, and homicide. All of those have a high chance of involving trauma to the head, which makes things much worse. For example, someone who dies in a car crash is probably not going to get much value from cryonics. https://www.cdc.gov/injury/images/lc-charts/leading_causes_of_death_age_group_2014_1050w760h.gif

And this last one brings up my first point again: if I want to not die, it's much more effective to drive safely (or not drive), get adequate medical care, exercise, etc. than to focus on the small chance of surviving after my body is already dying.

Comment author: korin43 11 March 2017 03:20:13PM 0 points [-]

I just started doing this at my new job and found it extremely useful. I used to lose important mail in the backlog all the time, but now everything in my inbox is either unread or a reminder of a task I need to finish. I tend to leave my huge tasks in the inbox too, but I might change that if I start having a lot of them.

Comment author: korin43 05 March 2017 04:43:53PM *  1 point [-]

The first part was good. The ending seems to be making way too many assumptions about other people's motivations.

Consider that in a 2016 survey of Less Wrong users, only 48 of 1,660 or 2.9% of respondents answering the question said that they were “signed up or just finishing up paperwork” for cryonics. [Argument from authority here]. While this is certainly a much higher portion than the essentially 0% of Americans who are signed up for cryonics based on published membership numbers, it is still a tiny percentage when considering that cryonics is the most direct action one can take to increase the probability of living past one’s natural lifespan.

First off, this last sentence is probably wrong. The most direct actions you can take to increase your expected lifespan (beyond obvious things like eating) are to exercise regularly, avoid cars and extreme sports, and possibly make changes to your diet.

This objection is consistent with the fact that 515 or 31% of respondents to the question answered that they “would like to sign up,” but haven’t for various reasons. Beyond that, when asked “Do you think cryonics, as currently practiced by Alcor/Cryonics Institute will work?”, 71% of respondents answered yes or maybe.

I had to look through the survey data, but given that the median respondent said existing cryonics techniques have a 10% chance of working, it's not surprising that a majority haven't signed up for it. It's also very misleading how you group the "would like to" responses. 20% said they would like to but can't because it's either not offered where they live or they can't afford it. The relevant number for your argument is the 11% who said they would like to but haven't got around to it.

If a reliable and trustworthy source said that for the entire day, a major company or government was giving out $100,000 checks to everyone who showed up at a nearby location, what would be the rational course of action?

This example is exactly backwards for understanding why people don't agree with you about cryonics. Cryonics is very expensive and unlikely to work (right now), even in ideal scenarios (and I'm pretty sure that 10% median is for "will Alcor's process work at all?", not "how likely are you to survive cryonics if you die in a car crash thousands of miles away from their facility?").

Any course of action not involving going down and collecting the $100,000 would likely not be rational.

That ignores opportunity cost and motivations. If someone wants $100,000 more than whatever else they could be doing with that time, then yes. But as we see above, not everyone agrees that a tiny, tiny chance of living longer is worth (the opportunity cost of) hundreds of thousands of dollars.


And I should point out, I personally think cryonics is very promising and should be getting a lot more research funding than it does (not to mention not being so legally difficult), but I think the probability of it working in common cases (dying somewhere other than right next to Alcor's facility, with current technology) is very low.

In response to Humble Charlie
Comment author: korin43 27 February 2017 08:17:21PM 2 points [-]

In my series on GiveWell, I mentioned that my mother's friend Charlie, who runs a soup kitchen, gives away surplus donations to other charities, mostly ones he knows well. I used this as an example of the kind of behavior you might hope to see in a cooperative situation where people have convergent goals.

I recently had a chance to speak with Charlie, and he mentioned something else I found surprising: his soup kitchen made a decision not to accept donations online; they only take paper checks. Because they get enough money that way, they don't want to accumulate more money than they know how to use.

When I asked why, Charlie told me that it would be bad for the donors to support a charity if they haven't shown up in person to have a sense of what it does.

At first I was confused; this didn't seem like very consequentialist thinking. I briefly considered the possibility that Charlie was being naïve, or irrationally traditionalist, or pattern-matching to his idea of a good charity. But after thinking about it for a moment, I realized that Charlie was getting something deeply right that almost everyone gets wrong, at least where money is involved: he was trying to maximize benefits rather than costs, in a case where the costs are much easier to measure.

Comment author: niceguyanon 24 February 2017 04:29:46PM *  0 points [-]

Tangentially related: I'm surprised that students misjudge the cost of being late relative to the cost of arriving early. I suspect that people who insist on being exactly one minute early and no more are made up of two groups: the very efficient, and the best procrastinators, who are often late and who, when they are on time, get to pat themselves on the back for being efficient.

Getting to class early just to sit in the front row is the easiest way to boost your grade for most classes, IMO as an armchair psychologist.

Comment author: korin43 25 February 2017 02:11:23PM 1 point [-]

And if you're early, you can either talk to friends or read. I always try to show up at least ten minutes early to things and then use the extra time to do the reading I would have done at home later.

Comment author: Jiro 18 February 2017 04:39:40PM *  0 points [-]

This was about paying people at the price level of hiring a random person, not professional movers. I'm pretty sure the $20 guy off of Craigslist isn't insured when he breaks your vases, and there's also a chance that if the move goes bad he'll just disappear (no fixed business address). I'm also pretty sure that there's nothing in practice keeping him from saying "okay, now that it's all on our truck we won't unload unless you pay us $300", at which point you either pay, or sue him while he has physical custody of all your property.

Comment author: korin43 19 February 2017 03:15:54AM 0 points [-]

I'm not insured if I break my own vases, so how does this argue against my point that you should pay other people to move your stuff? If you also want insurance then you should hire a fancier moving company than I do.

Regarding the truck, I always rent my own truck and pay other people to pack it.
