
Comment author: 3p1cd3m0n 04 January 2015 08:12:28PM *  1 point [-]

If I understand Solomonoff Induction correctly, for all n and p the sum of the probabilities of all the hypotheses of length n equals the sum of the probabilities of all the hypotheses of length p. If this is the case, what normalization constant could you possibly use to make all the probabilities sum to one? It seems there are none.

Comment author: pengvado 07 January 2015 12:58:28PM *  2 points [-]

Use a prefix-free encoding for the hypotheses. There aren't 2^n hypotheses of length n: some of the length-n bitstrings are incomplete, and you'd need to add more bits in order to get a hypothesis; others are actually a length-<n hypothesis plus some gibberish on the end.

Then the sum of the probabilities of all programs of all lengths combined is at most 1 (that's the Kraft inequality). After excluding the programs that don't halt, the normalization constant is Chaitin's Omega.
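
A minimal sketch of why that works (my own toy construction, not anything described above): pick a concrete self-delimiting code, and the implied weights 2^-length over hypotheses of every length sum to a constant no greater than one, which is exactly what makes normalization possible.

    # Toy prefix-free code (hypothetical, for illustration only): encode a
    # hypothesis bitstring by doubling each bit (0 -> "00", 1 -> "11") and
    # appending the terminator "01", so no codeword is a prefix of another.
    from itertools import product

    def encode(bits):
        """Self-delimiting code: double each bit, then append the '01' terminator."""
        return "".join(b * 2 for b in bits) + "01"

    total = 0.0
    for k in range(15):                       # hypotheses of length 0..14
        for h in product("01", repeat=k):
            total += 2.0 ** -len(encode("".join(h)))

    print(total)  # approaches 0.5 as k grows, and can never exceed 1 (Kraft's inequality)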

Comment author: maxikov 02 December 2014 08:02:29AM 8 points [-]

Good futurology is different from storytelling in that it tries to make as few assumptions as possible. How many assumptions do we need to allow cryonics to work? Well, a lot.

  • The true point of no return has to be indeed much later than we believe it to be now. (Besides, does it even exist at all? Maybe a super-advanced civilization could collect enough information to backtrack every single process in the universe down to the point of one's death. Or maybe not.)

  • Our vitrification technology is not a secure erase procedure. Pharaohs also thought that their mummification technology was not a secure erase procedure. Even though we have orders of magnitude more evidence to believe we're not mistaken this time, ultimately, it's the experiment that judges.

  • Timeless identity is correct, and it's you rather than your copy that wakes up.

  • We will figure out brain scanning.

  • We will figure out brain simulation.

  • Alternatively, we will figure out nanites, and a way to make them work through the ice.

  • We will figure out all that sooner than the expected time of the brain being destroyed by: slow crystal formation; power outages; earthquakes; terrorist attacks; meteor strikes; going bankrupt; economic collapse; nuclear war; unfriendly AI; etc. That's similar to longevity escape velocity, although slower: to survive, you don't just have to advance the technologies, you have to advance them fast enough.

All that combined, the probability of it working out is really darn low. Yes, it is much better than zero, but still low. If I were to play Russian roulette, I would be happy to learn that instead of six bullets I'm playing with five. However, this relief would not stop me from being extremely motivated to remove even more bullets from the cylinder.
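
To make the "all that combined" point concrete, here's a toy calculation; the individual probabilities are invented for illustration, not estimates anyone above has endorsed:

    # Multiplying hypothetical per-assumption probabilities: even when each step
    # looks fairly likely, the conjunction shrinks quickly.
    assumptions = {
        "info-theoretic death comes late enough": 0.8,
        "vitrification is not a secure erase": 0.7,
        "identity survives scanning/uploading": 0.8,
        "scanning + simulation (or nanites) get developed": 0.6,
        "the brain and the organization survive until then": 0.5,
    }

    p = 1.0
    for prob in assumptions.values():
        p *= prob

    print(round(p, 3))  # 0.134 -- much better than zero, but still low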

The reason why belief in an afterlife is not just neutral but harmful for modern people is that it demotivates them from doing immortality research. Dying sure is scary, but we won't truly die, so problem solved, let's do something else. And I'm worried about cryonics becoming this kind of comforting story for transhumanists. Yes, actually removing one bullet from the cylinder is much, much better than hoping that Superman will appear at the last moment and stop the bullet. But stopping after removing just one bullet isn't a good idea either. Some amount of resources is devoted to conventional longevity research, but as far as I understand, we're not expecting to achieve longevity escape velocity for currently living people, especially adults. Cryonics appears to be our only chance to avoid death, and I would be extremely motivated to try to make that chance as high as we can possibly make it. And I don't think we're trying hard enough.

Comment author: pengvado 05 December 2014 10:56:04AM 0 points [-]

The true point of no return has to be indeed much later than we believe it to be now.

Who is "we", and what do "we" believe about the point of no return? Surely you're not talking about ordinary doctors pronouncing medical death, because that's just irrelevant (pronouncements of medical death are assertions about what current medicine can repair, not about information-theoretic death). But I don't know what other consensus you could be referring to.

Comment author: MockTurtle 20 November 2014 11:43:47AM 0 points [-]

How do people who sign up to cryonics, or want to sign up to cryonics, get over the fact that if they died, there would no longer be a mind there to care about being revived at a later date? I don't know how much of it is morbid rationalisation on my part, just because signing up to cryonics in the UK seems not quite as reliable or easy as in the US somehow, but it still seems like a real issue to me.

Obviously, when I'm awake, I enjoy life, and want to keep enjoying life. I make plans for tomorrow, and want to be alive tomorrow, despite the fact that in between, there will be a time (during sleep) when I will no longer care about being alive tomorrow. But if I were killed in my sleep, at no point would I be upset - I would be unaware of it beforehand, and my mind would no longer be active to care about anything afterwards.

I'm definitely confused about this. I think the central confusion is something like: why should I be willing to spend effort and money at time A to ensure I am alive at time C, when I know that I will not care at all about this at an intermediate time B?

I'm pretty sure I'd be willing to pay a certain amount of money every evening to lower some artificial probability of being killed while I slept. So why am I not similarly willing to pay a certain amount to increase the chance I will awaken from the Dreamless Sleep? Does anyone else think about this before signing up for cryonics?

Comment author: pengvado 03 December 2014 06:48:18PM 1 point [-]

I think your answer is in The Domain of Your Utility Function. That post isn't specifically about cryonics, but is about how you can care about possible futures in which you will be dead. If you understand both of the perspectives therein and are still confused, then I can elaborate.

Comment author: [deleted] 16 September 2014 05:30:12PM *  1 point [-]

You're correct that this includes problems where UDT performs poorly, and that UDT is by no means the One Final Answer.

What problems does UDT fail on?

my goal is to motivate the idea that we don't know enough about decision theory yet to be comfortable constructing a system capable of undergoing an intelligence explosion.

Why would a self-improving agent not improve its own decision theory to reach an optimum without human intervention, given a "comfortable" utility function in the first place?

Comment author: pengvado 18 September 2014 08:42:44AM 3 points [-]

Why would a self-improving agent not improve its own decision theory to reach an optimum without human intervention, given a "comfortable" utility function in the first place?

A self-improving agent does improve its own decision theory, but it uses its current decision theory to predict which self-modifications would be improvements, and broken decision theories can be wrong about that. Not all starting points converge to the same answer.
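
A minimal sketch of that failure mode, with made-up policies and payoffs: the agent ranks candidate successors using whatever evaluation it currently has, so a systematic error in that evaluation can survive every round of self-modification.

    # Toy illustration (my own, not from the thread) of why self-improvement
    # doesn't automatically fix a broken decision theory.
    TRUE_VALUE = {"greedy": 5.0, "careful": 8.0}   # what each policy actually earns

    def broken_estimate(policy):
        """A flawed current decision theory: it systematically overrates 'greedy'."""
        return TRUE_VALUE[policy] + (4.0 if policy == "greedy" else 0.0)

    def choose_successor(estimate, candidates):
        """Self-modification step: adopt whichever candidate the CURRENT theory rates highest."""
        return max(candidates, key=estimate)

    successor = choose_successor(broken_estimate, ["greedy", "careful"])
    print(successor)              # 'greedy' -- the broken theory endorses staying broken
    print(TRUE_VALUE[successor])  # 5.0, even though 'careful' would really earn 8.0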

Comment author: dankane 16 September 2014 02:11:07AM 1 point [-]

Only if the adversary makes its decision to attempt extortion regardless of the probability of success.

And therefore the extortioner's optimal strategy is to extort independently of the probability of success. Actually, this is probably true in a lot of real cases (say, ransomware) where the extortioner cannot actually ascertain the probability of success ahead of time.

Comment author: pengvado 16 September 2014 05:00:22AM 2 points [-]

That strategy is optimal if and only if the probability of success was reasonably high after all. Otoh, if you put an unconditional extortioner in an environment mostly populated by decision theories that refuse extortion, then the extortioner will start a war and end up on the losing side.
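
A toy expected-payoff calculation (the numbers are mine and purely illustrative) of what "optimal only if the probability of success is high" looks like for an unconditional extortioner:

    # Expected value for an extortioner that always extorts, as a function of how
    # often its targets give in.
    PAYOFF = {
        "no_extortion": 0.0,        # baseline: don't extort, nothing happens
        "target_pays": 10.0,        # extortion attempted, target caves
        "war_with_refuser": -50.0,  # extortion attempted, target carries out its refusal
    }

    def expected_payoff_of_extorting(p_success):
        return (p_success * PAYOFF["target_pays"]
                + (1 - p_success) * PAYOFF["war_with_refuser"])

    for p in (0.9, 0.5, 0.1):
        print(p, expected_payoff_of_extorting(p))
    # Extorting beats not extorting (0.0) only when p_success exceeds 50/60 ~ 0.83;
    # in a population that mostly refuses, the unconditional extortioner loses.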

Comment author: [deleted] 15 July 2014 04:54:45AM *  2 points [-]

Imagine there were drugs that could remove the sensation of consciousness. However, that's all they do. They don't knock you unconscious like an anaesthetic; you still maintain motor functions, memory, sensory, and decision-making capabilities. So you can still drive a car safely, people can still talk to you coherently, and after the drugs wear off you'll remember what things you said and did.

That doesn't make any sense to me. If you were on that drug and I asked you "how do you feel?" and you said "I feel angry" or "I feel sad"... that would be a conscious experience. I don't think the setup makes any sense. If you are going about your day doing your daily things, you are conscious. And this has nothing to do with remembering what happened -- as I said in a different reply, you are also conscious in the grandparent's sense when you are dreaming, even if you don't remember the dream when you wake up.

Comment author: pengvado 15 July 2014 03:17:36PM 1 point [-]

Jbay didn't specify that the drug has to leave people able to answer questions about their own emotional state. And in fact there are some people who can't do that, even though they're otherwise functional.

Comment author: Squark 26 March 2014 08:51:30PM *  -1 points [-]

I'm probably explaining myself poorly.

I'm suggesting that there should be a mathematical operator which takes a "digitized" representation of an agent, either in white-box form (e.g. an uploaded human brain) or in black-box form (e.g. chatroom logs), and produces a utility function. There is nothing human-specific in the definition of the operator: it can just as well be applied to, e.g., another AI, an animal, or an alien. It is the input we provide to the operator that selects a human utility function.

Comment author: pengvado 27 March 2014 12:10:21AM *  0 points [-]

There are many such operators, and different ones give different answers when presented with the same agent. Only a human utility function distinguishes the right way of interpreting a human mind as having a utility function from all of the wrong ways of interpreting a human mind as having a utility function. So you need to get a bunch of Friendliness Theory right before you can bootstrap.
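
A toy sketch of that underdetermination (my own example, with invented choice data): two different "operators" read the same black-box log and return contradictory utility functions, each internally consistent with the data.

    # Two interpretations of the same behavior, neither privileged by the data itself.
    choice_log = [("apple", "bread"), ("apple", "cake")]   # (chosen, rejected) pairs

    def operator_rational(log):
        """Interprets the agent as choosing the higher-utility option."""
        u = {}
        for chosen, rejected in log:
            u[chosen] = u.get(chosen, 0) + 1
            u[rejected] = u.get(rejected, 0) - 1
        return u

    def operator_antirational(log):
        """Interprets the agent as compulsively choosing the lower-utility option."""
        return {k: -v for k, v in operator_rational(log).items()}

    print(operator_rational(choice_log))      # {'apple': 2, 'bread': -1, 'cake': -1}
    print(operator_antirational(choice_log))  # {'apple': -2, 'bread': 1, 'cake': 1}
    # Nothing in the log says which interpretation is "right"; that judgment has to
    # come from somewhere outside the operator.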

Comment author: ArisKatsaris 30 January 2014 01:43:25AM 1 point [-]

Can anyone recommend a good replacement for flagfic.com? This was a site that could download stories from various archives (fanfiction.net, fimfiction.net, etc.), convert them to various e-reader formats, and email them to you. I used it to email fanfics I wanted to read directly to my Kindle as .mobi files.

Comment author: pengvado 30 January 2014 12:14:48PM 0 points [-]

fanficdownloader. I haven't tried the webapp version of it, but I'm happy with the CLI.

In response to Why CFAR?
Comment author: pengvado 07 January 2014 08:54:39AM *  53 points [-]

I donated $40,000.00

Comment author: pragmatist 04 January 2014 07:27:22AM 1 point [-]

A low entropy microstate takes fewer bits to specify once you're given the macrostate to which it belongs, since low entropy macrostates are instantiated by fewer microstates than high entropy ones. But I don't see why that should be the relevant way to determine simplicity. The extra bits are just being smuggled into the macrostate description. If you're trying to simply specify the microstate without any prior information about the macrostate, then it seems to me that any microstate -- low or high entropy -- should take the same number of bits to specify, no?

Comment author: pengvado 04 January 2014 01:42:57PM *  2 points [-]

If you can encode microstate s in n bits, that implies you have a prior that assigns P(s) = 2^-n. The set of all possible microstates is countably infinite. There is no such thing as a uniform distribution over a countably infinite set. Therefore, even the ignorance prior can't assign equal-length bitstrings to all microstates.
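
A small numerical companion to that argument (my own illustration): a prior over countably many microstates is possible, but only by giving different states different code lengths, e.g. the Elias gamma code below; a genuinely uniform assignment can't be normalized.

    from math import floor, log2

    def gamma_length(i):
        """Codeword length of the Elias gamma code for the positive integer i."""
        return 2 * floor(log2(i)) + 1

    # Sum of the implied probabilities 2^-length over the first ~10^6 microstates:
    total = sum(2.0 ** -gamma_length(i) for i in range(1, 2**20))
    print(total)  # just under 1.0; the full infinite sum is exactly 1

    # By contrast, a "uniform" prior fails: any fixed p > 0 gives total mass
    # p * infinity, and p = 0 gives total mass 0 -- neither normalizes to 1.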
