Comment author: turchin 28 March 2016 10:31:06AM 0 points [-]

You could get truly random numbers using cosmic rays from remote quasars, but I think that true quantum randomness is not necessary in this case. Big-world immortality could work anyway - there are many other Earths in the multiverse.

Superposition may also be unnecessary for QI to work. It could be useful if you want some kind of interaction between different outcomes, but that seems impossible for such a large system.

The main thing I would worry about, if I tried to use QI to survive x-risks, is that the death of all civilization should be momentary. If it is not momentary, there will be a period of time when observers know that a given risk has begun but have not yet died, and so they will be unable to "jump" to another outcome. Only false vacuum decay provides momentary death for everybody (though not exactly simultaneous, given Earth's size of about 12,000 km and the limited speed of light).

Another way to use QI against x-risks is to note that the me-observer must survive any x-risk, if QI is true. So any x-risk will have at least one survivor - one wounded man on an empty planet.

We could use this effect to ensure that a group of people survives, if we connect the me-observer with that group by a necessary condition of dying together. For example, we are all locked in a submarine full of explosives. In most worlds there are only two outcomes: the whole crew of the submarine dies, or everybody survives.

If I am in such a submarine, and QI works, then we - the whole crew - probably survive any x-risk.

In short, the idea is to convert slow x-risks into a momentary catastrophe for a group of people. In the same way, we might use QI personally to fight slow death from aging, if we sign up for cryonics.
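The coupling argument above can be illustrated with a toy Monte Carlo sketch (my own illustration, not anything rigorous about QI or the multiverse; the parameter values are arbitrary). Conditioning on the observer's survival only helps the rest of the crew if their fates are perfectly coupled to the observer's:

```python
import random

def crew_alive_fraction(coupled, worlds=100_000, crew=5, p_death=0.9):
    """Average fraction of the other crew members alive,
    conditioned on the me-observer surviving."""
    total, count = 0.0, 0
    for _ in range(worlds):
        if coupled:
            # Submarine full of explosives: all die together or none do.
            all_die = random.random() < p_death
            observer_dead = all_die
            others_alive = 0 if all_die else crew
        else:
            # Independent fates: each person dies with probability p_death.
            observer_dead = random.random() < p_death
            others_alive = sum(random.random() >= p_death for _ in range(crew))
        if not observer_dead:  # QI-style conditioning on observer survival
            total += others_alive / crew
            count += 1
    return total / count

print(crew_alive_fraction(coupled=True))   # 1.0
print(crew_alive_fraction(coupled=False))  # roughly 0.1
```

With coupled fates, every branch where the observer survives is a branch where everyone survives; with independent fates, the observer's survival tells you nothing about the others.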

Comment author: AlexLundborg 28 March 2016 11:49:30AM 1 point [-]

Whether or not momentary death is necessary for multiverse immortality depends on which view of personal identity is correct. According to empty individualism, it should not matter that you know you will die; you will still "survive" without remembering having died, as if that memory had been erased.

Comment author: kotrfa 20 October 2015 09:01:03PM *  0 points [-]

I am so sorry about not appearing at the meeting - I got stuck on a train from the east for several hours. I should have at least posted here when I knew I couldn't make it. I am still really looking forward to meeting you guys.

What about meeting on November 3 (Tuesday)?

Comment author: AlexLundborg 01 November 2015 04:14:32PM 0 points [-]

Do you mean Monday or Tuesday? :)

Comment author: Spectral_Dragon 19 October 2015 03:46:59PM 0 points [-]

Turns out quantum mechanics is much too great a challenge at the moment. I wish both other LWers a good day, and shall do my best to attend the next meeting instead - there is still interest from my part.

Comment author: AlexLundborg 20 October 2015 06:14:19PM 0 points [-]

Kotrfa never turned up, but another LWer did and we had a nice discussion! When is the next meeting? :)

Comment author: AlexLundborg 26 September 2015 02:02:54AM 0 points [-]

Count me in! I'm 20, also skinny and curly haired :)

Comment author: AlexLundborg 28 June 2015 08:12:52PM *  3 points [-]

You write that the orthogonality thesis "...states that beliefs and values are independent of each other", whereas Bostrom writes that it states that almost any level of intelligence is compatible with almost any values. Isn't that a deviation? Could you motivate the choice of words here? Thanks.

From The Superintelligent Will: "...the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal."

Comment author: Elo 24 June 2015 11:58:27PM 1 point [-]

Meta: You appear to have various negative responses; I am not completely clear as to why.

I found this idea useful to discover; while I can't really see how it applies to the way I interact with the real world, it certainly does raise some interesting ethical ideas.

I immediately thought of a person X I know in relation to this idea of ethics and consciousness. X (a real person) does not have the ethics model commonly found in people. They value themselves over and above other humans, both near and far (which is not unlike many people, but is particularly pronounced in their life). A classic label for this is "having a big ego" or "narcissism". If consciousness is reduced to "nothing but brain chemicals", the value of other entities is considerably lower than the value an entity might place on itself (because it can). This does seem like an application of the fundamental attribution error (and kind of a reverse typical-mind fallacy [i.e. "everyone else does not have a mind like mine"]): the value one places internally is higher than the value one places on external entities.

When we add the idea that "not much makes up consciousness", potentially unethical narcissistic actions turn into boring, single-entity self-maximisation actions.

An entity that lacks the capacity to reflect outwardly with the same capacity that it reflects inwardly would have a narcissism problem (if it is a problem).

Should we value outside entities as much as we do ourselves? Why?

Comment author: AlexLundborg 26 June 2015 03:49:57PM *  0 points [-]

Should we value outside entities as much as we do ourselves? Why?

Nate Soares recently wrote about problems with using the word "should" that I think are relevant here, if we assume meta-ethical relativism (i.e. that there are no objective moral shoulds). In that case, I think his post "Caring about something larger than yourself" could be valuable in providing a personal answer to the question.

Comment author: Epictetus 24 June 2015 03:59:35AM 2 points [-]

We can talk about sweet and sound being “out there” in the world but in reality it is a useful fiction of sorts that we are “projecting” out into the world.

I hate to put on my Bishop Berkeley hat. Sweet and sound are things we can directly perceive. The very notion of something being "out there" independent of us is itself a mental model we use to explain our perceptions. We say that our sensation of sweetness is caused by a thing we call glucose. We can talk of glucose in terms of molecules, but as we can't actually see a molecule, we have to speak of it in terms of the effect it produces on a measurement apparatus.

The same holds for any scientific experiment. We come up with a theory that predicts that some phenomenon is to occur. To test it, we devise an apparatus and say that the phenomenon occurred if we observe the apparatus behave one way, and that it did not occur if we observe the apparatus to behave another way.

There's a bit of circular reasoning. We can come up with a scientific explanation of our perception of taste or color, but the very science we use depends upon the perceptions it tries to explain. The very notion of a world outside of ourselves is a theory used to explain certain regularities in our perceptions.

This is part of what makes consciousness a hard problem. Since consciousness is responsible for our perception of the world, it's very hard to take an outside view and define it in terms of other concepts.

Comment author: AlexLundborg 24 June 2015 11:49:31AM *  0 points [-]

The very notion of something being "out there" independent of us is itself a mental model we use to explain our perceptions.

Yes, I think that's right: the conviction that something exists in the world is also an (unconscious) judgement made by the mind, and it could be mistaken. However, when we want to explain why we have the perceptual data, and its regularities, it makes sense to attribute it to external causes - though perhaps that conviction too could be mistaken. The underpinnings of rational reasoning seem to bottom out in unconsciously formed convictions as well; basic arithmetic is obviously true, but can I trust these convictions? Justifying logic with logic is indeed circular. At some point we just have to accept them in order to function in the world. The fact that these convictions are often useful suggests to me that we have some access to objective reality. But for all I know, we could be Boltzmann brains floating around in high entropy with false convictions. Despite this, I think the assessment that objective reality exists, and that our access to and knowledge of it is limited but expandable, is a sensible working hypothesis.

Comment author: Artaxerxes 22 June 2015 04:38:02AM 20 points [-]

A short, nicely animated adaptation of The Unfinished Fable of the Sparrows from Bostrom's book was made recently.

Comment author: AlexLundborg 22 June 2015 05:08:42AM *  11 points [-]

The same animation studio also made this fairly accurate and entertaining introduction to (parts of) Bostrom's argument, although I don't know what to make of their (subjective) probabilities for the possible outcomes.