Comment author: turchin 28 March 2016 10:31:06AM 0 points [-]

You could get truly random numbers using cosmic rays from remote quasars, but I think that true quantum randomness is not necessary in this case. Big world immortality could work anyway - there are many other Earths in the multiverse.
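As an aside: whatever the physical source (cosmic rays, quantum noise), turning raw measurements into usable random bits is a concrete algorithmic step. A minimal sketch of the classic von Neumann debiasing trick, with a simulated biased bit source standing in (as an assumption) for real cosmic-ray or quantum hardware:

```python
import random

def von_neumann_extract(bits):
    """Von Neumann debiasing: given a stream of independent but
    possibly biased bits, look at non-overlapping pairs.
    Pair (0, 1) emits 0, pair (1, 0) emits 1, and equal pairs
    (0, 0) or (1, 1) are discarded. The output bits are unbiased
    as long as the input bits are independent."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)  # a == 0 for (0,1); a == 1 for (1,0)
    return out

# Simulated physical source heavily biased toward 1 (75% ones);
# a real detector would replace this stand-in.
rng = random.Random(0)
raw = [1 if rng.random() < 0.75 else 0 for _ in range(10000)]
unbiased = von_neumann_extract(raw)
```

The output is shorter than the input (equal pairs are thrown away), which is the usual price of extracting unbiased bits from a biased source.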

Superposition may also not be necessary for QI to work. It may be useful if you want to create some kind of interaction between different outcomes, but that seems impossible for such a large system.

The main thing I would worry about, if I tried to use QI to survive x-risks, is that the death of all civilization should be momentary. If it is not momentary, there will be a period of time when observers know that a given risk has begun but have not died yet, and so they will be unable to "jump" to another outcome. Only false vacuum decay provides momentary death for everybody (though not exactly simultaneous, given Earth's size of roughly 12,000 km and the finite speed of light).

Another way of using QI to survive x-risks is to note that the me-observer must survive any x-risk, if QI is true. So any x-risk will have at least one survivor: one wounded man on an empty planet.

We could use this effect to ensure that a group of people survives, if we connect the me-observer with that group by a necessary condition of dying together. For example, we are all locked in a submarine full of explosives. In most of the worlds there are two outcomes: the whole crew of the submarine dies, or everybody survives.

If I am in such a submarine, and QI works, we - all the crew - probably survive any x-risk.

In short, the idea is to convert slow x-risks into a momentary catastrophe for a group of people. In the same way, we may use QI personally to fight slow death from aging, if we sign up for cryonics.

Comment author: AlexLundborg 28 March 2016 11:49:30AM 1 point [-]

Whether or not momentary death is necessary for multiverse immortality depends on which view of personal identity is correct. According to empty individualism, it should not matter that you know you will die; you will still "survive" without remembering having died, as if that memory had been erased.

Comment author: kotrfa 20 October 2015 09:01:03PM *  0 points [-]

I am so sorry about not appearing at the meeting - I got stuck on a train from the east for several hours. I should at least have posted here when I knew I couldn't make it. I am still really looking forward to meeting you guys.

What about meeting on November 3 (Tuesday)?

Comment author: AlexLundborg 01 November 2015 04:14:32PM 0 points [-]

Do you mean Monday or Tuesday? :)

Comment author: Spectral_Dragon 19 October 2015 03:46:59PM 0 points [-]

Turns out quantum mechanics is much too great a challenge at the moment. I wish both the other LWers a good day, and shall do my best to attend the next meeting instead - there is still interest on my part.

Comment author: AlexLundborg 20 October 2015 06:14:19PM 0 points [-]

Kotrfa never turned up, but another LWer did and we had a nice discussion! When is the next meeting? :)

Comment author: AlexLundborg 26 September 2015 02:02:54AM 0 points [-]

Count me in! I'm 20, also skinny and curly haired :)

Comment author: AlexLundborg 28 June 2015 08:12:52PM *  3 points [-]

You write that the orthogonality thesis "...states that beliefs and values are independent of each other", whereas Bostrom writes that it states that almost any level of intelligence is compatible with almost any values. Isn't that a deviation? Could you motivate the choice of words here? Thanks.

From The Superintelligent Will: "...the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal."

Comment author: Elo 24 June 2015 11:58:27PM 1 point [-]

Meta: you appear to have received various negative responses; I am not completely clear as to why.

I found this idea useful to discover; while I can't really see its applications in modifying the way I access the real world, it certainly does raise some interesting ethical ideas.

I immediately thought of a person X that I know in relation to the idea of ethics and consciousness. X (is real and) does not have the same ethics model as is commonly found in people. They value themselves over and beyond other humans, both near and far (while this is not unlike many people, it is particularly abundant in their life). A classic label for this is "having a big ego" or "narcissism". If consciousness is reduced to "nothing but brain chemicals", the value of other entities is considerably lower than the value an entity might put on itself (because it can). This does seem like an application of the fundamental attribution error (kind of a reverse typical-mind fallacy [AKA everyone else does not have a mind like me]), in that the value one places internally is higher than that which is placed on other, external entities.

When adding the idea that "not much makes up consciousness", potentially unethical narcissistic actions turn into boring, single-entity self-maximisation actions.

An entity which lacks the capacity to reflect outwardly to the same extent that it reflects inwardly would have a narcissism problem (if it is a problem).

Should we value outside entities as much as we do ourselves? Why?

Comment author: AlexLundborg 26 June 2015 03:49:57PM *  0 points [-]

Should we value outside entities as much as we do ourselves? Why?

Nate Soares recently wrote about problems with using the word "should" that I think are relevant here, if we assume meta-ethical relativism (if there are no objective moral shoulds). I think his post "Caring about something larger than yourself" could be valuable in providing a personal answer to the question, if you accept meta-ethical relativism.

Comment author: Epictetus 24 June 2015 03:59:35AM 2 points [-]

We can talk about sweet and sound being “out there” in the world but in reality it is a useful fiction of sorts that we are “projecting” out into the world.

I hate to put on my Bishop Berkeley hat. Sweet and sound are things we can directly perceive. The very notion of something being "out there" independent of us is itself a mental model we use to explain our perceptions. We say that our sensation of sweetness is caused by a thing we call glucose. We can talk of glucose in terms of molecules, but as we can't actually see a molecule, we have to speak of it in terms of the effect it produces on a measurement apparatus.

The same holds for any scientific experiment. We come up with a theory that predicts that some phenomenon is to occur. To test it, we devise an apparatus and say that the phenomenon occurred if we observe the apparatus behave one way, and that it did not occur if we observe the apparatus to behave another way.

There's a bit of circular reasoning. We can come up with a scientific explanation of our perception of taste or color, but the very science we use depends upon the perceptions it tries to explain. The very notion of a world outside of ourselves is a theory used to explain certain regularities in our perceptions.

This is part of what makes consciousness a hard problem. Since consciousness is responsible for our perception of the world, it's very hard to take an outside view and define it in terms of other concepts.

Comment author: AlexLundborg 24 June 2015 11:49:31AM *  0 points [-]

The very notion of something being "out there" independent of us is itself a mental model we use to explain our perceptions.

Yes, I think that's right; the conviction that something exists in the world is also an (unconscious) judgement made by the mind that could be mistaken. However, when we want to explain why we have the perceptual data, and its regularities, it makes sense to attribute them to external causes - but this conviction could perhaps be mistaken too. The underpinnings of rational reasoning seem to bottom out in unconsciously formed convictions as well: basic arithmetic is obviously true, but can I trust these convictions? Justifying logic with logic is indeed circular. At some point we just have to accept them in order to function in the world. The fact that these convictions are often useful suggests to me that we have some access to objective reality. But for all I know, we could be Boltzmann brains floating around in high entropy with false convictions. Despite this, I think the assessment that objective reality exists, and that our access to and knowledge of it is limited but expandable, is a sensible working hypothesis.

My recent thoughts on consciousness

0 AlexLundborg 24 June 2015 12:37AM

I have lately come to seriously consider the view that the everyday notion of consciousness doesn't refer to anything that exists out there in the world, but is rather a confused (but useful) projection made by purely physical minds onto their depiction of themselves in the world. The main influences on my thinking are Dan Dennett (I assume most of you are familiar with him) and, to a lesser extent, Yudkowsky (1) and Tomasik (2). To use Dennett's line of thought: we say that honey is sweet, that metal is solid, or that a falling tree makes a sound, but the character tags of sweetness and sound are not in the world but in the brain's internal model of it. Sweetness is not an inherent property of the glucose molecule; instead, we are wired by evolution to perceive it as sweet to reward us for calorie intake in our ancestral environment, and there is no need for non-physical sweetness-juice in the brain - no, it's coded (3). We can talk about sweetness and sound as if they were out there in the world, but in reality they are a useful fiction of sorts that we are "projecting" out into the world. The default model of our surroundings and ourselves that we use in our daily lives (the manifest image, or 'umwelt') is puzzling to reconcile with the scientific perspective of gluons and quarks. We can use this insight to look critically at how we perceive a very familiar part of the world: ourselves. It might be that we are projecting useful fictions onto our model of ourselves as well. Our normal perception of consciousness is perhaps like the sweetness of honey: something we think exists in the world, when it is in fact a judgement about the world made (unconsciously) by the mind.

What we are pointing at with the judgement "I am conscious" is perhaps the competence that we have to access states of the world, form expectations about those states and judge their value to us, coded in by evolution. That is, under this view, equivalent to saying that sugar is made of glucose molecules, not sweetness-magic. In everyday language we can talk about sugar as sweet and consciousness as "something-to-be-like-ness" or "having qualia", which is useful and probably necessary for us to function, but that is a somewhat misleading projection made by our world-accessing and assessing consciousness, which really does exist in the world. That notion of consciousness is not subject to the Hard Problem; it may not be an easy problem to figure out how consciousness works, but it does not appear impossible to explain it scientifically as pure matter, like anything else in the natural world, at least in theory. I'm pretty confident that we will solve consciousness, if by consciousness we mean the competence of a biological system to access states of the world, make judgements and form expectations. That is, however, not what most people mean when they say consciousness. Just as "real" magic refers to the magic that isn't real, while the magic that is real, that can be performed in the world, is not "real magic", "real" consciousness turns out to be a useful but misleading assessment (4). We should perhaps keep the word consciousness but adjust what we mean when we use it, for diplomacy.

Having said that, I still find myself baffled by the idea that I might not be conscious in the way I've found completely obvious before. Consciousness seems so mysterious and unanswerable, so it's not surprising that the explanation provided by physicalists like Dennett isn't the most satisfying. Despite that, I think it's the best explanation I've found so far, so I'm trying to cope with it the best I can. One of the problems I've had with the idea is how it has required me to rethink my views on ethics. I sympathize with moral realism, the view that there exist moral facts, by pointing to the strong intuition that suffering seems universally bad and well-being seems universally good. Nobody wants to suffer agonizing pain, everyone wants beatific eudaimonia, and it doesn't feel like an arbitrary choice to care about the realization of these preferences in all sentience to a high degree, instead of any other possible goal like paperclip maximization. It appeared to me to be an inescapable fact about the universe that agonizing pain really is bad (ought to be prevented) and that intelligent bliss really is good (ought to be pursued), just as a label the brain uses to distinguish wavelengths of light really is red, and that you can build up moral values from there. I have a strong gut feeling that the well-being of sentience matters, and that the more capacity a creature has for receiving pain and pleasure, the more weight it is given - say, a gradient from beetles to posthumans that could perhaps be understood by further inquiry into the brain (5). However, if it turns out that pain and pleasure aren't more than convincing judgements by a biological computer network in my head, no different in kind from any other computation or judgement, the sense of seriousness and urgency of suffering appears to fade away.
Recently, I've loosened up a bit and accepted a weaker grounding for morality: I still think that my own well-being matters, and I would be inconsistent if I didn't think the same about other collections of atoms that appear functionally similar to 'me', who also claim, or appear, to care about their well-being. I can't answer why I should care about my own well-being, though; I just have to. Speaking of 'me': personal identity also looks very different (nonexistent?) under physicalism than in the everyday manifest image (6).

Another difficulty I confront is why, under this explanation, e.g. colors and sounds look and sound the way they do, or why they have any quality at all. Where do they come from if they're only labels my brain uses to distinguish inputs from the senses? Where does the yellowness of yellow come from? Maybe it's not a sensible question, but only the murmuring of a confused primate. Then again, where does anything come from? If we can learn to quiet our bafflement about consciousness and sensibly reduce it down to physics - fair enough, but where does physics come from? That mystery remains, and it will possibly always be out of reach, at least until there are advanced superintelligent philosophers. For now, understanding how a physical computational system represents the world and creates judgements and expectations from perception presents enough of a challenge. It seems to be a good starting point to explore anyway (7).


I did not really put forth any particularly new ideas here; this is just some of my thoughts and repetitions of what I have read and heard others say, so I'm not sure whether this post adds any value. My hope is that someone will at least find some of my references useful, and that it can provide a starting point for discussion. Take into account that this is my first post here; I am very grateful to receive input and criticism! :-)

  1. Check out Eliezer's hilarious tear down of philosophical zombies if you haven't already
  2. http://reducing-suffering.org/hard-problem-consciousness/
  3. [Video] TED talk by Dan Dennett http://www.ted.com/talks/dan_dennett_cute_sexy_sweet_funny
  4. http://ase.tufts.edu/cogstud/dennett/papers/explainingmagic.pdf
  5. Reading “The Moral Landscape” by Sam Harris increased my confidence in moral realism. Whether moral realism is true or false can obviously have implications for approaches to the value learning problem in AI alignment, and for the factual accuracy of the orthogonality thesis
  6. http://www.lehigh.edu/~mhb0/Dennett-WhereAmI.pdf
  7. For anyone interested in getting a grasp of this scientific challenge I strongly recommend the book “A User’s Guide to Thought and Meaning” by Ray Jackendoff.



Edit: made some minor changes and corrections. Edit 2: made additional changes in the first paragraph for increased readability.

 


Comment author: Artaxerxes 22 June 2015 04:38:02AM 20 points [-]

A short, nicely animated adaptation of The Unfinished Fable of the Sparrows from Bostrom's book was made recently.

Comment author: AlexLundborg 22 June 2015 05:08:42AM *  11 points [-]

The same animation studio also made this fairly accurate and entertaining introduction to (parts of) Bostrom's argument, although I don't know what to make of their (subjective) probabilities for possible outcomes.