All of Shamash's Comments + Replies

Answer by Shamash*31

I think the simplest way to answer this is to introduce a new scenario. Let's call it Scenario 0. Scenario 0 is similar to Scenario 1, but in this case your body is not disintegrated. The result seems pretty clear: you are unaffected and continue living life on Earth. Other yous may be living their own lives in space, but it isn't as if there is some kind of metaphysical consciousness link that connects you to them.

And so, in scenarios 1 and 2, where the earth-you is disintegrated, well, you're dead. But not to worry! The normal downsides of death (pain, in... (read more)

As a whole, I find your intuition of a good future similar to my intuition of a good future, but I do think that once it is examined more closely there are a few holes worth considering. I'll start by listing the details I strongly agree with, then the ones I am unsure of, and then the ones I strongly disagree with. 

Strongly Agree

  • It makes sense for humans to modify their memories and potentially even their cognitive abilities depending on the circumstance. The example provided of a worldbuilder sealing off their memories to properly enjoy their world
... (read more)
1Michael Soareverix
Thanks! I think I can address a few of your points with my thoughts. (Also, I don't know how to format a quote so I'll just use quotation marks)

"It seems inefficient for this person to be disconnected from the rest of humanity and especially from "god". In fact, the AI seems like it's too small of an influence on the viewpoint character's life."

The character has chosen to partially disconnect themselves from the AI superintelligence because they want to have a sense of agency, which the AI respects. It's definitely inefficient, but that is kind of the point. The AI has a very subtle presence that isn't noticeable, but it will intervene if a threshold is going to be crossed. Some people, including myself, instinctively dislike the idea of an AI controlling all of our actions and would like to operate as independently as possible from it.

"The worlds with maximized pleasure settings sound a little dangerous and potentially wirehead-y. A properly aligned AGI probably would frown on wireheading."

I agree. I imagine that these worlds have some boundary conditions. Notably, the pleasure isn't addictive (once you're removed from it, you remember it being amazing but don't feel an urge to necessarily go back) and there are predefined limits, either set by the people in them or by the AI. I imagine a lot of variation in these worlds, like a world where your sense of touch is extremely heightened and turned into pleasure and you can wander through feeling all sorts of ecstatic textures.

"If you create a simulated world where simulated beings are real and have rights, that simulation becomes either less ethical or less optimized for your utility. Simulated beings should either be props without qualia or granted just as much power as the "real" beings if the universe is to be truly fair."

The simulation that the character has built (the one I intend to build) has a lot of real people in it. When those people 'die', they go back to the real world and can choose to be re

This post was engaging enough to read in full, which I consider to be fairly high praise.

However, I think that it's lacking in some respects, namely:

  • I don't really see a central point or theme other than "What if creation myths were actually aliens?" which isn't enough to justify something of this length on its own. Unless the narrator is meant to be wrong about humanity, in which case it needs to be signaled more clearly. 
  • Quite a bit of the narrative is full of details and information that don't seem to contribute apart from being a reference to anci
... (read more)
2Alex Beyman
I appreciate your readership and insights. Some of these challenges have answers, some were just oversights on my part.

  1. The central theme was about having the courage to reject an all-powerful authority on moral grounds even if it means eternal torment, rather than endlessly rationalizing and defending its wrongdoing out of fear. "Are you a totalitarian follower who receives morality from authority figures, or are you able to determine right and wrong on your own despite pressure to conform?" is the real moral test of the Bible, in this story, rather than being a test of obedience and self-denial.
  2. Many ancient cultures have myths about intelligent reptiles or sea people who taught them mathematics, astronomy, and medicine, as well as what could be construed as UFOs. It isn't necessary to the plot, you're right of course, but it's there for worldbuilding.
  3. The breeding pair brought to the habitat had their memories erased. I intended this to mean they were reverted to a nearly feral state, but I suppose it's still in question how much they would forget, and whether they would forget language and need to reinvent it. This could probably have used more thought.

I've read this post three times through and I still find it confusing. Perhaps it would be most helpful to say the parts I do understand and agree with, then proceed from there. I agree that the information available to hirers about candidates is relatively small, and that the future in general is complicated and chaotic.

I suppose the root of my confusion is this: won't a long-term extrapolation of a candidate's performance just magnify any inaccuracies that the hirer has mistakenly inferred from what they already know about the candidate? Isn't the most a... (read more)

1intellectronica
Thanks a lot for the helpful feedback! I completely agree that the risk is trying to invent magical information about the future by inferring from what we know in the present. The reason I find this format helpful (and hoped the reader would too) is that it highlights the absurdity of trying to make a prediction based on the limited, currently available information. As a result, I'm more likely to take a rational approach: starting with the base rate (rather than anchoring on current information that is unlikely to be all that relevant for the future), making very small corrections based on the information I have (instead of the very bold predictions that people often make), and making uncertainty explicit in the process. I can see that it's possible to go through this process making the same mistakes you'd make if you started from the beginning, but I find it harder. If you haven't yet, give it a try (using this end-to-beginning process) when you have a low-certainty / high-stakes decision to make - maybe you'll find that it works for you too.
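To make that process concrete, here is a minimal sketch of "start from the base rate, make a small correction, keep uncertainty explicit" in Python. The base rate and likelihood ratio are invented numbers purely for illustration, not anything from the post.

```python
# A minimal sketch of the "base rate first, small corrections" process
# described above. All numbers are invented for illustration.

def bayes_update(base_rate: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = base_rate / (1.0 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

base_rate = 0.20        # hypothetical: fraction of past hires who worked out long-term
weak_evidence_lr = 1.3  # a deliberately small correction from present information
estimate = bayes_update(base_rate, weak_evidence_lr)
print(f"posterior ~ {estimate:.2f}")  # ~0.25: still close to the base rate
```

The point of the small likelihood ratio is exactly the one made above: current information is rarely strong enough to justify a bold departure from the base rate.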

While one's experience and upbringing are highly impactful on their current mental state, they are not unique in that regard. There are a great number of factors that lead to someone being what they are at a particular time, including their genetics, their birth conditions, the health of their mother during pregnancy, and so on. It seems to me that the claim that "everyone is the same but experiencing life from a different angle" is not really saying much at all, because the scope of the differences two "angles" may have is not bounded. You come to the sam... (read more)

Consider the following thought experiment: You discover that you've just been placed into a simulation, and that every night at midnight you are copied and deleted instantaneously, and in the next instant your copy is created where the original once was. Existentially terrified, you go on an alcohol and sugary treat binge, not caring about the next day. After all, it's your copy who has to suffer the consequences, right? Eventually you fall asleep. 

The next day you wake up hungover as all hell. After a few hours of recuperation, you consider what has ... (read more)

Shamash350

Shortly after the Dagger of Detect Evil became available to the public, Wiz's sales of the Dagger of Glowing Red skyrocketed.

There are a few ways to look at the question, but by my reasoning, none of them result in the answer "literally infinite."

From a deterministic point of view, the answer is zero degrees of freedom, because whatever choice the human "makes" is the only possible choice he/she could be making. 

From the perspective of treating decision-making as a black box which issues commands to the body, the number of commands that the body can physically comply with is limited. Humans have only a finite number of nerve cells with which to issue these commands. Therefore, the set of commands that can be sent through these nerves at any given time must also be finite.

2gbear605
True, without a source of randomness there are technically only finitely many states that a human brain can decide on. So I suppose it's not literally infinite, but it still gets us to 2^(number of neurons in a brain), which is many more states than a human brain could experience in the lifetime of the universe. Of course, many of those states are fundamentally broken and would just look like a seizure, so perhaps all of those should be collapsed together.
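A back-of-the-envelope check of that comparison. Treating each of the brain's roughly 8.6e10 neurons as a binary unit is a gross oversimplification, and the counts below are rough assumptions, but the gap is so large that the details don't matter:

```python
import math

# Back-of-the-envelope check of the comparison above, treating each of the
# brain's ~8.6e10 neurons as a binary unit (a gross oversimplification).

neurons = 8.6e10
log10_states = neurons * math.log10(2)
print(f"2^(number of neurons) ~ 10^{log10_states:.2e}")  # ~10^(2.6e10)

seconds_since_big_bang = 4.3e17   # ~13.8 billion years
planck_time = 5.4e-44             # seconds
moments_available = seconds_since_big_bang / planck_time
print(f"moments at one state per Planck time: ~{moments_available:.1e}")  # ~8e60
# ~8e60 distinct moments vs ~10^(26 billion) states: a brain samples a
# vanishingly small fraction of its possible states, as the reply says.
```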
Shamash130

While I am not technically a "New User" in the context of the age of my account, I comment very infrequently, and I've never made a forum-level post. 

I would rate my own rationality skills and knowledge as slightly above those of the average person, but below those of the average active LessWrong member. While I am aware that I possess many habits and biases that reduce the quality of my written content, I have the sincere goal of becoming a better rationalist.

There are times when I am unsure whether an argument or claim that seems incorrect is flawed or if it is ... (read more)

I think there's a real danger of that, in practice.

But I've had lots of experience with "my style of moderation/my standards" being actively good for people taking their first steps toward this brand of rationalism; lots of people have explicitly reached out to me to say that e.g. my FB wall allowed them to take just those sorts of first, flawed steps.

A big part of this is "if the standards are more generally held, then there's more room for each individual bend-of-the-rules."  I personally can spend more spoons responding positively and cooperatively t... (read more)

When I brought up Atlantis, I was thinking of a version populated by humans, like in the Disney film. I now realize that I should have made this clear, because there are a lot of depictions of Atlantis in fiction and many of them are not inhabited by humans. To resolve this issue, I'll use Shangri-La as an example of an ostensibly hidden group of humans with advanced technology instead. 

To further establish distinct terms, let Known Humans be the category of humanity (homo sapiens) that publicly exists and is known to us. Let Unknown Humans be the cat... (read more)

Answer by Shamash60

Let's say we ignore mundane explanations like meteorological phenomena, secret military tech developed by known governments, and weather balloons. Even in that case, why jump to extraterrestrial life?

Consider, say, the possibility that these UFOs are from the hyper-advanced hidden underwater civilization of Atlantis. Sure, this is outlandish. But I'd argue that it's at least as likely as an extraterrestrial origin. We know that humans exist, we know that Atlantis would be within flying distance, there are reasonable explanations for why Atlantis would wan... (read more)

2Michaël Trazzi
Sure, in this scenario I think "Atlantis" would count as "aliens" somehow. Anything that is not from 2021 humans, really; even humans who started their own private lab in the forest in 1900 and discovered new tech are "not part of humanity's knowledge". It's maybe worth distinguishing between "humans in 2021", "homo sapiens originated civilization not from 2021", "Earth but not homo sapiens" (eg Atlantis), and extraterrestrial life (aka "aliens").

As for why we should jump to alien civilizations being on Earth, there are arguments on how a sufficiently advanced civilization could go for a fast space colonization. Other answers to the Fermi paradox even consider alien civilizations to be around the corner but just inactive, and in that case one might consider that humans reaching some level of technological advancement might trigger some defense mechanism? I agree that this might fall into the conjunction fallacy and we may want to reject it using Occam's razor. However, I found the "inactive" theory one of the most "first principles" answers to the Fermi paradox out there, so the "defense mechanism" scenario might be worth considering (it's at least more reasonable than aliens visiting from another galaxy).

I guess there's also the unknown unknowns about how the laws of physics work - we've only been considering the limit to speed being the speed of light for less than a century, so we might find ways of bypassing it (eg with wormholes) before the end of the universe.

Could you elaborate on what exactly you mean by many worlds QM? From what I understand, this idea seems only to have relevance in the context of observing the state of quantum particles. Unless we start making macro-level decisions about how to act through Schrodinger's Cat scenarios, isn't many worlds QM irrelevant?

2Pattern
How they might be different from a 'single world situation':

  • Quantum effects have some bearing on computation, or can produce 'strange probabilistic effects'.
  • 'How do these quantum computations work? How are they so powerful? The answer to this question might be important'

How they might be the same:

  • Expected value matters. Not just in expectation, but 'there's a world for that' (for the correct distribution).

Real world applications I've heard of:

  • quantum pseudo-telepathy*
  • counterfactual computation
  • transmissions that can't be intercepted (or break if they are observed) - some sort of quantum security
  • changing the way we see information
  • a new (much better than classical) quantum algorithm is designed/discovered; then a better classical algorithm is proposed based on it that makes up for (a lot of) the gap
  • better/cheaper randomness?
  • changing the way we think about information/computation/physics/math/probability

*This one uses measuring entangled particles. Maybe if you condition actions based on a quantum source of randomness, that changes what happens in the multiverse relative to a deterministic protocol.
1Quintin Pope
Standard quantum mechanics models small, unobserved quantum systems as probability distributions over possible observable values, meaning there's no function that gives you a particle's exact momentum at a given time. Instead, there's a function that gives you a probability distribution over possible momentum values at a given time. Every modern interpretation of quantum mechanics predicts nearly the same probability distributions for every quantum system.

Many worlds QM argues that, just as small, unobserved quantum systems are fundamentally probabilistic, so too is the wider universe. Under many worlds, there exists a universal probability distribution over states of the universe. Different "worlds" in the many worlds interpretation equate to different configurations of the observable universe. If many worlds is true, it implies there are alternate versions of ourselves who we can't communicate with. However, the actions that best improve humanity's prospects in a many worlds situation may be different from the best actions in a single world situation.
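As a concrete illustration of "a probability distribution over possible observable values", here is the textbook Born rule for a single qubit; this much is shared by every interpretation, including many worlds. The state vector is an arbitrary example, not anything from the comment.

```python
import numpy as np

# The Born rule for a single qubit: measurement outcomes get a probability
# distribution given by the squared magnitudes of the state's amplitudes.
# The state vector here is an arbitrary example.

state = np.array([3/5, 4j/5])        # normalized: (3/5)^2 + (4/5)^2 = 1
probabilities = np.abs(state) ** 2   # p(outcome k) = |amplitude_k|^2
print(probabilities)                 # [0.36 0.64]
print(probabilities.sum())           # 1.0 -- a valid probability distribution
```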

Is AGI even something that should be invested in on the free market? The nature of most financial investments is that individuals expect a return on their investment. I may be wrong, but I can't really envision a friendly AGI being created with the purpose of creating financial value for its investors. I mean, sure, technically if friendly AGI is created the investors will almost certainly benefit regardless because the world will become a better place, but this could only be considered an investment in a rather loose sense. Investing in AGI won't provide any significant returns until AGI is created, and at that point it is likely that stock ownership will not matter. 

1TekhneMakre
It's also possible that investing not in "what's most likely to make AGI" but "what's most likely to make me money before AGI based on market speculation" is throwing fuel on the ungrounded-speculation bonfire. Which attracts sociopaths rather than geeks. Which cripples real AGI efforts. Which is good. (Not endorsing this, just making a hypothesis.)
Shamash150

I'm a gay cis male, so I thought that the author and/or other members of this forum might find my perspective on the topic interesting. 

The confusion between finding someone sexually attractive and wishing you had their body is common enough in the online gay community to earn its own nickname: jealusty. It seems that this is essentially the gay version of autogynephilia, in a sense. As I read the blog post, I briefly wondered whether fantasies of a better body could contribute to homosexuality somehow, but that doesn't really fit the pattern you pres... (read more)

I try to do a lot of research on autogynephilia and related topics, and I think there's some things that are worth noting:

  1. Autogynephilia appears to be fairly rare in the general population of males; I usually say 3%-15%, though it varies from study to study depending on hard-to-figure-out things. My go-to references for prevalence rates are this and this paper. (And this is for much weaker degrees of autogynephilia than Zack's.) So it's not just about having a body that one finds attractive, there needs to be some ?other? factor before one ends up autogyne
... (read more)

It seems to me that compromise isn't actually what you're talking about here. An individual can have strongly black-and-white and extreme positions on an issue and still be good at making compromises. When a rational agent agrees to compromise, this just implies that the agent sees the path of compromise as the most likely to achieve their goals. 

For example, let's say that Adam slightly values apples (U=1) and strongly values bananas (U=2), while Stacy slightly values bananas (U=1) and strongly values apples (U=2). Assume these are their only val... (read more)
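A quick worked version of the Adam/Stacy numbers above. The swap scenario is my own illustration of why compromise can be the utility-maximizing move, not something taken from the truncated comment:

```python
# The Adam/Stacy utilities from the example above. The swap is an added
# illustration of compromise as a utility-maximizing move for both agents.

adam_utility  = {"apple": 1, "banana": 2}
stacy_utility = {"apple": 2, "banana": 1}

# Suppose each starts holding the fruit they value less.
adam_has, stacy_has = "apple", "banana"
total_before = adam_utility[adam_has] + stacy_utility[stacy_has]   # 1 + 1 = 2

# The compromise: they trade.
adam_has, stacy_has = stacy_has, adam_has
total_after = adam_utility[adam_has] + stacy_utility[stacy_has]    # 2 + 2 = 4

print(total_before, total_after)  # each agent ends up strictly better off
```

Agreeing to the trade requires no moderation of either agent's values; it is simply the path that best achieves their (possibly extreme) goals.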

This seems like it could be a useful methodology to adopt, though I'm not sure it would be helpful for everyone. In particular, for people who are prone to negative rumination or self-blame, the answer to these kinds of questions will often be highly warped or irrational, reinforcing the negative thought patterns. Such a person could also come up with a way they could improve their life, fail to implement it, and then feel guilty when their reality fails to measure up to their imagined future. 

On the other hand, I'm no psychotherapist, so it may just ... (read more)

3Neel Nanda
As a single point of anecdata, I personally am fairly prone to negative thoughts and self-blame, and find this super helpful for overcoming that. My Inner Simulator seems to be much better grounded than my spirals of anxiety, and not prone to the same biases. Some examples:

I'm stressing out about a tiny mistake I made, and am afraid that a friend of mine will blame me for it. So I simulate having the friend find out and get angry with me about it, and ask myself 'am I surprised at this outcome?'. And discover that yes, I am very surprised by this outcome - that would be completely out of character and would feel unreasonable to me in the moment.

I have an upcoming conversation with someone new and interesting, and I'm feeling insecure about my ability to make good first impressions. I simulate the conversation happening, and leaving feeling like it went super well, and check how surprised I feel. And discover that I don't feel surprised - that in fact this happens reasonably often.

This seems like a potentially fair point. I sometimes encounter this problem, though I find that my Inner Sim is a fair bit better calibrated about what solutions might actually work. Eg it has a much better sense for 'I'll just procrastinate and forget about this'. On balance, I find that the benefits of 'sometimes having a great idea that works' + the motivation to implement it far outweigh this failure mode, but your mileage may vary.

I'm not sure it's actually useful, but I feel like I should introduce myself as an individual with Type 1 Narcolepsy. I might dispute the claim that depression and obesity are "symptoms" of narcolepsy (understanding, of course, that this was not the focus of your post) because I think it would be more accurate to call them comorbid conditions.

The use of the term "symptom" is not necessarily incorrect - it could be justified by some definitions - but the term tends to refer to sensations subjectively experienced by an individual. For example, if you get the flu, yo... (read more)

2jayterwahl
Fair critique! Changed. 

The point is that in this scenario, the tornado does not occur unless the butterfly flaps its wings. That does not necessarily apply to "everything"; it only applies to other things which must exist for the tornado to occur.

Probability is an abstraction in a deterministic universe (and, as I said above, the butterfly effect doesn't apply to a nondeterministic universe). The perfectly accurate deterministic simulator doesn't use probability, because in a deterministic universe there is only one possible outcome given a set of initial conditions. The ... (read more)
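As a toy illustration of the kind of deterministic sensitivity under discussion, here is the logistic map in its chaotic regime; the 1e-12 perturbation stands in for the wing flap. The map is a standard chaos example, not anything from the thread itself:

```python
# Toy demonstration of deterministic sensitivity to initial conditions:
# the logistic map at r = 4 (its chaotic regime). A 1e-12 difference in
# the starting point -- the "flap" -- grows to order 1 within ~40 steps,
# even though each trajectory is perfectly deterministic.

r = 4.0
x, y = 0.2, 0.2 + 1e-12  # two worlds, identical except for the flap

for step in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)

print(abs(x - y))  # order 0.1-1: the trajectories have fully decorrelated
```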

1ForensicOceanography
I see, but you are talking about an extremely idiosyncratic measure (only two points) on the space of initial conditions. One could just as easily find another pair of initial conditions in which the wing flap prevents the tornado. If there were a prediction market on tornadoes, its estimates should not change in either direction after observing the butterfly. Phrased this way, it is obviously true.

However, why are you saying that chaos requires determinism? I can think of some Markovian master equations with quite chaotic behavior.

Imagine a hundred trillion butterflies that each flap their wings in one synchronized movement, generating a massive gust of wind strong enough to topple buildings and flatten mountains. If they were positioned correctly, they'd probably also be able to create a tornado that would not have occurred if the butterflies were not there flapping their wings, just by pushing air currents into place. Would that tornado be "caused" by the butterflies? I think most people would answer yes. If the swarm had not performed their mighty flap, the tornado would not... (read more)

2ForensicOceanography
Hi, I think I see what you mean. You can certainly say that the flap, as a part of the initial conditions, is part of the causes of the tornado. But this is true in the same sense in which all of the initial conditions are part of the cause of the tornado. The flap caused the tornado together with everything else: all the initial ocean temperatures, the position of the jet streams, the northern annular mode index, everything. But if everything is the cause, then "being the cause of the tornado" is a property which carries exactly 0 bits of information. I prefer to think that an event A "caused" another event B if the probability of B, conditioned on A happening, is greater than the prior probability of B.
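A small numeric illustration of that definition, with counts invented purely for illustration:

```python
# Numeric illustration of the definition above: say A "caused" B when
# P(B | A) > P(B). All counts are invented for illustration.

days_total      = 1000
days_with_flap  = 500   # days the butterfly flapped (event A)
tornadoes_total = 40    # tornado days overall (event B)
tornadoes_flap  = 30    # tornado days among flap days

p_b         = tornadoes_total / days_total      # P(B)   = 0.04
p_b_given_a = tornadoes_flap / days_with_flap   # P(B|A) = 0.06

print(p_b_given_a > p_b)  # True: under this definition the flap counts as a cause
```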

From what I've read, the hormone oxytocin appears to be behind many of the emotions people generally describe as "spiritual". While the hormone is still being studied, there is evidence that indicates it can increase feelings of connection to entities larger than the self, increase feelings of love and trust with others, and promote feelings of belonging in groups.

The emotion of elevation, which appears to be linked to oxytocin, is most often caused by witnessing other people do altruistic or morally agreeable actions. This may explain the tendency for man... (read more)

Shamash-10

I would guess that one reason this containment method has not been seriously considered is that the amount of detail in a simulation required for the AI to be able to do anything that we find useful is so far beyond our current capabilities that it doesn't seem worth considering. The case you present of an exact copy of our Earth would require a ridiculous amount of processing power at the very least, and consider that the simulation of billions of human brains in this copy would already constitute a form of AGI. A simulation with less detail would be c... (read more)

2jacob_cannell
Actually, it is trivially easy to contain an AI in a sim, as long as it grows up in the sim. Its sensory systems will then only recognize the sim physics as real. You are incorrectly projecting your own sensory system onto the AI - comparing it to your personal experiences with games or sim worlds. In fact it doesn't matter how 'realistic' the sim is from our perspective. AI could be grown in cartoon worlds or even purely text-based worlds, and in either case would have no more reason to believe it is in a sim than you or I.
4HumaneAutomation
So... can it be said that the advent of an AGI will also provide a satisfactory answer to the question of whether we currently are in a simulation? That is what you (and avturchin) seem to imply. Also, this stance presupposes that:

  • an AGI can ascertain such observations to be highly probable/certain;
  • it is theoretically possible to find out the true nature of one's world (and that a super-intelligent AI would be able to do this);
  • it will inevitably embark on a quest to ascertain the nature and fundamental facts about its reality;
  • we can expect a "question absolutely everything" attitude from an AGI (something that is not necessarily desirable, especially in matters where facts may be hard to come by or are a matter of choice or preference).

Or am I actually missing something here? I am assuming that is very probable ;)

A possible future of AGI occurred to me today and I'm curious if it's plausible enough to be worth considering. Imagine that we have created a friendly AGI that is superintelligent and well-aligned to benefit humans. It has obtained enough power to prevent the creation of other AI, or at least to prevent potential rival AIs from obtaining resources, and does so with the aim of self-preservation so it can continue to benefit humanity.

So far, so good, right? Here comes the issue: this AGI includes within its core alignment functions some kind of restri... (read more)

2Alexei
Yeah, many people think along these lines too, which is why many talk about AI helping humanity flourish and consider anything short of that a bit of a catastrophe.

I think it would not be a very useful question to ask. What are the chances that a flawed, limited human brain could stumble upon the absolute optimal set of actions one should take, based on a given set of values? I can't conceive of a scenario where the oracle would say "Yes" to that question.