On the whole, I find your intuition of a good future similar to mine, but I do think that once it's examined more closely there are a few holes worth considering. I'll start by listing the details I strongly agree with, then the ones I'm unsure of, and then the ones I strongly disagree with.
Strongly Agree
This post was engaging enough to read in full, which I consider to be fairly high praise.
However, I think that it's lacking in some respects, namely:
I've read this post three times through and I still find it confusing. Perhaps it would be most helpful to state the parts I do understand and agree with, then proceed from there. I agree that the information available to hirers about candidates is relatively limited, and that the future in general is complicated and chaotic.
I suppose the root of my confusion is this: won't a long-term extrapolation of a candidate's performance just magnify any inaccuracies the hirer has inferred from what they already know about the candidate? Isn't the most a...
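To make that worry concrete, here's a toy sketch (every number in it is invented) of how a small misestimate compounds under long-term extrapolation:

```python
# Toy model: a hirer extrapolates a candidate's skill growth linearly.
# A small error in the estimated growth rate yields a prediction error
# that grows in proportion to the extrapolation horizon.

true_growth = 0.5       # candidate's actual skill gain per year (invented)
estimated_growth = 0.6  # hirer's slightly-off estimate (invented)

for years in (1, 5, 10):
    predicted = estimated_growth * years
    actual = true_growth * years
    print(f"{years:2d} years out: predicted {predicted:.1f}, "
          f"actual {actual:.1f}, error {predicted - actual:.1f}")
# 1 year out:  error 0.1
# 5 years out: error 0.5
# 10 years out: error 1.0, the same small misjudgment magnified tenfold
```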
While one's experience and upbringing strongly shape one's current mental state, they are not unique in that regard. A great number of factors contribute to what someone is at a particular time, including their genetics, their birth conditions, the health of their mother during pregnancy, and so on. It seems to me that the claim that "everyone is the same but experiencing life from a different angle" is not really saying much at all, because the scope of the differences two "angles" may have is not bounded. You come to the sam...
Consider the following thought experiment: You discover that you've just been placed into a simulation, and that every night at midnight you are copied and deleted instantaneously, and in the next instant your copy is created where the original once was. Existentially terrified, you go on an alcohol and sugary treat binge, not caring about the next day. After all, it's your copy who has to suffer the consequences, right? Eventually you fall asleep.
The next day you wake up hungover as all hell. After a few hours of recuperation, you consider what has ...
Shortly after the Dagger of Detect Evil became available to the public, Wiz's sales of the Dagger of Glowing Red skyrocketed.
There are a few ways to look at the question, but by my reasoning, none of them result in the answer "literally infinite."
From a deterministic point of view, the answer is zero degrees of freedom, because whatever choice the human "makes" is the only possible choice they could be making.
From the perspective of treating decision-making as a black box that issues commands to the body, the number of commands the body can physically comply with is limited. Humans have only a finite quantity of nerve cells with which to issue these commands. Therefore, the set of commands that can be sent through these nerves at any given time must also be finite.
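As a minimal sketch of the counting argument, suppose (purely for illustration) that each of n motor neurons can occupy one of k distinguishable states in a given time window; the neuron count below is made up:

```python
import math

# If each of n motor neurons can occupy one of k distinguishable states
# during a time window, the body can receive at most k**n distinct
# commands in that window: an astronomically large bound, but a finite one.

def command_space_digits(n_neurons: int, states_per_neuron: int = 2) -> int:
    """Decimal digits in the upper bound states_per_neuron ** n_neurons."""
    return math.floor(n_neurons * math.log10(states_per_neuron)) + 1

print(command_space_digits(500_000))  # ~150515 digits: vast, yet finite
```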
While I am not technically a "New User" in terms of account age, I comment very infrequently, and I've never made a forum-level post.
I would rate my own rationality skills and knowledge as slightly above those of the average person but below those of the average active LessWrong member. While I am aware that I possess many habits and biases that reduce the quality of my written content, I have the sincere goal of becoming a better rationalist.
There are times when I am unsure whether an argument or claim that seems incorrect is genuinely flawed or whether it is ...
I think there's a real danger of that, in practice.
But I've had lots of experience with "my style of moderation/my standards" being actively good for people taking their first steps toward this brand of rationalism; many people have explicitly reached out to me to say that e.g. my FB wall allowed them to take just those sorts of first, flawed steps.
A big part of this is "if the standards are more generally held, then there's more room for each individual bend-of-the-rules." I personally can spend more spoons responding positively and cooperatively t...
When I brought up Atlantis, I was thinking of a version populated by humans, like in the Disney film. I now realize that I should have made this clear, because there are a lot of depictions of Atlantis in fiction and many of them are not inhabited by humans. To resolve this issue, I'll use Shangri-La as an example of an ostensibly hidden group of humans with advanced technology instead.
To further establish distinct terms, let Known Humans be the category of humanity (Homo sapiens) that publicly exists and is known to us. Let Unknown Humans be the cat...
Let's say we ignore mundane explanations like meteorological phenomena, secret military tech developed by known governments, and weather balloons. Even in that case, why jump to extraterrestrial life?
Consider, say, the possibility that these UFOs are from the hyper-advanced hidden underwater civilization of Atlantis. Sure, this is outlandish. But I'd argue that it's at least as likely as an extraterrestrial origin. We know that humans exist; we know that Atlantis would be within flying distance; and there are reasonable explanations for why Atlantis would wan...
Could you elaborate on what exactly you mean by many-worlds QM? From what I understand, the idea is relevant only in the context of observing the state of quantum particles. Unless we start making macro-level decisions about how to act through Schrödinger's Cat scenarios, isn't many-worlds QM irrelevant?
Is AGI even something that should be invested in on the free market? The nature of most financial investments is that individuals expect a return. I may be wrong, but I can't really envision a friendly AGI being created for the purpose of generating financial value for its investors. Sure, if friendly AGI is created, the investors will almost certainly benefit regardless, because the world will become a better place, but that is an investment only in a rather loose sense. Investing in AGI won't provide any significant returns until the AGI exists, and at that point it is likely that stock ownership will no longer matter.
I'm a gay cis male, so I thought that the author and/or other members of this forum might find my perspective on the topic interesting.
The confusion between finding someone sexually attractive and wishing you had their body is common enough in the online gay community to have earned its own nickname: jealusty. In a sense, it's the gay analogue of autogynephilia. As I read the blog post, I briefly wondered whether fantasies of a better body could contribute to homosexuality somehow, but that doesn't really fit the pattern you pres...
I try to do a lot of research on autogynephilia and related topics, and I think there are some things worth noting:
It seems to me that compromise isn't actually what you're talking about here. An individual can hold strongly black-and-white, extreme positions on an issue and still be good at making compromises. When a rational agent agrees to a compromise, it just implies that the agent sees compromise as the path most likely to achieve its goals.
For example, let's say that Adam slightly values apples (U = 1) and strongly values bananas (U = 2), while Stacy slightly values bananas (U = 1) and strongly values apples (U = 2). Assume these are their only val...
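A minimal sketch of that example in code, assuming (as an invented starting point) that each person holds one unit of the fruit the other values most and considers a one-for-one swap:

```python
# Utility values are taken from the example above; the starting baskets
# are assumed for illustration.
utilities = {
    "Adam":  {"apple": 1, "banana": 2},
    "Stacy": {"apple": 2, "banana": 1},
}

def total_utility(person: str, basket: dict) -> int:
    return sum(utilities[person][fruit] * n for fruit, n in basket.items())

before = {"Adam": {"apple": 1, "banana": 0}, "Stacy": {"apple": 0, "banana": 1}}
after  = {"Adam": {"apple": 0, "banana": 1}, "Stacy": {"apple": 1, "banana": 0}}

for person in ("Adam", "Stacy"):
    print(person, total_utility(person, before[person]),
          "->", total_utility(person, after[person]))
# Adam 1 -> 2
# Stacy 1 -> 2
# Both gain from the swap, so agreeing to it requires no softening of
# either agent's underlying preferences.
```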
This seems like it could be a useful methodology to adopt, though I'm not sure it would be helpful for everyone. In particular, for people who are prone to negative rumination or self-blame, the answers to these kinds of questions will often be highly warped or irrational, reinforcing the negative thought patterns. Such a person could also come up with a way to improve their life, fail to implement it, and then feel guilty when their reality fails to measure up to their imagined future.
On the other hand, I'm no psychotherapist, so it may just ...
I'm not sure it's actually useful, but I feel I should introduce myself as an individual with Type 1 Narcolepsy. I might dispute the claim that depression and obesity are "symptoms" of narcolepsy (understanding, of course, that this was not the focus of your post), because I think it would be more accurate to call them comorbid conditions.
The use of the term "symptom" is not necessarily incorrect, since it could be justified under some definitions, but the term tends to refer to sensations subjectively experienced by an individual. For example, if you get the flu, yo...
The point is that in this scenario, the tornado does not occur unless the butterfly flaps its wings. That does not necessarily apply to "everything"; it applies only to the other things that must exist for the tornado to occur.
Probability is an abstraction in a deterministic universe (and, as I said above, the butterfly effect doesn't apply to a nondeterministic universe). The perfectly accurate deterministic simulator doesn't use probability, because in a deterministic universe there is only one possible outcome given a set of initial conditions. The ...
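A minimal illustration, using the logistic map as an arbitrary stand-in for a deterministic universe:

```python
# A toy deterministic "universe": the state evolves by a fixed rule
# (the logistic map, chosen arbitrarily). Identical initial conditions
# always produce identical outcomes; probability never enters into it.

def step(x: float) -> float:
    return 3.9 * x * (1.0 - x)  # fixed, deterministic update rule

def simulate(x0: float, n_steps: int) -> float:
    x = x0
    for _ in range(n_steps):
        x = step(x)
    return x

print(simulate(0.123456789, 1000) == simulate(0.123456789, 1000))  # True, always
# A tiny perturbation of the initial condition diverges wildly (the
# butterfly effect), yet each trajectory is still fully determined:
print(simulate(0.123456789, 1000), simulate(0.123456790, 1000))
```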
Imagine a hundred trillion butterflies that each flap their wings in one synchronized movement, generating a massive gust of wind strong enough to topple buildings and flatten mountains. If they were positioned correctly, they'd probably also be able to create a tornado that would not have occurred if the butterflies were not there flapping their wings, just by pushing air currents into place. Would that tornado be "caused" by the butterflies? I think most people would answer yes. If the swarm had not performed its mighty flap, the tornado would not...
From what I've read, the hormone oxytocin appears to be behind many of the emotions people generally describe as "spiritual". While the hormone is still being studied, there is evidence that it can increase feelings of connection to entities larger than the self, increase feelings of love and trust toward others, and promote feelings of belonging in groups.
The emotion of elevation, which appears to be linked to oxytocin, is most often triggered by witnessing other people perform altruistic or morally agreeable actions. This may explain the tendency for man...
I would guess that one reason this containment method has not been seriously considered is that the amount of detail required in a simulation for the AI to do anything we find useful is so far beyond our current capabilities that it doesn't seem worth considering. The case you present, an exact copy of our earth, would require a ridiculous amount of processing power at the very least; consider also that simulating billions of human brains in this copy would already constitute a form of AGI. A simulation with less detail would be c...
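As a rough back-of-envelope, with both figures below being loose assumptions rather than established facts (brain-emulation cost estimates span many orders of magnitude):

```python
FLOPS_PER_BRAIN = 1e16  # one commonly cited guess for emulating a human brain
N_BRAINS = 8e9          # roughly the current human population
EXASCALE = 1e18         # order of magnitude of today's largest supercomputers

brains_only = FLOPS_PER_BRAIN * N_BRAINS
print(f"Brains alone: ~{brains_only:.0e} FLOPS")                       # ~8e+25
print(f"Equivalent exascale machines: ~{brains_only / EXASCALE:.0e}")  # ~1e+08
# And this ignores the rest of the planet (weather, oceans, every other
# organism); the full copy would cost vastly more still.
```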
A possible future of AGI occurred to me today, and I'm curious whether it's plausible enough to be worth considering. Imagine that we have created a friendly AGI that is superintelligent and well-aligned to benefit humans. It has obtained enough power to prevent the creation of other AIs, or at least to prevent potential rival AIs from obtaining resources, and does so with the aim of self-preservation so it can continue to benefit humanity.
So far, so good, right? Here comes the issue: this AGI includes within its core alignment functions some kind of restri...
I think it would not be a very useful question to ask. What are the chances that a flawed, limited human brain could stumble upon the absolutely optimal set of actions to take, given a particular set of values? I can't conceive of a scenario where the oracle would say "Yes" to that question.
I think the simplest way to answer this is to introduce a new scenario; let's call it Scenario 0. Scenario 0 is similar to Scenario 1, but in this case your body is not disintegrated. The result seems pretty clear: you are unaffected and continue living life on earth. Other yous may be living their own lives in space, but it isn't as if some metaphysical consciousness link connects you to them.
And so, in scenarios 1 and 2, where the earth-you is disintegrated, well, you're dead. But not to worry! The normal downsides of death (pain, in...