In response to Serious Stories
Comment author: Elia_G 13 September 2015 11:57:52PM 0 points

I'm curious as to what, more specifically, The Path of Courage looks like.

If broken legs have not been eliminated... Would a person still learn, over time, how to completely avoid breaking a leg - and the difference lies in having to learn it, rather than starting out with unbreakable legs? Or do we remain forever in danger of breaking our legs (which is okay because we'll heal and because the rest of life is less brutal in general)?

If the latter... What happens to "optimizing literally everything"? Will we experience pain and then make a conscious decision not to prevent it for next time, knowing that our life is actually richer for it? Or will we have mental states such that we bemoan and complain that pain happened, and hope it doesn't again, but just-not-think-about actually trying to prevent it ourselves? Or do we, in fact, keep optimizing as hard as we can... and simply trust that we'll never actually succeed so greatly that we de-story our life and regret it?

In response to Serious Stories
Comment author: Emanresu 23 April 2014 06:08:19AM 0 points

I just thought of another, larger and more unsettling problem. It's kind of hard for me to explain, but I'll try.

If the following statements are true:

  1. The only reason we need pain is to notify us of damage to ourselves or to things that matter to us.
  2. The only reason we need fear is to motivate us to avoid things that could cause damage to ourselves or things that matter to us.
  3. The only reason we need happiness or pleasure is so that we are motivated to seek out things that would help us or things that matter to us.
  4. The only reason we need beliefs is to predict reality.

Then I am extremely concerned about whether the answers to the following questions might doom the continued, dynamic existence of sentient life merely by its very nature:

  1. What would life be like for sentient beings such as ourselves if we either eliminated damage to ourselves and the things that matter to us, or minimized that damage to the point where it was insignificant to our overall well-being, and therefore could be mostly ignored if we so chose, dealt with only enough to prevent it from becoming significant? In other words, what if we eliminated the need for pain? This was the question discussed in the article above.

  2. What would life be like for sentient beings such as ourselves if we neutralized all threats to our survival and health, and eliminated all of the reasons we would have to misjudge something as a threat to our survival and health? Or at least minimized these threats and misjudgements so that they were insignificant to our overall well-being and could be mostly ignored if we so chose, dealt with only enough to prevent them from becoming significant? In other words, what if we eliminated the need for fear?

  3. What would life be like for sentient beings such as ourselves if the health and safety of every individual member of sentient species like ours, and the sustainability of that health and safety, were maximized, to the point that we never needed to seek out things that help us or help the things that matter to us, or at least that the need for such help was minimized to the point of insignificance to our overall well-being, and therefore could be mostly ignored if we so chose, dealt with only enough to prevent it from becoming significant? In other words, what if we eliminated the need for happiness?

Note: I did notice that our very definition of "human health" and "overall well-being" includes happiness, or perhaps average happiness. If you can't feel happiness, then we say you're not mentally healthy. I think this neglects the problem that we need happiness for a reason; it exists in the context of an environment where we need to seek out stimuli that help us, or at least stimuli that would probably have helped us in the ancestral environment. If we improve the capabilities of our own brains and bodies enough, eventually we will no longer need to rely on each other or on tools outside our own bodies and brains to compensate for our individual weaknesses. Which brings me to the fourth question.

  4. What if our mental models of reality became so accurate that they were identical, or nearly identical, to the point where the only difference between reality and our models of it was ever so slightly more than the time it took for us to receive sensory information? Could a human mind become a highly realistic simulation of the universe merely by learning how to increase its own mental capacity enough and systematically eliminating all false models of the universe? And in that case, how can we know if our own universe is not such a simulation? If it is, if our universe is a map of another universe, is it a perfect map? Or is there a small amount of error, even inconsistency, in our own universe that would not exist in the original?

I recently learned in a neuroscience class that thinking is by definition a problem-solving tool: a means to identify a path of causality from a current, less desirable state to a more desirable goal state. At least that's what I think it said. If we reached all possible goals, and ran out of possible goals to strive for, what do we do then? Generate a new virtual reality in which there are more possible goals to reach? Or stop thinking altogether? Something about both of those options doesn't sound right for some reason.
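
To make that "path of causality" framing concrete, here is a minimal sketch (my own illustration, not anything from the neuroscience class or the original post) of problem solving as a search for a sequence of steps leading from a current state to a goal state. The state names and the causal graph are purely hypothetical.

```python
# A toy illustration (hypothetical states and actions, not from the original comment):
# "thinking" modeled as breadth-first search for a path of causal steps
# from a current, less desirable state to a more desirable goal state.
from collections import deque

def find_path(start, is_goal, successors):
    """Return a list of states leading from start to a goal state, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if is_goal(state):
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # no sequence of steps reaches the goal

# Hypothetical causal graph: which states each state can lead to.
leads_to = {
    "hungry": ["has ingredients", "ordered food"],
    "has ingredients": ["cooked meal"],
    "ordered food": ["fed"],
    "cooked meal": ["fed"],
}

print(find_path("hungry", lambda s: s == "fed", lambda s: leads_to.get(s, [])))
# -> ['hungry', 'ordered food', 'fed']
```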

I know it says on this very site that perfectionism is one of the twelve virtues of rationality, but then it says that the goal of perfection is impossible to reach. That doesn't make sense to me. If the goal you are trying to reach is unattainable, then why attempt to attain it? Because the amount of effort you expend towards the unattainable goal of perfection allows you to reach better goal states than you otherwise would if you did not expend that much effort? But what if we found a way to make the amount of effort spent equal to, or at least roughly proportional to, the actual desirability of the goal state that effort allows you to reach?

These questions are really bothering me.

In response to comment by Emanresu on Serious Stories
Comment author: Elia_G 08 September 2015 05:33:04PM 0 points

> The only reason we need happiness or pleasure is so that we are motivated to seek out things that would help us or things that matter to us.

That may be the only reason we evolved happiness or pleasure, but we don't have to care about what evolution optimized for when designing a utopia. We're allowed to value happiness for its own sake. See Adaptation-Executers, not Fitness-Maximizers.

> If we reached all possible goals, and ran out of possible goals to strive for, what do we do then?

Worthwhile goals are finite, so it's true we might run out of goals someday, and from then on be bored. But it doesn't frighten me too much because:

  1. We're not going to run out of goals as soon as we create an AI that can achieve them for us; we can always tell it to let us solve some things on our own, if it's more fun that way.

  2. The space of worthwhile goals is still ridiculously big. To live a life where I accomplish literally everything I want to accomplish is good enough for me, even if that life can't be literally infinite.* Plus, I'm somewhat open to the idea of deleting memories/experience in order to experience the same thing again.

  3. There's other fun things to do that don't involve achieving goals, and that aren't used up when you do them.

*Actually, I am a little worried about a situation where the stronger and more competent I get, the quicker I run out of life to live... but I'm sure we'll work that out somehow.

> I know it says on this very site that perfectionism is one of the twelve virtues of rationality, but then it says that the goal of perfection is impossible to reach. That doesn't make sense to me. If the goal you are trying to reach is unattainable, then why attempt to attain it?

I guess technically the real goal is to be "close to perfection", as close as possible. We pretend that the goal is "perfection" for ease of communication, and because (as imperfect humans) we can sometimes trick ourselves into achieving more by setting our goals higher than what's really possible.

In response to Complex Novelty
Comment author: Elia_G 14 August 2015 01:38:42AM 0 points

EY, I'm not sure I'm with you about needing to get smarter to integrate all new experiences. If we want to stay and slay every monster, couldn't we instead allow ourselves to forget some experiences, and to not learn at maximum capacity?

It does seem wrong to willfully not learn, but maybe as a compromise, I could learn all that my ordinary brain allows, let that act as a cap, and not augment my intelligence until challenges at that level fully bored me. I could maybe even learn new things while forgetting others to make space.

Or am I merely misunderstanding something about how brains work?

My motivation for taking this tack is that I find the fun of making art and of telling stories more compelling than the fun of learning. So I'm not inclined to learn as fast as possible if it means skipping over other fun, and I'm also disinclined to become so competent that I'm alienated from the hardships and imperfections that give my life a story and allow me to enjoy stories.