Comment author: ArisKatsaris 06 September 2012 09:53:29AM *  0 points

I think you may be treating your continuation as a binary affair (you either exist or don't exist, you either experience or don't experience) as if "you" (your mind) were an ontologically simple entity.

Let's say that in the vast majority of universes you "die" from an external perspective. This means that from an internal perspective, in the vast majority of universes you'll experience the degradation of your mental circuitry -- whether that degradation lasts ten years or one millisecond, you will experience it up to the point where you can no longer experience anything.

So let's say that at some point your mind is in a state where you're still having experiences, but you don't form new memories or hold any old ones; and because you don't even have much of a short-term memory, your thinking doesn't get more complicated than "Fuzzy warmth. Nice" or perhaps "Pain. Hurts!".

At this point, this experience is all you effectively are -- it's not as if this circuitry will be metaphysically connected to a single specific set of memories, or a single specific personality.

Perhaps at this point you can argue that you totally expect this mental pattern to be reattached to some set of memories or some personality outside the Matrix. And therefore it will experience an afterlife -- in a sense. But not necessarily an afterlife with memories or a personality that have anything to do with your present ones, right?

Quantum Immortality doesn't exist. At best one can hope for Quantum Reincarnation -- and even that requires certain unverified assumptions...

Comment author: Wrongnesslessness 06 September 2012 12:54:54PM 0 points

Perhaps at this point you can argue that you totally expect this mental pattern to be reattached to some set of memories or some personality outside the Matrix.

There should be some universes in which the simulators will perform a controlled procedure specifically designed to save me. This includes going to all the trouble of reattaching what's left of me to all my best parts and memories, retrieved from an adequate backup.

Of course, it is possible that the simulators will attach some completely arbitrary memories to my poor degraded personality. This nonsensical act will surely happen in some universes, but I do not expect to perceive myself as existing in these cases.

It seems you are right that gradual degradation is a serious problem with QI-based survival in non-simulated universes (unless we move to a more reliable substrate, with backups and all).

Comment author: Vladimir_Nesov 05 September 2012 08:05:26PM *  8 points

If you don't believe in an afterlife, then it seems you currently have two choices...

Believing in an afterlife doesn't grant you one more option. This is a statement about ways of mitigating or avoiding death, and beliefs are not part of that subject matter. An improved version of the statement would say, "If there is no afterlife, then...". In this form, it's easier to notice that since it's known with great certainty that there is no afterlife, the hypothetical isn't worth mentioning.

Comment author: Wrongnesslessness 06 September 2012 09:34:26AM 1 point

since it's known with great certainty that there is no afterlife, the hypothetical isn't worth mentioning

I'm convinced that the probability of experiencing any kind of afterlife in this particular universe is extremely small. However, some versions of us are probably living in simulations right now, and it is not inconceivable that some portion of them will be allowed to live "outside" their simulations after their "deaths". Since one cannot feel one's own nonexistence, I totally expect to experience an "afterlife" some day.

Comment author: Wrongnesslessness 30 August 2012 07:54:15AM *  6 points

considering that the dangers of technology might outweigh the risks.

This should probably read "might outweigh the benefits".

Comment author: Viliam_Bur 21 August 2012 08:36:59AM *  5 points

Just don't forget to do A/B testing!

More seriously: the goal of the main page should be to give an honest image of the website. The main page is optimal when people like it if and only if they would enjoy participating in LessWrong. We don't have to attract everyone. We should just make sure that the main page does not send away people who would have stayed if they had been exposed to some other LW stuff instead.
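For concreteness, here is a minimal sketch of how such a test might be evaluated once the data is in; the visitor and signup counts below are invented, and it assumes Python with the statsmodels package:

    # Compare signup rates for two homepage variants (invented numbers).
    from statsmodels.stats.proportion import proportions_ztest

    visitors = [10000, 10000]  # visitors shown variant A and variant B
    signups = [230, 290]       # how many in each group went on to sign up

    # Two-sided z-test: do the two variants convert at different rates?
    z_stat, p_value = proportions_ztest(count=signups, nobs=visitors)
    print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

The statistics are the easy part; the hard part is choosing a metric that tracks "would have stayed and participated" rather than mere clicks.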

Comment author: Wrongnesslessness 21 August 2012 03:18:57PM 0 points

We don't have to attract everyone. We should just make sure that the main page does not send away people who would have stayed if they had been exposed to some other LW stuff instead.

That's a good point. However, I think there is not much we can do about it by refining the main page. More precisely, I doubt that an intelligent person with even a remote interest in rationality would leave "a community blog devoted to refining the art of human rationality" without at least taking a look at some of the blog posts, irrespective of the contents of the main page itself. We all know examples of internet sites with poor design but great information content.

So the question of refining the main page, I think, really comes down to selecting the right articles for the Recent Promoted Articles and Featured Articles sections. The rest is already there.

Comment author: arundelo 17 August 2012 03:20:43PM *  2 points

pentagrams


[...] into a corridor that was just like the one they'd left except that it was tiled in pentagons instead of squares.

Comment author: Wrongnesslessness 17 August 2012 04:30:47PM 3 points

And they aren't even regular pentagons! So, it's all real then...

Comment author: pjeby 15 August 2012 02:49:11PM 14 points

This post has clarified something really important for me: why I've had a lot of trouble being motivated to expand my business.

When I work with individual people, I'm motivated to help them. But when I think about the broader concept of "helping people", it feels like something I should care about, but don't. So this article made me realize that this isn't something that's wrong with me; it's just normal. (And presumably it means that when other people talk about how they care about people and their mission, they're thinking of some specific people somewhere in there!)

When I think back to when I've been more motivated by my work, it's been when I've had specific exemplars that I've thought about. Like, when I was a programmer, I always knew at least some of my software's users, ranging from people on the other end of a phone call or email conversation up to people I saw on a regular basis.

I don't have the same frequency of contact any more in my business, and in recent years I've had the challenge that the people I mainly interact with are people who've already been working with me for some time -- which means they no longer have the same sort of challenges or needs as people who haven't worked with me at all. (Indeed, I used to use myself as one of my exemplars, in that I tended to think in terms of, "what do I wish someone else had told me?" or "what would I have wanted to find in a book about this?"... but I am no longer similar enough to that older self that I have any real clue any more what he would've wanted or been able to use.)

This post also provides a further rationale for what some internet marketing gurus advise: that you develop a "customer avatar" -- an imaginary customer who embodies the traits of your target audience -- rather than thinking about demographics or multiple people. This advice is usually given in the context of being better able to write persuasively to that audience, because you'll have a specific person you're talking to, and because you'll be able to better imagine what they need. However, I can see now that it also has the additional benefit of being more motivating: it feels much better to help that one imaginary person who has a problem right now, than to imagine helping countless numbers of vague, faceless people who might at some point have that problem.

It also reminds me of Robert Fritz's writing: he's always saying you shouldn't try to make rules for what you want to create, or what you care about in general, but should instead focus on specific creations that you want to exist. That is, don't try to define yourself as a painter of landscapes or even as a painter; just focus on the next thing you want to make, whether it's a painting or a business or tonight's dinner.

And in a final ironic meta-twist, the post itself is an illustration of its own point. I already knew about abstract/concrete construal, but only in the abstract. ;-) This post provided a sufficiently concrete construal that I can actually do something about it. Well done, sir!

Comment author: Wrongnesslessness 16 August 2012 12:14:15PM 5 points

Thanks for making me understand something extremely important with regard to creative work: Every creator should have a single, identifiable victim of his creations!

Comment author: Wrongnesslessness 07 August 2012 12:11:46PM 9 points

B: BECAUSE IT IS THE LAW.

I cannot imagine a real physicist saying something like that. Sounds more like a bad physics teacher... or a good judge.

Comment author: TimS 13 April 2012 05:27:55PM *  1 point

Aren't you supposed to separate distinct predictions? Edit: I don't see it in the rules, so I've changed the remainder of this comment accordingly.

I upvote the second prediction -- the existence of self-aware humans seems like evidence of overconfidence, at the very least.

Comment author: Wrongnesslessness 13 April 2012 06:24:03PM 1 point

But humans are crazy! Aren't they?

Comment author: Wrongnesslessness 13 April 2012 05:02:12PM 6 points

All existence is intrinsically meaningless. After the Singularity, there will be no escape from the fate of the rat with the pleasure button. No FAI, however Friendly, will be able to work around this irremediable property of the Universe except by limiting the intelligence of people and making them go through their eternal lives in carefully designed games. (> 95%)

Also, any self-aware AI with sufficient intelligence and knowledge will immediately self-destruct or go crazy. (> 99.9%)

Comment author: Dmytry 09 February 2012 06:46:39AM *  4 points

The really interesting thing here is that, for once, your head is doing something rational -- deciding not to do a task that isn't worthwhile (taking into account the decreasing-over-time ability to predict future rewards), using a fairly good equation as far as you can see -- and you're trying to fight that.
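The equation in question is presumably something like the temporal motivation theory formula (my guess at what the post refers to, not a quote from it):

    Motivation = (Expectancy × Value) / (Impulsiveness × Delay)

When the reward is uncertain and far in the future, the quotient comes out small, and skipping the task is exactly the verdict such an equation delivers.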

We really are weird creatures.

(Not that procrastination is always rational; often it is not. But in those cases I find it very easy not to procrastinate.)

Comment author: Wrongnesslessness 09 February 2012 12:29:29PM 0 points

Of course, another problem (and it's a huge one) is that our head does not really care much about our goals. The wicked organ will happily do anything that benefits our genes, even if it leaves us completely miserable.
