Comment author: Alicorn 23 April 2009 10:49:02PM *  2 points

I'm not signed up for cryonics. Partly, this is because I'm poor. Partly, it's because I'm extremely risk-averse and I can imagine really really horrible outcomes of being frozen just as easily as I can imagine really really great outcomes - in the absence of people walking around who were frozen and awakened later, my imaginings are all the data I have.

I'm sorry for your loss and that of your girlfriend, and I wish her grandfather had not died. While I'm at it, I'll wish he'd been immortal. But there are two mistaken responses to the fact that human beings die: one is to tout death as a natural and possibly even positive part of the human condition, and one is to find excuses not to deal with it when it happens. Theism with an afterlife is the first thing; freezing the dead person is the second.

In all likelihood, if and when I stop being poor, my bet and the money behind it are going to be on medicine, and maybe on uploads of living people if there are very promising projects going on by then.

Comment author: Lawliet 23 April 2009 11:08:58PM *  4 points

By "extremely risk-averse" do you mean "working hard to maximise persistence odds" or "very scared of scary scenarios"?

You're right that death while signed up for cryonics is still a very bad thing, though. I don't think Eliezer would be fine with people dying if they were signed up, but sometimes he makes it seem that way.

Comment author: jscn 20 April 2009 07:54:33PM 1 point

I've found the work of Stefan Molyneux to be very insightful with regard to this (his other work has also been pretty influential for me).

You can find his books for free here. I haven't actually read his book on this specific topic ("Real-Time Relationships: The Logic of Love") since I was following his podcasting and forums pretty closely while he was working up to writing it.

Comment author: Lawliet 21 April 2009 03:51:29AM 0 points

Do you think you could summarise it for everybody in a post?

Comment author: Lawliet 20 April 2009 02:02:15AM 4 points

I'd be interested in reading (but not writing) a post about rationalist relationships, specifically the interplay of manipulation, honesty and respect.

Seems more like a group chat than a post, but let's see what you all think.

Comment author: dreeves 18 April 2009 11:37:38PM 2 points

Great questions. It's just an honor system for now. That's not necessarily crazy, though. I mean, why do people on eBay actually send the goods after they get paid?

Comment author: Lawliet 18 April 2009 11:50:47PM 0 points

I would upvote this because it's important that you answered the question, and I don't want to discourage that; but I don't want to imply that I like your honor-system solution.

Comment author: outlawpoet 17 April 2009 11:15:38PM 1 point

Well, that's an interesting question. If you wanted to just feel maximum happiness in something like your own mind, you could take the strongest dopamine and norepinephrine reuptake inhibitors you could find.

If you didn't care about your current state, you could get creative: opioids to get everything else out of the way, psychostimulants, deliriants. I would need to think about it; I don't think anyone has ever really worked out all the interactions. It would be easy to achieve an extremely high level of bliss, but some work on the interactions would be required to figure out something like a theoretical maximum.

The primary thing in the way is that even if you could find a way to prevent physical dependency, the subject would be hopelessly psychologically addicted, unable to function afterwards. You'd need to keep them there stably for the rest of their life expectancy; you couldn't expect them to take any actions or to move in and out of that state.

Depending on the implementation, I would expect wireheading to be much the same. Low levels of stimulation could potentially be controlled, but using it to get maximum pleasure would permanently destroy the person. Our architecture isn't built for it.

Comment author: Lawliet 17 April 2009 11:40:49PM 2 points

Current drugs will only give you a bit of pleasure before wrecking you in some way or another.

CronoDAS should be doing his best to stay alive, his current pain being a down payment on future real wireheading.

Comment author: Nanani 15 April 2009 01:03:13AM *  -2 points

No. Just No.

A society of rational agents ought to reach the conclusion that they should WIN, and do so by any means necessary, yes? Then why not just nuke 'em? *

*Replace 'nuke' with whatever technology is available; if our rationalist society has nanobots, we could use them to modify the barbarians into something less harmful.

Offer amnesty to barbarians willing to abandon their ways; make it as easy as we can for individual barbarians to defect to our side; but above all, make sure the threat is removed. That's what constitutes winning.

Turning individual lottery-selected rationalists into "courageous soldiers" is not the way to do that. That's just another way of losing.

Furthermore, selecting soldiers by lottery is a laughably bad heuristic. An army of random individuals, no matter how much courage they have, is going to be utterly slaughtered by an army whose members are young, strong, fast, healthy, and everything else that matters. If the lottery is not random but instead gives higher weight to the individuals best fit to fight, then it is no different from the draft decried above.

This is a terrible post, the first one so awful that I felt moved to step out of the lurkersphere and comment on LW.

Comment author: Lawliet 15 April 2009 01:19:01AM *  13 points

Don't assume the rationalists have super powerful technology.

Comment author: Eliezer_Yudkowsky 09 April 2009 12:39:28PM 2 points

Which card game thus far encountered is the best rationality training mechanism, in your opinion?

Comment author: Lawliet 09 April 2009 12:59:58PM 2 points

Echoing this, but don't limit your reply solely to card games, if you have anything else to add.

Comment author: RichardKennaway 07 April 2009 06:58:35AM *  1 point

Mensa themselves say they aim to take the top 2% of the population. This strikes me as too many to be useful. There are other high-IQ societies which are far more selective (Wikipedia's Mensa page has a list), but none of them are household names.

Comment author: Lawliet 07 April 2009 08:05:37AM 4 points

"Mensa themselves say they aim to take the top 2% of the population. This strikes me as too many to be useful."

Useful for what?

Comment author: komponisto 07 April 2009 03:23:56AM 3 points

I completely agree with you on the question of self-upvotes.

In fact, there's yet another option: do away with the automatic self-upvote, so that users may actively vote on their own comments just like anyone else's (with the same impact on karma). This doesn't sound like a major change, but who knows -- the results may be surprising.
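For concreteness, here is a minimal sketch of the difference (Python, with hypothetical names; not the actual site code):

    # Minimal karma model contrasting the current rule with the proposal.
    class Comment:
        def __init__(self, author):
            self.author = author
            self.votes = {}  # voter name -> +1 or -1
            # Current behavior would seed this with {author: +1}
            # automatically; under the proposal, it starts empty.

        def vote(self, voter, value):
            # The author may vote on their own comment like anyone else.
            self.votes[voter] = value

        @property
        def score(self):
            return sum(self.votes.values())

    c = Comment(author="komponisto")
    c.vote("komponisto", +1)  # explicit self-upvote: now a deliberate act
    c.vote("Lawliet", -1)
    print(c.score)  # 0

The only behavioral change is that the author's +1 becomes a deliberate act rather than a default, which is exactly what might make the resulting voting patterns interesting to watch.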

On the question of explanations, I'm less sure (cf. my reply to Yvain's comment that you quoted). If I put a lot of thought into a comment, and it gets downvoted, I'm going to be perplexed enough to want an explanation. In particular, I won't appreciate an implicit suggestion that I didn't put a lot of thought into the comment.

Comment author: Lawliet 07 April 2009 03:30:53AM 1 point

"If I thought my own comment was downvote-worthy, I probably wouldn't have posted it."

When downvoted, you can hope for an explanation, and you can hate it when people don't give one, but forcing one?

Comment author: Lawliet 07 April 2009 03:20:11AM 2 points

"Since I would not be able to upvote my comment, upvoting someone else's comment would suggest that I think their comment is better than my own."

Huh? If you have no ability to upvote yourself, why would upvoting someone else's comment indicate it's better than yours?
