Comment author: undermind 14 April 2011 11:20:15PM 0 points [-]

The image is using sex to encourage people to learn and act rationally (I have no idea how that might work).

There's a grand tradition of women withholding sex for political reasons (usually to end a war), starting with Lysistrata. People resurrect this idea from time to time, and often achieve quite remarkable results.

Comment author: Fleisch 12 December 2011 09:58:15AM 6 points [-]

As an aside: the interesting thing to remember about Lysistrata is that it was originally intended as humor, because the idea that women could withhold sex, and withhold it better than men, was hilarious at the time. Not because they weren't allowed to, but because women were considered the horny sex back then.

Comment author: AnnaSalamon 17 November 2011 11:56:10PM *  29 points [-]

I've seen this tried, for this stated purpose. My impression of the results was that it did not at all lead to careful, on-the-margins consequentialist thinking and doing. Instead, it led to a stressed out, strung out person trying desperately to avoid more pain/shame, while also feeling resentful at the world and themselves, expecting a lack of success from these attempts, and so acting more from local self-image gradients, or drama-seeking gradients, than from any motives attached to actual hope of accomplishing something non-immediate.

"Signaling motives" can be placed on a scale, from "local, short-sighted, wire-heading-like attempts to preserve self-image, or to avoid immediate aversiveness or seek immediate reward" to "long-term strategic optimization to achieve recognition and power". It would be better to have Napoleon as an ally than to have a narcotics addict with a 10-minute time horizon as an ally, and it seems analogously better to help your own status-seeking parts mature into entities that are more like Napoleon and less like the drug addict, i.e. into entities that have strategy, hope, long-term plans, and an accurate model of the fact that e.g. rationalizations don't change the outside world.

Comment author: Fleisch 18 November 2011 09:03:56PM 17 points [-]

tl;dr: Signalling is extremely important to you. Doing away with your ability to signal will leave you helplessly desperate to get it back.

I think that this is a point made not nearly often enough in rationalist circles: Signalling is important to humans, and you are not exempt just because you know that.

In response to Existential Risk
Comment author: Fleisch 15 November 2011 04:11:23PM *  15 points [-]

There aren't enough nuclear weapons to destroy the world, not by a long shot. There aren't even enough nuclear weapons to constitute an existential risk in and of themselves, though they might still contribute strongly to the end of humanity.

EDIT: I reconsidered, and yes, there is a chance that a nuclear war and its aftereffects permanently cripple the potential of humanity (maybe by extinction), which makes it an existential risk. The point I want to make, which was more clearly made by Pfft in a child post, is that this is still something very different from what Luke's choice of words suggests.

How many people would die is of course somewhat speculative, but I think if the war itself killed 10%, that would be a lot. More links on the subject: The Effects of a Global Thermonuclear War; Nuclear Warfare 101, 102, and 103.

Comment author: jimrandomh 10 June 2011 06:36:42PM 6 points [-]

One thing I've thought would be good to have is a program that takes math formulas and damages them, to produce plausible, similar-looking formulas but with terms missing or altered. This would be used to make a set of flash cards where you have to distinguish between real and damaged formulas.
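Such a formula-damaging program could be quite simple at the token level. A minimal sketch, assuming a whitespace-tokenized formula string (the function name, operator set, and mutation rules here are illustrative assumptions, not an existing tool):

```python
import random

# Operators we know how to swap for one another.
OPS = ["+", "-", "*", "/", "^"]

def damage_formula(formula, rng=random):
    """Return a plausible but corrupted variant of a whitespace-tokenized formula."""
    tokens = formula.split()
    idx = rng.randrange(len(tokens))
    tok = tokens[idx]
    if tok in OPS:
        # Swap the operator for a different one.
        tokens[idx] = rng.choice([op for op in OPS if op != tok])
    elif tok.isdigit():
        # Nudge a numeric constant up or down.
        tokens[idx] = str(int(tok) + rng.choice([-1, 1, 2]))
    else:
        # Drop a variable or term entirely ("terms missing").
        del tokens[idx]
    return " ".join(tokens)

print(damage_formula("E = m * c ^ 2"))
```

A real version would want to parse the formula properly (e.g. with a computer-algebra library) so the damaged output stays syntactically valid, which is what makes the flash cards hard to distinguish.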

Comment author: Fleisch 12 June 2011 02:33:10AM 3 points [-]

I think you shouldn't keep the false formulas around, so as not to accidentally learn them. In general, this sounds like you could hit on memetically strong corruptions that contaminate your knowledge.

Comment author: Morendil 28 May 2011 07:19:53AM *  14 points [-]

Skill: negotiation - deliberately reaching a mutually beneficial trade or agreement, even in situations of slight power imbalance. Important for rationalists who aim at earning money as an instrumental goal.

(At the 5-second level a key component of this is learning to say "no", being able to overcome one's agreeableness as the default decision.)

(For some reason negotiation in situations of extreme power imbalance seems like it should have a different name, and I don't know what that should be.)

Comment author: Fleisch 01 June 2011 10:02:17AM *  0 points [-]

(For some reason negotiation in situations of extreme power imbalance seems like it should have a different name, and I don't know what that should be.)

Dominance or Authority spring to mind. In this video Steven Pinker argues that there are three basic relationship types, authority, reciprocity and communality, and negotiation in extreme power imbalance sounds like it uses the social rules for authority rather than reciprocity.

Comment author: Kevin 25 May 2011 11:51:22AM 3 points [-]

Previously discussed a fair amount on Less Wrong. I made a wiki article and linked some of the articles/comments.

http://wiki.lesswrong.com/wiki/Spaced_repetition

Comment author: Fleisch 31 May 2011 10:37:50AM 0 points [-]

Thank you for the link!

I think it would fit well into the introduction. You (or rather Luke Grecki) could just split the "spacing effect" link into two.

Comment author: Fleisch 25 May 2011 10:12:52AM 0 points [-]

This seems to be a useful technique, thanks for introducing it.

I have a bit of criticism concerning the article: it needs more introduction. Specifically, I would guess I'm not the only one who doesn't know what SR is in the first place; a few sentences of explanation would surely help.

In response to Singularity FAQ
Comment author: Fleisch 03 May 2011 02:59:59PM 0 points [-]

Thank you so much for writing this!

Comment author: RobinZ 13 October 2010 03:58:32PM 5 points [-]

But the quote is from a hacker news thread, isn't it? Would we want to stop quoting Dennett's books if he became a regular here?

Comment author: Fleisch 13 October 2010 07:58:07PM *  0 points [-]

Probably not, but you wouldn't (need to) quote what he wrote here.

EDIT: Or rather, what he's been writing since he got here, unless it's still novel to LW.

Comment author: Fleisch 08 October 2010 12:09:35PM 24 points [-]

Every time you imagine a person, that simulated person becomes conscious for the duration of your simulation; therefore, it is unethical to imagine people. Actually, it's just morally wrong to imagine someone suffering, but for safety reasons, you shouldn't do it at all. Reading fiction (with conflict in it) is, by this reasoning, the one human endeavor that has caused more suffering than anything else, and the FAI's first action will be to eliminate this possibility.
