
Comment author: Justin_Corwin 26 April 2009 08:49:33PM 0 points

This summary was very interesting. Even if you pull back your time commitments, it would be nice if you published progress reports from time to time. I've been trying to engage with LessWrong out of interest in its purpose and trajectory, but I missed a lot of the stuff you linked to. It was very helpful and has improved my opinion of the place.

I'm still mostly reading, but there is something nascent about it that intrigues me. I'm not sure that it's right to judge things based on what they might become, but that's all I have at the moment.

In response to An African Folktale
Comment author: Justin_Corwin 16 February 2009 06:57:10AM 8 points

Ugh. Really? Now we're taking a single depressing moralistic story and projecting statements across a population/culture based on it?

Really? And it's characteristic and telling in a way that, say, the story of Job, or The Toad and the Scorpion, or The Three Little Men in the Wood aren't about Western nations? Or are we moving backwards from the knowledge that Africa has a "failing" culture?

Comment author: Justin_Corwin 23 October 2008 08:13:00PM 0 points

Eliezer: "I don't know if I've mentioned this publicly before..."

You definitely haven't mentioned that publicly in any place that I read, which makes me glad I decided to dip into the comments of this post. I always felt a tacit acceptance, or at least no active disagreement, on your part with Jef's posts on similar subjects on SL4 and other online fora (at least any available to immediate recall).

The subject of which parts of my influences, tendencies and opinions, and identifiable hardware quirks I call 'myself' is a driver of cycles of stress and self-doubt for me. Nobody here has mentioned anything similar, but I tend to experience this in two ways: extreme doubt and nostalgia regarding tendencies and traits I've eliminated or lost from myself, and increasing ambivalence and bifurcated opinion about things I feel I'm going to have to jettison or become averse to in the future.

The concrete issue of techniques to condition against bias, or mental procedures to work through unwanted tendencies, is something I'm always fascinated to hear scraps of from other rationalists and self-modifiers. It can't all be rationalization and reflection; I know from personal experience that those can't touch everything, so how do other people correct? My own procedures seem fairly haphazard, and I adopt them only out of pragmatism, not out of any confidence in their theoretical grounding.

Comment author: Justin_Corwin 02 September 2008 04:55:57PM 5 points

Whether or not one agrees with the soldier quote, what does it have to do with rationality?

In response to I'd take it
Comment author: Justin_Corwin 02 July 2008 05:04:52PM 0 points

I have to echo Eliezer's fear of useful spending. I have some contingencies, but I could not absorb that much money intelligently.

To spend any fraction of that, you'd need an organization, and synchronizing an organization's goals with my own is another hard problem.

If it's existing wealth, then destroying the unusable portion of it does make everyone else proportionally richer, at least until the markets recover. If it's new wealth, then probably the best thing to do would be to stabilize existing selected markets with enormous and slow-moving investments, until I could determine what to do.

Comment author: Justin_Corwin 10 December 2007 08:37:56AM 1 point

Last time we spoke about it, Eliezer was of the opinion that the last scene implies that A***** V**** failed. I thought it was more ambiguous than that.

Comment author: Justin_Corwin 10 December 2007 07:50:28AM 5 points

I have also speculated on the need for a strong exterior threat. The problem is that there isn't one that wouldn't either be solved too quickly or introduce its own polarizing problems.

A super villain doesn't work because they lose too quickly, see Archimedes, Giorgio Rosa, et al.

Berserkers are bad because they either won't work or work too well. I can't see any way to make them a long-term stable threat without explicitly programming them to lose.

A rogue AI doesn't work, again because it either self-destructs or kills us too quickly, or possibly sublimes, depending on quality and goal structure.

The best proposal I've ever heard is a rival species, something like an ant the size of a dog, whose lack of individual intelligence was offset by stealth hives, cooperation, and physical toughness. But it would be hard to engineer one.

In response to Superhero Bias
Comment author: Justin_Corwin 01 December 2007 03:36:38AM 3 points

What? I didn't realize humility had become an objective value that changes the results of your actual actions. Who cares why people save lives? Or how brave they are inside?

Gandhi built a movement that your anonymous nonviolent protester belonged to, and he has inspired millions to be better people. I think that's a plus, and I don't really care if someone 'more deserving' 'sacrificed more' in a greater cause, but to less effect.