Comment author: HughRistik 14 September 2009 10:36:09PM *  7 points [-]

There is a sub-blogosphere focused on a particular conception of male and female psychology, centered on the blogger Roissy, which owes a lot to evolutionary psychology.

I'm a big fan of evolutionary psychology, including practical applications of it. Roissy makes a good start at applying it, but he falls prey to major ideological errors, overgeneralization, and oversimplification. I see no evidence that he has read more than a few popular books on the subject. He has discovered that even naive applications of evolutionary psychology can be incredibly powerful in the practical world, but then falls into the naive-realist pit and assumes that his theories are true just because they work better than the conventional alternatives. Furthermore, he fails at ethics really, really badly. I'm being kinda vague, but I'll go into further detail upon request.

Evolutionary psychology is great. Applied evolutionary psychology is great. Roissy just isn't doing it right.

Comment author: DonGeddis 17 September 2009 01:52:14AM -1 points [-]

It's hard to discuss the subject without the debate becoming emotional, but let me just say that Roissy's goals are to be an entertaining writer, to succeed at picking up women, and to debunk false commonsense notions of dating through real-life experience.

He's not trying to submit a peer-reviewed paper on evo psych to a rationality audience. To judge him on that basis is to kind of miss the point.

(Ethics is a whole separate question. But then, Stalin was an atheist too, wasn't he?)

Comment author: Tyrrell_McAllister 16 September 2009 10:25:06PM *  5 points [-]

My point is that it's no more convenient than having the pseudo-random number generator available. I maintain that the generator is implementing your memory in functionally the same sense. For example, you are effectively guaranteed not to get the same number twice, just as you are effectively guaranteed not to get the same poker chip twice.

ETA: After all, something in the generator must be keeping track of the passage of the marbles for you. Otherwise the generator would keep producing the same number over and over.

Comment author: DonGeddis 16 September 2009 11:57:33PM 4 points [-]

Rather than using a PRNG (which, as you say, requires memory), you could use a source of actual randomness (e.g. quantum decay). Then you don't really have extra memory with the randomized algorithm, do you?
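(A minimal sketch of the point under discussion, not from either commenter: a pseudo-random generator's "memory" is an explicit state variable that must be updated on every call, or the generator would return the same number forever, while a source of actual randomness like `os.urandom` draws entropy from outside the program and keeps no comparable state in your code.)

```python
import os

def make_lcg(seed):
    """A toy linear congruential generator with its state made explicit."""
    state = seed  # this is the generator's hidden "memory"

    def next_number():
        nonlocal state
        # Without this update, every call would return the same value.
        state = (1103515245 * state + 12345) % (2 ** 31)
        return state

    return next_number

rng = make_lcg(42)
a, b = rng(), rng()
assert a != b  # successive values differ only because state was updated

# By contrast, OS-level entropy requires no state held by this program:
raw = os.urandom(4)
assert len(raw) == 4
```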

Comment author: timtyler 01 September 2009 09:49:09PM -1 points [-]

Normally testing is done in an offline "testing" mode - using a test harness or sandbox arrangement. Tests themselves are consequently harmless.

Of course it is possible for the world to present eventualities that are not modelled by the test suite - but that's usually no big deal.

I don't think it is realistic to confine machine intelligence to the domain of provably correct software. Anyone trying that approach would rather obviously be last to the marketplace with a product.

I seriously doubt whether paranoid fantasies about DOOM will hinder progress towards machine intelligence significantly. I expect that the prophets of DOOM will be widely ignored. This isn't exactly the first time that people have claimed that the world is going to end.

Comment author: DonGeddis 02 September 2009 12:01:34AM 1 point [-]

Forget about whether your sandbox is a realistic enough test. There are even questions about how much safety you're getting from a sandbox. So, we follow your advice, and put the AI in a box in order to test it. And then it escapes anyway, during the test.

That doesn't seem like a reliable plan.

Comment author: DonGeddis 21 August 2009 04:59:35PM 3 points [-]

Re: abiogenesis. You say:

we know of no mechanism under which creation of life seems even remotely plausible.

For a plausible mechanism, see this video. (It starts with anti-creationism stuff; skip to 2:45 to watch the science.)

Comment author: JGWeissman 10 August 2009 10:01:29PM 4 points [-]

What if we transform the problem, so that you have the opportunity to pay $60 for a 1% chance to gain $5000?

Comment author: DonGeddis 10 August 2009 10:29:16PM 7 points [-]

Exactly! This is gambling, isn't it? A small expected loss, with a tiny chance of some huge gain.

If your utility for money really is so disproportionate to the actual dollar value, then you probably ought to take a trip to Las Vegas and lay down a few long-odds bets. You'll almost certainly lose your betting money (but you wouldn't "notice it in [your] monthly finances"), while there's some (small) chance that you get lucky and "change [your] month considerably".
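(The arithmetic behind "a small expected loss, with a tiny chance of some huge gain," spelled out as a sketch; the numbers are JGWeissman's transformed problem from above.)

```python
cost = 60     # price of the ticket
p_win = 0.01  # 1% chance of winning
prize = 5000  # payout on a win

expected_value = p_win * prize - cost
print(expected_value)  # -10.0: a $10 loss on average,
                       # but 1 time in 100 you come out $4940 ahead
```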

It's not hypothetical! You can do this in the real world! Go to Vegas right now.

(If the plane flight is bothering you, I'm sure we could locate some similar online betting opportunities.)

Comment author: DonGeddis 09 August 2009 06:50:56PM 6 points [-]

I think there's also a short-term/long-term thing going on with your examples. The drunk really wants to drink in the moment; they just don't enjoy living with the consequences later. Similarly, in the moment, you really do want to continue reading Reddit; it's only hours or days later that you wish you had also managed to complete that other project which was your responsibility.

I bet there's something going on here, about maximizing integrated lifetime happiness, vs. in-the-moment decision-making, possibly with great discounts to those future selves who will suffer the negative effects.
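(The "great discounts to those future selves" idea is often modeled as hyperbolic discounting; here's a sketch, my illustration rather than anything from the comment, showing the preference reversal that matches the drunk/Reddit pattern: viewed from a distance the larger-later reward wins, but in the moment the small immediate reward wins.)

```python
def hyperbolic(amount, delay, k=1.0):
    """Discounted value under hyperbolic discounting: amount / (1 + k*delay)."""
    return amount / (1 + k * delay)

# Viewed from far away, the larger-later reward looks better:
far_small = hyperbolic(10, delay=10)  # small reward in 10 days
far_large = hyperbolic(30, delay=13)  # large reward in 13 days
assert far_large > far_small

# In the moment, the immediately available small reward wins:
now_small = hyperbolic(10, delay=0)
now_large = hyperbolic(30, delay=3)
assert now_small > now_large
```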

Comment author: DonGeddis 08 August 2009 12:44:31AM 2 points [-]

I'm curious if Eliezer (or anyone else) has anything more to say about where the Born Probabilities come from. In that post, Eliezer wrote:

But what does the integral over squared moduli have to do with anything? On a straight reading of the data, you would always find yourself in both blobs, every time. How can you find yourself in one blob with greater probability? What are the Born probabilities, probabilities of? [...] I don't know. It's an open problem. Try not to go funny in the head about it.

Fair enough. But around the same time, Eliezer suggested Drescher's book Good and Real, which I've been belatedly making my way through.

And then, on pages 150-151, I see that Drescher actually attempts to explain (derive?) the Born probabilities. He also says that we can "reach the same conclusion [...] by appeal to decision theory," and references Deutsch 1999 ("Quantum Theory of Probability and Decisions") and Wallace 2003 ("Quantum Probability and Decision Theory, Revisited").

My problem: I still don't get it. I loved Eliezer's commonsense explanation of QM and MWI. I'm looking for something at the same level, just as intuitive, for the Born probabilities.

Anyone willing and able to take on that challenge?
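(For what it's worth, the rule itself, as opposed to its justification, is easy to state. A sketch with made-up amplitudes: the probability of finding yourself in a blob is that blob's squared modulus, normalized over all blobs. The mystery Eliezer and Drescher are wrestling with is *why* this recipe gives probabilities at all.)

```python
import math

amplitudes = [complex(3, 0), complex(0, 4)]    # two "blobs"
sq_moduli = [abs(a) ** 2 for a in amplitudes]  # 9.0 and 16.0
total = sum(sq_moduli)                         # 25.0
born_probs = [m / total for m in sq_moduli]    # [0.36, 0.64]
assert math.isclose(sum(born_probs), 1.0)
```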

Comment author: Eliezer_Yudkowsky 31 July 2009 06:31:33AM 4 points [-]

I'm not sure what the Dust is, but it is learning. I named it after the scariest thing imaginable. I had in mind something as alien as natural selection, but not actually working that particular way, and with the singleminded focus of a paperclip maximizer.

Comment author: DonGeddis 31 July 2009 08:48:27PM 4 points [-]

"Dust" has been used in SF for nanotech before. And especially runaway nanotech, that is trying to disassemble everything, like a doomsday war weapon that got out of control. I recalled the paperclip maximizer too. Oh, and the Polity/Cormac SF books by Neal Asher, with Jain nodes (made by super AIs) that seem to have roughly the same objective.

Comment author: DonGeddis 17 July 2009 03:59:32AM 26 points [-]

Is there anything that you consider proven beyond any possibility of doubt by both empirical evidence and pure logic, and yet saying it triggers automatic stream of rationalizations in other people?

  • Hitler had a number of top-level skills, and we could learn (some) positive lessons from his example(s).

  • Eugenics would improve the human race (genepool).

  • Human "racial" groups may have differing average attributes (like IQ), and these may contribute to the explanation of historical outcomes of those groups.

(Perhaps these aren't exactly topics that Less Wrong readers (in particular) would run away from. I was attempting to answer the question by riffing off Paul Graham's idea of taboos. What is it "not appropriate" to talk about in ordinary society? Politeness might trigger the rationalization response...)

Comment author: rlpowell 08 April 2009 07:16:03PM 8 points [-]

This is sort-of true, but with one really, really big caveat that people seem to forget: any form of fighting that is controlled basically screws large portions of many styles.

If you go into an MMA tournament and deliberately break someone's arm, you aren't going to be asked back. Let alone if you break their neck. Furthermore, non-crazy martial artists don't even want to: there's too much respect for that. There are styles that are centered around causing maximum damage as quickly as possible, and they are entirely useless in MMA fights. You're never going to see a hard-style master being competitive in an MMA tournament, because 90% of what they know is irrelevant.

-Robin

Comment author: DonGeddis 08 May 2009 04:06:51AM 13 points [-]

rlpowell, you are incorrect. You are spouting an untested theory that is repeated as fact by those with a vested interest in avoiding the harsh light of truth.

In actual fact, there is no problem with breaking someone's arm in an MMA fight (see Mir vs. Sylvia in the UFC, for example). It's also close to impossible to break someone's neck (deliberately), despite what you may see in movies.

The "we're too dangerous to fight" is an easy meme to propagate. But let me just ask you this: let's just say, hypothetically, that your theory ("maximum damage" masters are "useless in MMA fights") was false. How would you ever know? Assuming that someone did not yet have a belief about that proposition, what kind of evidence are you actually aware of, about whether the statement is true or false?
