Comment author: Mike_Blume 26 February 2009 08:50:06PM 0 points

Jeremy: we're drifting from the topic, but I don't believe the Final Fantasy games are produced, distributed, or sold by Sony. Thus the decision to release FF for multiple platforms was not a decision made by Sony, simply one which affected Sony.

Comment author: Mike_Blume 26 February 2009 06:01:18AM 2 points

John: I don't think Eliezer is saying that a stock which has recently risen is now more likely to fall. Quite the opposite, in fact. The expected change in any stock's price should be roughly zero: weighted by the size of the move, it is about as likely to fall as to rise. That is, if I hold a share of XYZ, which costs $100, and I anticipate a 99% chance that the stock will rise to $101 tomorrow, then I should also expect a 1% chance that it will drop to $1 tomorrow. Were that not true, the share would be worth nearly $101 *right now*, not tomorrow.

See also: Conservation of Expected Evidence
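The arithmetic can be checked directly. A minimal sketch, using the hypothetical probabilities and prices from the comment above:

```python
# Hypothetical scenario: a share of XYZ costs $100 today.
# Assume a 99% chance it rises to $101 tomorrow and a 1% chance it drops to $1.
price_now = 100.00
outcomes = [(0.99, 101.00), (0.01, 1.00)]  # (probability, price tomorrow)

# Probability-weighted expected price tomorrow.
expected_tomorrow = sum(p * v for p, v in outcomes)

# The expectation comes out to today's price (within float rounding):
# no predictable gain is left on the table.
print(abs(expected_tomorrow - price_now) < 1e-9)
```

If the 1% crash scenario were removed, the expected price tomorrow would be $101, and arbitrageurs would bid the share up to nearly $101 today.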

Comment author: Mike_Blume 03 February 2009 11:19:00PM 0 points

Peter - I am, sadly, not an astrophysicist, but it seems reasonable that such an act would substantially decrease the negentropy available from that matter, which is important if you're a species of immortals thinking of the long haul.

Comment author: Mike_Blume 03 February 2009 10:31:10AM 4 points

Svein: No, you've got to suggest someone else to stun, I'm pretty sure.

I doubt Eliezer's grand challenge to us would be to contribute less than four bits to his story.

Comment author: Mike_Blume 03 February 2009 10:18:41AM 0 points

Carl - I'm pretty sure either way we get three more chapters.

Comment author: Mike_Blume 02 February 2009 12:36:00AM 3 points

Eliezer, if I understand you correctly, you would prefer a universe tiled with paperclips to one containing both a human civilization and a babyeating one. Let us say the babyeating captain shares your preference, and you and he have common knowledge of both these preferences.

Would you now press a button exterminating humanity?

Comment author: Mike_Blume 01 February 2009 09:27:29AM 7 points

Apparently the Super Happy race have adopted Knuth's up-arrow notation more broadly than we have.

In response to OB Status Update
Comment author: Mike_Blume 27 January 2009 09:50:54PM 0 points

Ian: A public-password anonymous account is a good idea, but that account must not be able to delete or edit its own posts, or you invite chaos. On Reddit once, someone started a thread for interesting confessions and created an account for the occasion, making its password public. Some good stories went up, then were deleted by a random vandal a few hours later.

Comment author: Mike_Blume 21 January 2009 06:14:54AM 9 points

Extending Aurini's point, I think it is worth asking to what extent we have already integrated catpeople into our culture today. I think many of us would agree that the women featured in pornographic films are catwomen of a kind. What about pop stars, boy bands, etc.? What about mainstream fiction? On Firefly, Kaylee is beautiful, has an above-female-average sex drive, and falls in love with the introverted, socially awkward intellectual character - isn't she exactly the sort of catgirl most male sci-fi fans would want?

It seems like the problems you've identified here don't suddenly begin at the moment you switch on a fully convincing interactive simulation of a human being - there is a continuum, and as our technology progresses, we will naturally tend to move along it. Where shall those of us who look ahead and wish for a eudaemonic future dig our trenches and hold our ground?

(posting from a different homepage today - it seemed appropriate, given the topic)

In response to Sympathetic Minds
Comment author: Mike_Blume 19 January 2009 04:32:47PM 5 points

"To a paperclip maximizer, the humans are just machines with pressable buttons. No need to feel what the other feels - if that were even possible across such a tremendous gap of internal architecture. How could an expected paperclip maximizer "feel happy" when it saw a human smile? "Happiness" is an idiom of policy reinforcement learning, not expected utility maximization. A paperclip maximizer doesn't feel happy when it makes paperclips, it just chooses whichever action leads to the greatest number of expected paperclips. Though a paperclip maximizer might find it convenient to display a smile when it made paperclips - so as to help manipulate any humans that had designated it a friend."

Correct me if I'm wrong, but haven't you just pretty accurately described a human sociopath?
