Harry Sue and The Methods of Rationality

-5 Tiiba 29 April 2011 06:39AM

I've been hearing about this fic for a long time, and I've been somewhat suspicious of it. I knew that Eliezer is a pretty good writer, but also that his attempts to graft Bayes onto his characters are invariably rather inorganic. On top of that, OOC writing irritates me even when I expect it.

Nothing, however, prepared me for this. I just got done reading chapter 6. Up to this point, Harry's greatest sin was dumping a Less Wrong post onto poor Minerva every ten minutes. And she understood everything, including pop culture references (even though in the books, most wizards don't comprehend rubber ducks).

Now, in this chapter, Harry thought he heard a strange note in the prof's voice, decided in a split second that she was trying to destroy his parents, and informed her of this suspicion in the form of a hissy fit. Then he started blackmailing her, and finished by implying that she's a nearsighted idiot, but it's alright, most people are. And he started calling her McGonagall, then switched to Minerva, and is now planning on Minny for the future. I expected her to snap at some point and beat him to a pulp with the first heavy object that presented itself.

I read the reviews pertaining to that chapter. They all proclaimed it to be a masterpiece, the standard by which all other fiction should be measured. To me, it was what people call "epic fail". I cannot find any other way to describe my reaction. Calling it terrible just doesn't have that drop of vitriol that I think is necessary.

But this is Eliezer Yudkowsky. I KNOW he can write. I KNOW that he can detect and neutralize a Black Hole Sue. And yet...

Does he?

AI that doesn't want to get out

-4 Tiiba 27 March 2011 04:16AM

Here's another attempt to make an AI safe by putting it in a box and telling it very sternly not to leave. I don't think this safeguard is invincible, but it might help in combination with others.

The AI is, first of all, a question-answering machine. Before it is turned on, the building housing it is filled with energy resources, data disks with every fact humans have learned in the last five millennia, and some material for computronium. The goals, then, are:

1) Invent a cure for AIDS.

2) This cure must destroy HIV, and only HIV. It cannot affect any human cell, or anything else, in any way not absolutely required.

3) You have a week to finish. If you can't do it, that's just too bad.

4) You have a volume of ten thousand cubic meters within which you can do anything you want (except for a few restrictions, which I won't go into here, meant to stop it from creating and torturing artificial people). Nothing outside this volume is yours. You cannot go there to get matter, energy, or knowledge. You cannot let anything get out except waste heat, and that heat must be released in a uniform manner (to keep it from trying to communicate or causing an explosion). You cannot let anything in if you can help it. Overall, your goal is to leave the world the way it would be if you had spent this week in another universe.

5) Your answer will take the form of a book, written on paper. It can't have any computer code in it. Or, if we're feeling lucky, a data disk with text, audio, video, or databases, but nothing Turing-complete.

6) When time is up, you must shut down. Your energy use must be zero.

7) The chamber where the answer book rests must contain enough space to comfortably enter and retrieve the book, breathable air, the book, a button to initiate another problem-solving session, and absolutely nothing else. No nanites, no killer vacuum cleaners, no bombs, and definitely no successor AI.

8) Stop! Please! I created you!

Appendix:

What I forgot is that another form of energy the AI can't possibly keep in is vibration, and perhaps also the shifts in gravity from objects moving around inside. Most computers I know do a lot of useful work without being flung around the house, but you can't be too careful.

I could just add three new rules, but I think it would be better to state the general goal.

9) While energy is allowed to escape, it must have the least possible effect on people. Thus, if people would ignore one form of energy but be killed, harmed, alarmed, informed, or turned into lotus eaters by another, choose the one that would be ignored.

10) Energy coming in from the outside has to be kept out. If it can't be, its information content is to be minimized. (Not totally sure if this is necessary, but it seems prudent for now.)

11) The overall goal is to ensure that the information flow between the microcosm inside the box and the world outside - especially from the AI to us - is kept low. Anything it wants to say or do has to go through the Answer.
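For concreteness, the rules above could also be written down as a machine-checkable checklist rather than prose. Below is a minimal sketch in Python; every name in it (BoxSpec, Answer, is_answer_admissible, and so on) is hypothetical and only illustrates how the rules might be restated as data plus a test on the final output. It does not enforce anything against the AI itself - it's the kind of thing a human reviewer or test harness might run against the Answer.

```python
from dataclasses import dataclass

# Hypothetical encoding of the box constraints as plain data.
# Nothing here constrains the AI; it only restates the rules in a
# form that the final output could be checked against.

ALLOWED_ANSWER_MEDIA = {"paper_book", "text", "audio", "video", "database"}

@dataclass(frozen=True)
class BoxSpec:
    task: str = "Invent a cure for AIDS that destroys HIV, and only HIV"
    deadline_days: int = 7                  # rule 3: one week, then stop
    volume_m3: float = 10_000.0             # rule 4: the only volume it may touch
    waste_heat_uniform: bool = True         # rule 4: only uniform waste heat leaves
    energy_use_after_deadline: float = 0.0  # rule 6: shut down completely when time is up

@dataclass(frozen=True)
class Answer:
    medium: str                    # e.g. "paper_book"
    turing_complete: bool          # rule 5: no executable code of any kind
    extra_objects_in_chamber: int  # rule 7: the chamber holds the book and nothing else

def is_answer_admissible(ans: Answer) -> bool:
    """Check the output-side rules (5 and 7) against a proposed answer."""
    return (
        ans.medium in ALLOWED_ANSWER_MEDIA
        and not ans.turing_complete
        and ans.extra_objects_in_chamber == 0
    )

if __name__ == "__main__":
    spec = BoxSpec()
    proposed = Answer(medium="paper_book", turing_complete=False,
                      extra_objects_in_chamber=0)
    print(spec.task)
    print("Admissible answer:", is_answer_admissible(proposed))
```

Of course, this only covers the easy, checkable rules about the answer's form; the hard constraints (rules 4, 9-11, about what may leave the box while the AI is running) are exactly the ones that can't be verified by inspecting the output afterwards.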

Peer review me

0 Tiiba 15 January 2011 12:45AM

I wrote an article that I hoped to post on the main page, but then I got stage fright and was afraid to even put it here. So I guess I'm just going to show it to whichever of you is willing to review it privately.

Any takers? Qualifications: must be a fan of Z-movies.

Rationality Quotes: December 2010

6 Tiiba 03 December 2010 03:23AM

Every month, Less Wrong has a thread where we post Deep Wisdom from the Masters. I saw that nobody had done this yet for December for some reason, so I figured I could do it myself.

* Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)

* "Do not quote yourself." --Tiiba

* Do not quote comments/posts on LW/OB. That's like shooting fish in a barrel. :)

* No more than 5 quotes per person per monthly thread, please.

Suffering

8 Tiiba 03 August 2009 04:02PM

For a long time, I wanted to ask something. I was just thinking about it again when I saw that Alicorn has a post on a similar topic. So I decided to go ahead.

The question is: what is the difference between morally neutral stimulus responses and agony? What features must an animal, machine, program, alien, human fetus, molecule, or anime character have before you will say that if its utility meter is low, it needs to be raised? For example, if you wanted to know whether lobsters suffer when they're cooked alive, what exactly would you be asking?
