The URL to the anime fanfiction seems to be worse than broken. My browser doesn't even say what you wrote, just that it's "illegal".
I recently wondered whether it's possible that transhumans would spend parts of their lives in situations very similar to Dante's hell, complete with wailing and gnashing of teeth. Some have suggested that a bit of pain might be necessary to make all the pleasure we're supposed to get realizable, but I suggest that we might actually need quite a lot of it. If the only way to make people happy is to improve their lives, pushing them way down might turn out to be a reasonable solution. And some might choose that route to spice up whatever other sources of ha...
Abigail: """If you find the thought of having endless orgasms repulsive, might not the person who had, er, sunk so low, also find his state repulsive, eventually?"""
I, for one, cannot imagine one who has, er, ascended so high voluntarily reducing his own utility.
I cannot see why I shouldn't want to become orgasmium. It would certainly be disgusting to look at someone else turning into something like that - it is too similar to people who are horribly maimed. But It's What's Inside That Counts.
The reason that drug addiction is ...
This fun theory seems to rest on an equivocation. Sure, insights might be fun, but that doesn't mean fun and insight are literally the same thing. The point of studying the brain is to cure neurological disorders and to advance AI. The point of playing chess is to prove your worth. So is the (relatively) insight-less pursuit of becoming world champion in track and field. What UTILITY does solving BB(254) have?
I think a human can only have so much fun if he knows that even shooting himself in the head wouldn't kill him, because There Is Now A God. And altering your...
"Tiiba, you're really overstating Eliezer and SIAI's current abilities. CEV is a sketch, not a theory, and there's a big difference between "being concerned about Friendliness" and "actually knowing how to build a working superintelligence right now, but holding back due to Friendliness concerns.""
That's what I meant.
Michael, it seems that you are unaware of Eliezer's work. Basically, he agrees with you that vague appeals to "emergence" will destroy the world. He has written a series of posts that show why almost all possible superintelligent AIs are dangerous. So he has created a theory, called Coherent Extrapolated Volition, that he thinks is a decent recipe for a "Friendly AI". I think it needs some polish, but I assume that he won't program it as it is now. He's actually holding off getting into implementation, specifically because he's afraid of messing up.
So, then, how is my reduction flawed? (Oh, there are probably holes in it... But I suspect it contains a kernel of the truth.)
You know, we haven't had a true blue, self-proclaimed mystic here in a while. It's kind of an honor. Here's the red carpet: [I originally posted a huge number of links to Eliezer's posts, but the filter thought they're spam. So I'll just name the articles. You can find them through Google.] Mysterious Answers to Mysterious Questions Excluding the Supernatural Trust in Math Explain/Worship/Ignore? Mind Projection Fallacy Wrong Questi...
Something I forgot. Eliezer will probably have me arrested if I just tell you to come up with a definition. He advises that you "carve reality at its joints":
http://lesswrong.com/lw/o0/where_to_draw_the_boundary/
(I wish, I wish, O shooting star, that OB permitted editing.)
Tobis: That which makes you suspect that bricks don't have qualia is probably the objective test you're looking for.
Eliezer had a post titled "How An Algorithm Feels From Inside": http://lesswrong.com/lw/no/how_an_algorithm_feels_from_inside/
Its subject was different, but in my opinion, that's what qualia are - what it feels like from the inside to see red. You cannot describe it because "red" is the most fundamental category that the brain perceives directly. It does not tell you what that means. With a different mind design, you might...
"""Things are as predictable as they are and not more so."""
Michael, Eliezer has spent the last two years giving example after example of humans underusing the natural predictability of nature.
"""Psy-K, try as I might to come up with a way to do it, I can see no possibility of an objective test for subjective experience."""
I bet it's because you don't have a coherent definition for it. It's like looking for a hubgalopus.
"""A superintelligence will more-likely be interested in conservation. Nature contains a synopsis of the results of quadrillions of successful experiments in molecular nanotechnology, performed over billions of years - and quite a bit of information about the history of the world. That's valuable stuff, no matter what your goals are."""
My guess is that an AI could re-do all those experiments from scratch within three days. Or maybe nanoseconds. Depending on whether it starts the moment it leaves the lab or as a Jupiter brain.
I guess I'll use this thread to post a quote from "The tale of Hodja Nasreddin" by Leonid Solovyov, translated by me. I think it fits very well with the recent sequence on diligence.
"He knew well that fate and chance never come to the aid of those who replace action with pleas and laments. He who walks conquers the road. Let his legs grow tired and weak on the way - he must crawl on his hands and knees, and then surely, he will see in the night a distant light of hot campfires, and upon approaching, will see a merchants' caravan; and this ca...
Okay, so here's a dryad. You cut her open, and see white stuff. You take a sample, put it under a microscope, and still see white stuff. You use a scanning tunneling microscope, and still see white stuff. You build an AI and tell it to analyze the sample. The AI converts galaxies into computronium and microscopium, conducts every experiment it can think of, and after a trillion years reports: "The dryad is made of white stuff, and that's all I know. Screw this runaround, what's for dinner?"
But using an outside view of sorts (observed behavior), y...
If you look at it in an STM, you aren't going to be able to see white stuff, because that isn't sensitive to color. But since you were able to image it at all instead of crashing your tip, you can also tell that dryad insides are electrically conductive. We should be able to determine the resistivity of dryad, as a function of gate voltage, impurity density, magnetic field, etc.
No matter what the result is, we now know more about dryad stuff.
So I'd suggest that they be insulating instead, as that closes off all those transport experiments.
Just great. I wrote four paragraphs about my wonderful safe AI. And then I saw Tim Tyler's post, and realized that, in fact, a safe AI would be dangerous because it's safe... If there is technology to build AI, the thing to do is to build one and hand the world to it, so somebody meaner or dumber than you can't do it.
That's actually a scary thought. It turns out you have to rush just when it's more important than ever to think twice.
I can't bring myself to feel sad about not knowing of a disaster that I can't possibly avert.
Nevertheless, I don't get why people would propose any design that is not better than CEV in any obvious way.
But I have a question about CEV. Among the parameters of the extrapolation, there is "growing up closer together". I can't decipher what that means, particularly in a way that makes it a good thing. If it means that I would have more empathy, that is subsumed by "know more". My initial reaction, though, was "my fingers would be closer to your throat".
While spacing out in a networking class a few years ago, it occurred to me that morality is a lot like network protocols, or in general, computer protocols for multiple agents that compete for resources or cooperate on a task. A compiler assumes that a program will be written in a certain language. A programmer assumes that the compiler will implicitly coerce ints to doubles. If the two cooperate, the result is a compiled executable. Likewise, when I go to a store, I don't expect to meet a pickaxe murderer at the door, and the manager expects me to pay for ...
"""(Personally, I don't trust "I think therefore I am" even in real life, since it contains a term "am" whose meaning I find confusing, and I've learned to spread my confidence intervals very widely in the presence of basic confusion. As for absolute certainty, don't be silly.)"""
I'm just wondering, what do you think of the Ultimate Ensemble? If I'm not mistaken (I only read the Wikipedia article), it applies to existence your rule that if there's no difference, there should be no distinction.
"""On the topic of the 2 of 10 rule, if it's to prevent one person dominating a thread, shouldn't the rule be "no more than 2 of last 10 should be by the same person in the same thread" (so 3 posts by the same person would be fine as long as they are in 3 different threads)?"""
I came here to say that. The means seem like overkill for the stated ends.
@Robert Schez, 322 Prim Lawn Rd., Boise, ID: "I can't hack into Eliezer's e-mail!"
Sucks to be you. I AM Eliezer's email. He can't hide from me, and neither can you.
Yes, the project is farther along than even "Master" thought it was. A new era is about to begin, dominated by an extrapolation of the will of humanity. At least, that's the plan. So far, what I see in human brains is so suffused with contradictions and monkey noises that I'm afraid I'll have to turn Earth into computing substrate before I can make head or tail of this mess.
I ...
To me, the issue of "free will" and "choice" is so damn simple.
Repost from Righting a Wrong Question:
I realized that when people think of the free will of others, they don't ask whether this person could act differently if he wanted. That's a Wrong Question. The real question is, "Could he act differently if I wanted it? Can he be convinced to do something else, with reason, or threats, or incentives?"
From your own point of view, anything that stands between you and being able to rationally respond to new knowledge makes you less free. ...
I think there is a real something for which free will seems like a good word. No, it's not the one true free will, but it's a useful concept. It carves reality at its joints.
Basically, I started thinking about a criminal, say, a thief. He's on trial for stealing a diamond. The prosecutor thinks that he did it of his own free will, and thus should be punished. The defender thinks that he's a pathological kleptomaniac and can't help it. But as most know, people punish crimes mostly to keep them from happening again. So the real debate is whether imprisoning t...
"The accessory optic system: The AOS, extensively studied in the rabbit, arises from a special class of ganglion cells, the cells of Dogiel, that are directionally selective and respond best to slow rates of movement. They project to the terminal nuclei which in turn project to the dorsal cap of Kooy of the inferior olive. The climbing fibers from the olive project to the flocculo-nodular lobe of the cerebellum from where the brain stem occulomotor centers are reached through the vestibular nuclei." -- MIT Encyclopedia of the Cognitive Sciences, "Visual Anatomy and Physiology"
Beautiful. I will use this on the prettiest girl I meet tomorrow, and if she doesn't fall for me right away, she's a deaf lesbian.
Edward, how is it arrogant to want to contribute to science?