Open Thread, November 1 - 7, 2013

5 Post author: witzvo 02 November 2013 04:37PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Comments (299)

Comment author: Nick_Tarleton 04 November 2013 01:42:46AM 11 points [-]

Have Eliezer's views (or anyone else's who was involved) on the Anthropic Trilemma changed since that discussion in 2009?

Comment author: Eliezer_Yudkowsky 06 November 2013 08:55:26PM 5 points [-]

There's no brief answer. I've been slowly gravitating towards, but am not yet convinced by, the suspicion that making a computer out of twice as much material causes there to be twice as much person inside. Reasons: there's no exact point where splitting a flat computer in half becomes a separate causal process, and it resembles the behavior of the Born probabilities. But that's not an update to the anthropic trilemma per se.

Comment author: Armok_GoB 06 November 2013 09:42:24PM 8 points [-]

Hmm, conditional on that being the case, do you also believe that the closer to physics the mind is, the more person there is in it? Example: action potentials encoded in the positions of rods in a Babbage engine vs. spread over fragmented RAM used by a functional programming language with lazy evaluation in the cloud.

Comment author: Eliezer_Yudkowsky 07 November 2013 02:49:42AM 4 points [-]

Good question. Damned if I know.

Comment author: Psy-Kosh 09 November 2013 09:59:30PM 1 point [-]

That seems to be a serious GAZP violation. I'm trying to figure out how to put my thoughts on this into words, but... there doesn't seem to be anywhere the data is stored that could "notice" the difference. The actual program that is being the person doesn't contain a "realness counter". There's nowhere in the data that could "notice" the fact that there's, well, more of the person. (Whatever it even means for there to be "more of a person".)

Personally, I'm inclined in the opposite direction: even N separate copies of the same person are the same as 1 copy of the same person until they diverge, and how much difference there is between them is, well, how separate they are.

(Though, of course, those funky Born stats confuse me even further. But I'm fairly inclined toward the view that extra copies of the exact same mind don't add more person-ness, while as they diverge from each other there may be more person-ness. Perhaps it would even be meaningful to talk about additional fractions of person-ness, rather than one and then suddenly two whole persons; I'm less sure on that.)

Comment author: Nick_Tarleton 25 November 2013 08:35:58PM 0 points [-]

Why not go a step further and say that 1 copy is the same as 0, if you think there's a non-moral fact of the matter? The abstract computation doesn't notice whether it's instantiated or not. (I'm not saying this isn't itself really confused - it seems like it worsens and doesn't dissolve the question of why I observe an orderly universe - but it does seem to be where the GAZP points.)

Comment author: Psy-Kosh 02 December 2013 10:09:23PM 1 point [-]

Hrm... The whole exist vs. non-exist thing is odd and confusing in and of itself. But so far it seems to me that an algorithm can meaningfully note "there exists an algorithm doing/perceiving X", where X represents whatever it itself is doing/perceiving/thinking/etc. But there doesn't seem to be any difference between 1 and N of them as far as that goes.

Comment author: Nick_Tarleton 08 November 2013 05:04:24AM *  0 points [-]

I wonder if it would be fair to characterize the dispute summarized in/following from this comment on that post (and elsewhere) as over whether the resolutions to (wrong) questions about anticipation/anthropics/consciousness/etc. will have the character of science/meaningful non-moral philosophy (crisp, simple, derivable, reaching consensus across human reasoners to the extent that settled science does), or that of morality (comparatively fuzzy, necessarily complex, not always resolvable in principled ways, not obviously on track to reach consensus).

Comment author: JoshuaZ 04 November 2013 01:35:47AM 8 points [-]

New research suggests that the amount of variance in DNA among individual cells in a person may be much higher than is normally believed. See here.

Comment author: witzvo 04 November 2013 07:01:10PM *  3 points [-]

... researchers isolated about 100 neurons from three people posthumously. The scientists took a high-level view of the entire genome -- looking for large deletions and duplications of DNA called copy number variations or CNVs -- and found that as many as 41 percent of neurons had at least one unique, massive CNV that arose spontaneously, meaning it wasn't passed down from a parent. The CNVs are spread throughout the genome, the team found.

Edit: see the paper for more precise statements.

Comment author: CellBioGuy 05 November 2013 05:19:59AM *  2 points [-]

I've already seen work to the effect that somatic cells often have ~10x the point mutations per human generation as the germline, which is protected by a small number of divisions per generation and low levels of metabolism and transcription. It was in mitochondrial rather than nuclear DNA, but the idea is similar.

Comment author: Joshua_Blaine 06 November 2013 03:00:53AM 6 points [-]

Does anyone here have any serious information regarding Tulpas? When I first heard of them, they immediately seemed to be the kind of thing that is obviously and clearly a very bad idea, and may not even exist in the sense that people describe them. A very obvious sign of a person who is legitimately crazy, even.

Naturally, my first reaction is the desire to create one myself (one might say I'm a bit contrarian by nature). I don't know any obvious reason not to (ignoring social stigma and the time-consuming initial investment), and there may be some advantage to having one, such as parallel focus, more "outside" self-analysis, etc. I don't really know much of anything right now, which is why I'm asking if there's been any decent research done already.

Comment author: Armok_GoB 06 November 2013 04:57:57PM 3 points [-]

I've been doing some research (mainly hanging on their subreddit) and I think I have a fairly good idea of how tulpas work and the answers to your questions.

There are a myriad of very different things tulpas are described as, and thus "tulpas exist in the way people describe them" is not well defined.

There indisputably exist SOME specific interesting phenomena that are the referent of the word "tulpa".

I estimate a well-developed tulpa's moral status to be similar to that of a newborn infant, late-stage Alzheimer's victim, dolphin, or beloved family pet dog.

I estimate its ontological status to be similar to that of a video game NPC, recurring dream character, or schizophrenic hallucination.

I estimate its power over reality to be similar to that of a human (with lower intelligence than their host) locked in a box and only able to communicate with one specific other human.

It does not seem that deciding to make a tulpa is a sign of being crazy. Tulpas themselves seem not to be automatically unhealthy, and can often help their host overcome depression or anxiety. However, there are many signs that the act of making a tulpa is dangerous: it can trigger latent tendencies or be easily done in a catastrophically wrong way. I estimate the risk is similar to doing extensive meditation or taking a single largish dose of LSD. For this reason I have not attempted and will not attempt making one.

I am too lazy to find citations or examples right now, but I probably could. I've tried to be a good rationalist and am fairly certain of most of these claims.

Comment author: NancyLebovitz 07 November 2013 01:28:48PM 2 points [-]

Has anyone worked on making a tulpa which is smarter than they are? This seems at least possible if you assume that many people don't let themselves make full use of their intelligence and/or judgement.

Comment author: Armok_GoB 07 November 2013 05:04:34PM 4 points [-]

Unless everything I think I understand about tulpas is wrong, this is at the very least significantly harder than just thinking yourself smarter without one. All the idea generation is done before credit is assigned to either the "self" or the "tulpa".

What there ARE several examples of, however, are tulpas that are more emotionally mature, better at luminosity, and don't share all their host's preconceptions. This is not exactly smarts, though, or even general-purpose formal rationality.

One CAN imagine scenarios where you end up with a tulpa smarter than the host. For example, the host might have learned helplessness, or the tulpa might be imagined as "smarter than me" so that all the brain's good ideas get credited to it.

Disclaimer: this is based only on lots of anecdotes I've read, gut feeling, and basic stuff that should be common knowledge to any LWer.

Comment author: TheOtherDave 07 November 2013 05:40:46PM 9 points [-]

I'm reminded of many years ago, a coworker coming into my office and asking me a question about the design of a feature that interacts with our tax calculation.

So she and I created this whole whiteboard flowchart working out the design, at the end of which I said "Hrm. So, at a high level, this seems OK. That said, you should definitely talk to Mark about this, because Mark knows a lot more about the tax code than I do, and he might see problems I missed. For example, Mark will probably notice that this bit here will fail when $condition applies, which I... um... completely failed to notice?"

I could certainly describe that as having a "Mark" in my head who is smarter about tax-code-related designs than I am, and there's nothing intrinsically wrong with describing it that way if that makes me more comfortable or provides some other benefit.

But "Mark" in this case would just be pointing to a subset of "Dave", just as "Dave's fantasies about aliens" does.

Comment author: gwern 07 November 2013 08:04:37PM 6 points [-]

See also 'rubberducking' and previous discussions of this on LW. My basic theory is that reasoning was developed for adversarial purposes, and by rubberducking you are essentially roleplaying as an 'adversary' which triggers deeper processing (if we ever get brain imaging of system I vs system II thinking, I'd expect that adversarial thinking triggers system II more compared to 'normal' self-centered thinking).

Comment author: TheOtherDave 07 November 2013 08:21:14PM 3 points [-]

Yes. Indeed, I suspect I've told this story before on LW in just such a discussion.

I don't necessarily buy your account -- it might just be that our brains are simply not well-integrated systems, and enabling different channels whereby parts of our brains can be activated and/or interact with one another (e.g., talking to myself, singing, roleplaying different characters, getting up and walking around, drawing, etc.) gets different (and sometimes better) results.

This is also related to the circumlocution strategy for dealing with aphasia.

Comment author: Kaj_Sotala 02 January 2014 02:29:17PM *  1 point [-]

My basic theory is that reasoning was developed for adversarial purposes

Obligatory link.

Comment author: Armok_GoB 07 November 2013 06:49:06PM *  1 point [-]

Yeah, in that case presumably the tulpa would help - but not necessarily significantly more than a non-tulpa model like that, which requires considerably less work and risk.

Basically, a tulpa can technically do almost anything you can... but you can do almost all of those things without a tulpa too, and for almost all of them there's some much easier and at least as effective way to do the same thing.

Comment author: ChristianKl 08 November 2013 04:04:09PM 0 points [-]

Basically, a tulpa can technically do almost anything you can...

Mental processes like waking up without an alarm clock at a specific time aren't easy. I know a bunch of people who have that skill, but it's not like there's a step-by-step manual that you can easily follow to gain that ability.

A tulpa can do things like that. There are many mental processes that you can't access directly but that a tulpa might be able to access.

Comment author: Armok_GoB 08 November 2013 05:22:40PM 1 point [-]

I am surprised to know there isn't such a step by step manual, suspect that you're wrong about there not being one, and in either case know about a few people that could probably easily write one if motivated to do so.

But I guess you could make this argument: that a tulpa is more flexible and has a simpler user interface, even if it's less powerful and has a bunch of logistical and moral problems. I don't like it, but I can't think of any counterarguments other than it being lazy and unaesthetic, and that the kind of meditative people who make tulpas should not be the kind to take this easy way out.

Comment author: ChristianKl 09 November 2013 05:39:00AM 2 points [-]

I am surprised to know there isn't such a step by step manual, suspect that you're wrong about there not being one, and in either case know about a few people that could probably easily write one if motivated to do so.

My point isn't so much that it's impossible but that it isn't easy.

Creating a mental device that only wakes me up would be easier than creating a whole Tulpa, but once you do have a Tulpa you can reuse it a lot.

Let's say I want to practice Salsa dance moves at home. Visualising a full dance partner completely just for the purpose of having a dance partner at home wouldn't be worth the effort.

I'm not sure about how much you gain by pair programming with a Tulpa, but the Tulpa might be useful for that task.

It takes a lot of energy to create it the first time but afterwards you reap the benefits.

I don't like it, but I can't think of any counterarguments other than it being lazy and unaesthetic, and that the kind of meditative people who make tulpas should not be the kind to take this easy way out.

Tulpa creation involves quite a lot of effort, so it doesn't seem like the lazy road.

Comment author: Armok_GoB 09 November 2013 04:29:04PM 0 points [-]

Hmm, you have a point, I hadn't thought about it that way. If it wasn't so dangerous I would have asked you to experiment.

Comment author: hesperidia 02 December 2013 04:00:36AM *  0 points [-]

Mental processes like waking up without an alarm clock at a specific time aren't easy. I know a bunch of people who have that skill, but it's not like there's a step-by-step manual that you can easily follow to gain that ability.

I do not have "wake up at a specific time" ability, but I have trained myself to have "wake up within ~1.5 hours of the specific time" ability. I did this over a summer break in elementary school because I learned about how sleep worked and thought it would be cool. Note that you will need to have basically no sleep debt (you consistently wake up without an alarm) for this to work correctly.

The central point of this method is this: a sleep cycle (the time it takes to go from a light stage of sleep to the deeper stages of sleep and back again) is about 1.5 hours long. If I am not under stress or sleep debt, I can estimate my sleeping time to the nearest sleep cycle. Using the sleep cycle as a unit of measurement lets me partition out sleep without being especially reliant on my (in)ability to perceive time.

The way I did it is this (each step was done until I could do it reliably, which took up to a week each for me [but I was a preteen then, so it may be different for adults]):

  1. Block off approximately 2 hours (depending on how long it takes you to fall asleep), right after lunch so it has the least danger of merging with your consolidated/night sleep, and take a nap. Note how this makes you feel.

  2. Do that again, but instead of blocking off the 2 hours with an alarm clock, try doing it naturally, and awakening when it feels natural, around the 1.5h mark (repeating this because it is very important: you will need to have very little to no accumulated sleep debt for this to work). Note how this makes you feel.

  3. Do that again, but with a ~3.5-hour block. Take two 1.5 hour sleep cycle naps one after another (wake up in between).

  4. During a night's sleep, try waking up between every sleep cycle. Check this against [your sleep time in hours / 1.5h per sleep cycle] to make sure that you caught all of them.

  5. Block off a ~3.5 hour nap and try taking it as two sleep cycles without waking up in between them. (Not sure about the order with this point and the previous one. Did I do them in the opposite order? I'm reconstructing from memory here. It's probably possible to make this work in either order.)

  6. You probably know from step 4 how many sleep cycles you have in a night. Now you should be able to do things like consciously split up your sleep biphasically, or waking up a sleep cycle earlier than you usually do.

I then spent the rest of summer break with a biphasic "first/second sleep" rhythm, which disappeared once I was in school and had to wake up at specific times again.

To this day, I sleep especially lightly, must take my naps in 1.5 hour intervals, and will frequently wake up between sleep cycles (I've had to keep a clock on my nightstand since then so I can orient myself if I get woken unexpectedly by noises, because a 3:30AM waking is different from a 5AM waking, but they're at the same point on the cycle so they feel similar). I also almost always wake up 10-45 minutes before any set alarms, which would be more useful if the spread was smaller (45 minutes before I actually need to wake up seems like a waste). It's a cool skill to have, but it has its downsides.
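
For what it's worth, the step-4 check above is just division and rounding; here is a minimal sketch in Python, assuming the ~1.5-hour cycle length described above (real cycle lengths vary from person to person, so treat the numbers as illustrative):

    # Rough check for step 4: how many sleep cycles (and hence between-cycle
    # wakings) to expect in a given amount of sleep. Assumes ~1.5h cycles.
    CYCLE_HOURS = 1.5

    def cycles_in(sleep_hours):
        """Estimate the number of sleep cycles in a night's sleep."""
        return round(sleep_hours / CYCLE_HOURS)

    print(cycles_in(7.5))  # -> 5
    print(cycles_in(9.0))  # -> 6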

Comment author: TheOtherDave 07 November 2013 07:57:55PM 0 points [-]

a tulpa can technically do almost anything you can...

Yes, I would expect this.
Indeed, I'm surprised by the "almost" -- what are the exceptions?

Comment author: Armok_GoB 07 November 2013 09:26:58PM 0 points [-]

Anything that requires using your body and interacting physically with the world.

Comment author: TheOtherDave 07 November 2013 09:33:58PM 0 points [-]

I'm startled. Why can't a tulpa control my body and interact physically with the world, if it's (mutually?) convenient for it to do so?

Comment author: Armok_GoB 07 November 2013 09:44:54PM 0 points [-]

Well, if you consider that to be the tulpa doing it on its own, then no, I can't think of any specific exceptions. Most tulpas can't do that trick, though.

Comment author: TheOtherDave 06 November 2013 05:11:42PM 2 points [-]

I estimate a well-developed tulpa's moral status to be similar to that of a newborn infant, late-stage Alzheimer's victim, dolphin, or beloved family pet dog.

Would you classify a novel in the same "moral-status" tier as these four examples?

Comment author: Armok_GoB 06 November 2013 09:49:39PM 0 points [-]

No, that's much, much lower. As in, "torturing a novel for decades in order to give a tulpa a quick amusement would be a moral thing to do" lower.

Assuming you mean either a physical book, or the simulation of the average minor character in the author's mind, here. Main characters or RPing PCs can vary a lot in complexity of simulation from author to author, and one theory is that some effectively become tulpas.

Comment author: TheOtherDave 06 November 2013 10:11:45PM 1 point [-]

Your answer clarifies what I was trying to get at with my question but wasn't quite sure how to ask, thanks; my question was deeply muddled.

For my own part, treating a tulpa as having the moral status of an independent individual distinct from its creator seems unjustified. I would be reluctant to destroy one because it is the unique and likely-unreconstructable creative output of a human being, much like I would be reluctant to destroy a novel someone had written (as in, erase all copies of such that the novel itself no longer exists), but that's about as far as I go.

I didn't mean a physical copy of a novel, sorry that wasn't clear.

Yes, destroying all memory of a character someone played in an RPG and valued remembering I would class similarly.

But all of these are essentially property crimes, whose victim is the creator of the artwork (or more properly speaking the owner, though in most cases I can think of the roles are not really separable), not the work of art itself.

I have no idea what "torture a novel" even means, it strikes me as a category error on a par with "paint German blue" or "burn last Tuesday".

Comment author: ChristianKl 08 November 2013 04:05:18PM *  0 points [-]

What do you think about the moral status of torturing an uploaded human mind that's in silicon?

Does that mind have a different moral status than one in a brain?

Comment author: TheOtherDave 08 November 2013 04:10:45PM 2 points [-]

Certainly not by virtue of being implemented in silicon, no. Why do you ask?

Comment author: hylleddin 08 November 2013 01:40:30AM *  1 point [-]

As someone with personal experience with a tulpa, I agree with most of this.

I estimate its ontological status to be similar to that of a video game NPC, recurring dream character, or schizophrenic hallucination.

I agree with the last two, but I think a video game NPC has a different ontological status than any of those. I also believe that schizophrenic hallucinations and recurring dream characters (and tulpas) can probably cover a broad range of ontological possibilities, depending on how "well-realized" they are.

I estimate a well-developed tulpa's moral status to be similar to that of a newborn infant, late-stage Alzheimer's victim, dolphin, or beloved family pet dog.

I have no idea what a tulpa's moral status is, besides not less than a fictional character and not more than a typical human.

I estimate its power over reality to be similar to that of a human (with lower intelligence than their host) locked in a box and only able to communicate with one specific other human.

I would expect most of them to have about the same intelligence, rather than lower intelligence.

Comment author: Armok_GoB 08 November 2013 05:05:10PM 0 points [-]

You are probably counting more of the properties things can vary under as "ontological". I'm mostly going by software vs. hardware, needs-to-be-puppeteered vs. automatic, and able-to-interact-with-the-environment vs. stuck-in-a-simulation, here.

I'm basing the moral status largely on "well realized", "complex" and "technically sentient" here. You'll notice all my examples ALSO have the actual utility function multiplier at "unknown".

Most tulpas probably have almost exactly the same intelligence as their host, but not all of it stacks with the host's, and thus counts towards its power over reality.

Comment author: hylleddin 08 November 2013 10:43:03PM 1 point [-]

Most tulpas probably have almost exactly the same intelligence as their host, but not all of it stacks with the host's, and thus counts towards its power over reality.

Ah. I see what you mean. That makes sense.

Comment author: TheOtherDave 06 November 2013 03:03:26AM *  3 points [-]

Comment author: Joshua_Blaine 06 November 2013 01:54:38PM *  1 point [-]

I had not, actually. The link you've given just links me to Google's homepage, but I did just search LW for "Tulpa" and found it fine, so thanks regardless.

edit: The link's original purpose now works for me. I'm not sure what the problem was before, but it's gone now.

Comment author: IlyaShpitser 08 November 2013 03:48:56PM 2 points [-]

Well, if you think that the human illusion of unified agency is a good ideal to strive for, it then seems that messing around w/ tulpas is a bad thing. If you have really seriously abandoned that ideal (very few people I know have), then knock yourself out!

Comment author: Tenoke 06 November 2013 05:56:44PM 2 points [-]

I don't know any obvious reason not to

What is stopping me is the possibility that I would be permanently relinquishing cognitive resources for the sake of the Tulpa.

Comment author: Ishaan 09 November 2013 11:14:53PM *  1 point [-]

There's tons of easily discovered information on the web about it.

I'm not sure the Tulpa crowd would agree with this, but I think a non-esoteric example of Tulpas in everyday life is how some religious people say that God really speaks and appears to them. The "learning process" and such seem pretty similar - the only difference I can see is that in the case of Tulpas it is commonly acknowledged that the phenomenon is imaginary.

Come to think of it, that's probably a really good method for creating Tulpas quickly - building off a real or fictional character for whom you already have a relatively sophisticated mental model. It's probably also important that you are predisposed to take seriously the notion that this thing might actually be an agent which interacts with you... which might be why God works so well, and why the Tulpa crowd keeps insisting that Tulpas are "real" in the sense that they carry moral weight. It's an imagination-belief-driven phenomenon.

It might also illustrate some of the "dangers" - for example, some people who grew up with notions of the angry sort of God might always feel guilty about certain "sinful" things which they might not intellectually feel are bad.

I've also heard claims of people who gain extra abilities / parallel processing / "reminders" with Tulpas... basically, stuff that they couldn't do on their own. I don't really believe that this is possible, and if this were demonstrated to me I would need to update my model of the phenomenon. To the Tulpa community's credit, they seem willing to test the belief.

Comment author: Lumifer 06 November 2013 09:44:11PM *  0 points [-]

I don't know any obvious reason not to

A fairly obvious reason is that to generate a tulpa you need to screw up your mind in a sufficiently radical fashion. And once you do that, you may not be able to unfuck it back to normal.

I vaguely recall (sorry, no link) reading a post by a psychiatrist who said that creating tulpas is basically self-induced schizophrenia. I don't think schizophrenia is fun.

Comment author: Adele_L 07 November 2013 08:40:26PM 1 point [-]

A fairly obvious reason is that to generate a tulpa you need to screw up your mind in a sufficiently radical fashion. And once you do that, you may not be able to unfuck it back to normal.

This is a concern I share. However...

I vaguely recall (sorry, no link) reading a post by a psychiatrist who said that creating tulpas is basically self-induced schizophrenia. I don't think schizophrenia is fun.

This is the worst argument in the world.

Comment author: Vulture 08 November 2013 04:56:25AM *  0 points [-]

Welp, look at that, I just found this thread after finishing up a long comment on the subject in an older thread. Go figure. (By the way, I do recommend reading that entire discussion, which included some actual tulpas chiming in).

Comment author: CronoDAS 02 November 2013 11:59:18PM 6 points [-]

If I made a game in RPG Maker, would anyone actually play it?

::is trying to decide whether or not to attempt a long-term project with uncertain rewards::

Comment author: lmm 03 November 2013 01:10:48AM *  13 points [-]

Only if I heard particularly good things about it.

Most creative endeavors you could undertake have a very small chance of leading to external reward, even the validation of people reading/watching/playing them - there's simply too much content available these days for people to read yours. So I'd advise against making such a thing, unless you find making it to be rewarding enough in itself.

Comment author: CronoDAS 03 November 2013 02:06:14AM -2 points [-]

Would you have given Alicorn the same advice if she asked for it before writing "Luminosity"?

Comment author: lmm 03 November 2013 05:07:47PM 6 points [-]

Yes. Do you think I would have been wrong?

Comment author: ChristianKl 03 November 2013 07:56:09AM 4 points [-]

What do you hope to achieve? Making money through selling the game? Artistic expression? Pushing memes?

Comment author: CronoDAS 03 November 2013 03:59:35PM 5 points [-]

My underlying motivation is to feel better about myself. I feel that my life so far has lacked meaningful achievements. Pushing memes is a side benefit.

I do not expect to make money by selling the game, but if I do manage to make something that turns out to be pretty good, I think it would be a big help in getting a job in the video game industry.

Comment author: Protagoras 03 November 2013 12:45:52AM 1 point [-]

I've played several RPG maker games made by amateurs. Some of them seemed to have significant followings, though I wasn't interested enough to make a serious effort to estimate the numbers, since I wasn't the creator. What kind of game were you thinking of making?

Comment author: CronoDAS 03 November 2013 02:27:21AM 23 points [-]

I have a game I've been fantasizing about and I think I could make it work. It has to be a game, not a story, because I want to pull a kind of trick on the player. It's not that unusual in fiction for a character to start out on the side of the "bad guys", have a realization that his side is the one that's bad, and then go on to save the day. (James Cameron's Avatar is a recent example.) I want to start the player out on the side of bad guys that appear good, as in Eliezer's short story "The Sword of Good", and then give the player the opportunity to fail to realize that he's on the wrong side. There would be two main story branches: a default one, and one that the player can only get to by going "off-script", as it were, and not going along with what it seems like you have to do to continue the story. (At the end of the default path, the player would be shown a montage of the times he had the chance to do the right thing, but chose not to.)

The actual story would be something like the anti-Avatar; a technological civilization is encroaching on a region inhabited by magic-using, nature-spirit-worshiping nomads. The nature spirits are EVIL (think: "nature, red in tooth and claw") and resort to more and more drastic measures to try to hold back the technological civilization, in which people's lives are actually much better.

Does this sound appealing?

Comment author: Risto_Saarelma 03 November 2013 08:54:38AM 9 points [-]

That sounds fun, and something that'd actually translate nicely to the RPG Maker template. It's also something that takes skill to pull off well; you'll need to play with how the player will initially frame the stuff you show to be going on, and how the stuff should actually be interpreted. Not coming off as heavy-handed is going to be tricky. Also, pulling this off is based on knowing how to use the medium, so if this is the first RPG Maker thing you're going to be doing, it's going to be particularly challenging.

There might also be a disconnect between games and movies here. Movies tend to always go out of their way to portray the protagonist's side as good, while games have a lot more of just semi-symmetric opposing factions. You get to play as the kill-happy Zerg or Undead Horde, and nobody pretends you're siding with the noble savages against the inhuman oppressors. So the players might just go, "ooh, I'm the Zerg, cool!" or "I guess I'm supposed to defect from Zerg to Terran here".

Random other thoughts: Battlezone 2 has a similar plot twist requiring off-script player action, though both factions are high-tech. Dominions 4 has Asphodel, which is a neat corrupted nature-spirit faction. Though I'm guessing you're going for nature just being inherently bastards instead of the more common corrupted-nature-strikes-back trope.

Also, games really train people to stay on the script nowadays. Games that let you go rogue with an actual in-game-world action instead of choosing 'yes' on the blinking "DEFECT TO TERRAN SIDE" dialog are rare, since letting the player go off the script in-game and meaningfully interpreting their actions is really hard in the general case, and really frustrating for the player if they have to guess the particular special case where the off-script action actually opens a different plot branch instead of just leading nowhere like it did in the 10 previous levels. The original Deus Ex did have bits where you could mitigate the shit your early game actually evil employers were pulling with quick in-game thinking, but going over to the rebels was still always in the script.

So, overall, challenging project. You need to figure out RPG Maker and where to get the art assets and such, if you're not already skilled with it, you need to do worldbuilding for two worlds, and neither can be a cardboard cutout for the conceit to work, and you need to figure out how to make the game narration work so that the player can both get effectively tricked and has all the necessary pieces to put together the alternative choice during play.

Comment author: philh 03 November 2013 03:29:42PM *  5 points [-]

Also, games really train people to stay on the script nowadays.

When I played Zelda games, I would always work out what option I was supposed to take, then take the other one, confident that I would get to see a few extra lines of dialogue before being presented with the same option again.

(I say "always", but when I first played, I would carefully make the correct choice, for fear that something bad would happen if I didn't agree to help Zelda. I don't remember when I developed the opposite habit.)

Comment author: CronoDAS 03 November 2013 02:26:59PM *  1 point [-]

Yeah, it'll be hard. Right now I haven't worked out much more than the basic concept; I'd have a lot of writing to do, in addition to level design, learning RPG Maker, and so on.

As for art, RPG Maker does come with some built-in art and offers some more in expansion packs. If I have to, I can use placeholder art from the built-in assets and find some way to replace it once I'm happy with everything else.

Comment author: Risto_Saarelma 03 November 2013 03:02:06PM *  2 points [-]

Have you thought about how much time you are ready to put into the project? I'd ballpark the timescale for this as at least two years if you work on this alone, aren't becoming a full-time game developer and want to put a large-scale competent CRPG together.

EDIT: I'm guessing this would look like something like what Zeboyd Games puts out. They had a two-man team working full-time and took three months to make the short and simple Breath of Death. Didn't manage to find information on how long their more recent bigger games took to develop, but they seem to have released around one game a year since.

Comment author: CronoDAS 03 November 2013 04:50:06PM *  3 points [-]

Honestly, I'd probably start by trying to throw something much simpler together with RPG Maker, just to learn the system and see what it's like. And I don't actually have a "real job", so the amount of time I spend is mostly limited by my own patience.

And using RPG Maker might help speed up the technical work.

Comment author: Moss_Piglet 04 November 2013 03:31:47AM 7 points [-]

I like the idea, mainly because I spent most of Avatar rooting for Quaritch (easily the biggest badass in the last decade of cinema), but it seems like there's another way to do it that might have a bit more power:

Why not have them both be "right," according to their own value systems anyway, and then have the end-game slideshow in both branches tell the player the story of what they did from the perspective of the other side?

In terms of workload, it seems minimal; from a story perspective you already need both sides to have sympathetic and unsavory elements anyway, while from a design perspective all you need to add is a second set of narration captions for the slideshow contingent on which side the player supported.

And in terms of appeal, it certainly seems more engaging than most AAA games. Spec Ops: the Line proved that players are masochists and that throwing guilt trips at them is a great way to get sales and good reviews, while Mass Effect 3's failure shows that genuine choice in endings is pretty important for a game built on moral choice.

Comment author: CronoDAS 05 November 2013 03:19:43AM 1 point [-]

Why not have them both be "right," according to their own value systems anyway, and then have the end-game slideshow in both branches tell the player the story of what they did from the perspective of the other side?

This would ruin the point I'm trying to make.

Comment author: Viliam_Bur 05 November 2013 09:19:48AM 3 points [-]

You don't have to make both branches equivalent. Both of them could feel "right" from inside, but only one of them could contain information which makes the other one wrong.

In one ending, the hero only has limited information, and based on that limited information, the hero thinks they made the right choice. Sure, some things went wrong, but the hero considers that a necessary evil.

In the other ending, the hero has more information, and now it is obvious that this choice was right, and all the good feelings from the other branch are merely a product of missing information or reasoning.

This way, if you only saw the first ending, you would think it is the good one, but if you saw both of them, it would be obvious the second one is the good one.

Comment author: drethelin 06 November 2013 07:54:04PM 3 points [-]

I like this idea but it seems hard to differentiate between "You did what you thought was right but you need to be more careful about what you believe" and "you got the bad ending because you missed this little thing", which is something many games have done before.

An example is Iji, where the game plays out significantly differently if you make a moral decision not to kill, but if you take the default path it doesn't let you know you could've chosen to be peaceful the whole time. It involves an active decision rather than a secret thing you can miss, but it also doesn't frame it as a "MORAL CHOICE TIME GO".

Comment author: Armok_GoB 04 November 2013 02:19:04AM 4 points [-]

That sounds awesome... except now that I know about that twist, it's ruined. And if you publish it under a different name and don't reveal it, it won't sound awesome, so I'll never discover it.

The only way to do this justice would be nagging enough people to play it that they can insist that it's better than it sounds and someone should really play it for reasons they can't spoil.

Comment author: CronoDAS 05 November 2013 03:11:14AM *  1 point [-]

/me shrugs

For some reason, people still like games such as Bioshock and Spec Ops: The Line after knowing about their twists...

Comment author: passive_fist 03 November 2013 05:17:11AM 2 points [-]

It sounds very appealing to me, but as KaynanK pointed out, you have to be very careful about keeping the twist secret. To this end, I'd suggest not revealing to the players that they could have gone off-script, unless they do.

Comment author: KaynanK 03 November 2013 03:53:10AM 2 points [-]

It seems like an interesting story idea, but, of course, the twist can't be revealed to any prospective player without spoiling it, so it might seem cliched on the surface.

Comment author: Lumifer 04 November 2013 04:13:49AM 1 point [-]

Does this sound appealing?

Well, that's just the twist idea, but what's your framework? Are you thinking about first-person shooters (Deus Ex style, for example) or about tactical turn-based RPGs or about 2-D platformers or what?

Comment author: CronoDAS 05 November 2013 02:57:03AM 0 points [-]

RPG Maker, by default, makes games that look like SNES-era JRPGs.

Comment author: Viliam_Bur 03 November 2013 05:35:08PM *  15 points [-]

So I get home from a weekend trip and go directly to the HPMOR page. No new chapter yet. But there is a link to what seems to be a rationalist Death Note.

The way he saw it, the world was a pretty awful place. Corrupt politicians, cruel criminals, evil CEOs and even day-to-day evil acts made it that way, but everyday stupidity ensured it would stay like that. Nobody could make even a simple utility calculation. The only saving grace was that this was as true for the villains as for the heroes.

I am going to read it. Here are my next thoughts:

So, it seems like Eliezer succeeded in creating a whole new genre of literature: rationalist fiction. Nice job!

Wait, what?! Is "a story where the protagonist behaves rationally" really a new genre of literature? There is something horribly wrong with this world if this is true.

Discussing with my girlfriend which stories should be x-rationalized next, she suggests HPMOR. Someone should make an HPMOR fanfic where the protagonist is even more rational than the rational Harry. Would that lead to a spiral of ever more rational heroes?

What exactly could the MoreRational!Harry do? It would be pretty awesome if he could somehow deduce the existence of magic before he was contacted by Hogwarts. For example, he could start doing some research about his biological parents; after realizing they were killed, he could try to find out who the villain was, and gradually discover the existence of magic.

Only one problem: MoreRational!Voldemort would have killed MoreRational!Harry as a baby. Using a knife.

Comment author: [deleted] 03 November 2013 06:57:57PM *  21 points [-]

Discussing with my girlfriend which stories should be x-rationalized next, she suggests HPMOR. Someone should make an HPMOR fanfic where the protagonist is even more rational than the rational Harry.

An idea came to my mind. Would it be possible to make a story in which Harry is less intelligent, in the sense that he would score lower on an IQ test, for example, but at the same time more rational? HJPEV seems to be a highly intelligent prodigy even without the rationality addition. I would like to see how a more normal boy would do.

Comment author: lmm 05 November 2013 03:59:41PM 0 points [-]

One could argue that he appears intelligent only because he's spent his life so far learning effectively.

Comment author: gattsuru 05 November 2013 04:51:34PM 1 point [-]

Rationalist!Harry is calibrated to match the knowledge and recall of a 34-year-old autodidact. Even presuming a very friendly environment and that said 34-year-old autodidact's training was not optimal, I just don't think there's enough time.

I can buy a 10-year-old reading Ender's Game and The Lord of the Rings and maybe even Lensmen. It's a bit harder to imagine one that would consider wanting to want the math behind proving P=NP, never mind going further than that.

Comment author: CAE_Jones 05 November 2013 05:17:41PM 1 point [-]

I believe it's been stated somewhere that EY draws primarily on the skills he had around 18 and intentionally keeps things from beyond that out of Harry's reach. So Harry is more like a brilliant high school student than an adult (and, extra seven years worth of rationalist training aside, the way he approaches problems is a lot like a middle schooler with superpowers: "I can win, you can't, deal with it, 'cause I'm awesome and you know it." Which manages to annoy everyone in-universe and out.). Time isn't really a problem, either, if Harry has nothing else to occupy his time; exercise and social interaction are apparently not his thing, and he wound up out of the public school system after a few years, so he really does have way more time than most kids his age to read all the books. And he has that mysterious dark side and that sleeping disorder, whatever those contribute.

The other strangely adult-like children, however, are not so easily justified. (Draco gets most of those complaints, from what I've read.)

Comment author: lmm 05 November 2013 08:27:50PM 0 points [-]

I wanted the maths behind relativity and QM at age 10. And I wasted a lot of time in school.

Comment author: fubarobfusco 03 November 2013 06:20:53PM *  19 points [-]

Is "a story where the protagonist behaves rationally" really a new genre of literature?

I think what you are referring to here is "a story where the protagonist describes their actions and motivations using rationality terminology" or maybe "a story where the rational thinking of the protagonist motivates the plot or moves it along". At least some of the genre of detective fiction — early examples being Poe's Auguste Dupin stories — would be along these lines.

Stories where protagonists behave rationally (without using rationality terminology) wouldn't look like stories about rationality. They look like stories where protagonists do things that make sense.

Comment author: MathiasZaman 04 November 2013 09:44:49AM *  6 points [-]

Wait, what?! Is "a story where the protagonist behaves rationally" really a new genre of literature?

I think there's a difference between what I've been describing as rationalist!fic (or rationalist!fiction) and fiction in which the -agonists (PCs is the right terminology, I guess) are rational/clever. Rationalist!fic doesn't just feature rationalist characters; it's expressly written to teach the audience about rationality.

Examples:

  • Doctor Who features a sufficiently advanced alien who is, within the rules of the universe, pretty rational (in that he is good at reaching his goals). The message of the show, however, is not "be clever and rational"; it's "humanity is awesome and you should feel some wonder about the universe." Not rationalist!fic.
  • The Conqueror's Shadow, by Ari Marmell, features rationalist agonists, and the message the audience goes away with is "be clever and creative when it comes to reaching worthwhile goals." Rationalist!fic.

Comment author: ChrisHallquist 04 November 2013 01:10:37AM 9 points [-]

Wait, what?! Is "a story where the protagonist behaves rationally" really a new genre of literature? There is something horribly wrong with this world if this is true.

Yup. At least sort-of. If you haven't read Eliezer's old post Lawrence Watt-Evans's Fiction I recommend it. However, conspicuous failures of rationality in fiction may be mostly an issue with science fiction and fantasy. If you want to keep the characters in your cop story from looking like idiots, you can do research on real police methods, etc. and if you do it right, you have a decent shot at writing a story that real police officers will read without thinking your characters are idiots.

On the other hand, when an author is trying to invent an entire fictional universe, with futuristic technology and/or magic, it can be really hard to figure out what would constitute "smart behavior" in that universe. This may be partly because most authors aren't themselves geniuses, but even more importantly, the fictional universe, if it were real, would have millions of people trying to figure out how to make optimal use of the resources that exist in that universe. It's hard for one person, however smart, to compete with that.

For that matter, it's hard for one author to compete with an army of fans dissecting their work, looking for ways the characters could have been smarter.

Comment author: DanielLC 05 November 2013 04:56:18AM 3 points [-]

Erfworld is a piece of rationalist fiction not related to HP:MoR. It was discussed on here a while back. There must be others.

Also, I suggest calling it Rational!Rational!Harry.

Comment author: Ishaan 05 November 2013 01:18:12AM *  6 points [-]

which stories should be x-rationalized next

This leads to another comment on rationalist fiction: most of it seems to be restricted to fan fiction. The mold appears to be: "Let's take a story in which the characters underutilized their opportunities, and bestow them with intelligence, curiosity, common sense, creativity, and genre-awareness". The contrast between the fanfic and the canon is a major element of the story, and the canon is an existing scaffold which saves the writer from having to create a context.

This isn't a bad thing necessarily, just an observation.

Wait, what?! Is "a story where the protagonist behaves rationally" really a new genre of literature?

So, the question becomes, how do you recognize "rationalist" stories in non-fan-fic form? Is it simply the presence of show-your-work-smart characters? Is simply behaving rationally sufficient?

Every genre has a theme...romance, adventure, etc.

So where are the stories which are, fundamentally, about stuff like epistemology and moral philosophy?

Comment author: MathiasZaman 05 November 2013 09:55:32AM 1 point [-]

So, the question becomes, how do you recognize "rationalist" stories in non-fan-fic form? Is it simply the presence of show-your-work-smart characters? Is simply behaving rationally sufficient?

Every genre has a theme...romance, adventure, etc.

I'd say the difference between "rationalist" stories and "non-rationalist" stories lies in the moral of the story, in the lessons the story teaches you.

I don't think it's a genre in the same way romance or adventure are. It's more of a qualifier. You can have rationalist romance novels or rationalist adventure movies.

Although you could argue that it is a genre. Discussions about "genre" are often hard, though, since people don't tend to agree on what makes something a genre.

But rationalist fiction already has a couple of genre conventions, such as no-one being allowed to hold the idiot ball or teaching the audience new and useful techniques for overcoming challenges.

Comment author: Viliam_Bur 05 November 2013 09:28:02AM *  0 points [-]

So, the question becomes, how do you recognize "rationalist" stories in non-fan-fic form? Is it simply the presence of show-your-work-smart characters? Is simply behaving rationally sufficient?

That's a great question. (And related to how to recognize rational people in real life.)

I'd say that there must be some characters who are obviously smarter than most people around them. Because that's what happens in real life: there is the bell curve, so if all your characters are on a similar level, then either (a) the story is not realistic, (b) the characters are selected by some intelligence filter which should be explicitly mentioned, or (c) the characters are all from the middle of the bell curve. Also, in real life the relative power of intelligent people is often reduced by compartmentalization, but this reduction would be much smaller for a rationalist hero.

So I'd say it's behaving rationally while most other people aren't. The character should somehow reflect on the stupidity of others, whether through frustration at their inability to cooperate, or through enjoyment of how easily they are manipulated.

Comment author: Ishaan 05 November 2013 06:08:05PM *  2 points [-]

The character should somehow reflect on the stupidity of others, whether through frustration at their inability to cooperate, or through enjoyment of how easily they are manipulated.

I'm not sure I like that criterion. By that criterion alone, the original Death Note anime was rationalist fiction (judging by the first half), as are Artemis Fowl, Ender's Game, and to some extent even Game of Thrones. There are a lot of stories where some characters are much smarter than others and know it, but consuming these works won't teach anyone how to be smarter. (Other than the extent to which reading good fiction in general improves various things.)

None of these stories actually teach the reader anything about epistemology. Even the linked Death Note fan fic...it uses rationality-associated words like "utility" and "prior" but if I didn't already know what those words meant I would have just come away confused. (Granted, it's still early in the story - but even so)

Also, it hasn't yet broken the conceit of the story (For example, even a normal person of average intelligence would be surprised and curious about the existence of the supernatural, and would investigate that). I'd say that breaking the story conceit is another feature of rationalist fanfiction stories that has nothing to do with the character's intelligence.

Comment author: Viliam_Bur 05 November 2013 08:02:28PM 1 point [-]

Well, I was disappointed with the Death Note fan fic, because it doesn't seem to have added value beyond the original story. And I agree that exploring the supernatural should be a high priority for a rational person, once the supernatural is experimentally proven. Would it be so difficult to ask Ryuk whether there are additional magical items that could also be abused? I guess Ryuk would use an excuse of having "rules" against that, but at least it's worth trying.

Having a rational superhero is a necessary condition for a rationalist story, not a sufficient condition. Ender's Game could be rationalist literature if it explained Ender's reasoning better, and if Ender strategically tried to improve his understanding of the world. Okay, another necessary condition is not just that the superhero is super smart, but also that the super smartness is at least partially a result of a good strategy, which is shown to the reader.

Comment author: gattsuru 05 November 2013 05:19:03AM *  1 point [-]

In addition to the others already listed, DataPacRat's Myou've Got To Be Kidding Me follows the perspective of a character thrown into a setting who tries to analyze its basic rules in order to optimize them. There are some interesting concepts, but I don't know that I can recommend it: it has not been updated in over a year, and was part of some big conglomeration of fanfic writers of pretty widely varying quality (although thankfully nothing necessary to Myou've's plotline).

Comment author: maia 04 November 2013 02:02:43AM 0 points [-]

MoreRational!Voldemort would have killed MoreRational!Harry as a baby. Using a knife.

Fbzr sna gurbevrf ubyq gung Dhveeryzbeg vf hfvat Ibyqrzbeg nf n chccrg vqragvgl va beqre gb tnva cbjre. Fb Ibyqrzbeg'f erny tbny vfa'g gb xvyy Uneel; vg'f gb unir n qenzngvp fubjqbja gung trgf ybgf bs nggragvba naq fpnerf crbcyr.

Guhf gur snpg gung Ibyqrzbeg qvqa'g xvyy Uneel jvgu n xavsr vf abg orpnhfr ur'f abg engvbany rabhtu, ohg orpnhfr ur unf aba-boivbhf tbnyf.

Comment author: hyporational 04 November 2013 03:14:17PM *  1 point [-]

Wait, what?! Is "a story where the protagonist behaves rationally" really a new genre of literature? There is something horribly wrong with this world if this is true.

I get your sentiment, but I don't think this is true. Anyways, wouldn't this just mean that rational minds usually pursue other goals than writing fiction? Not saying that there shouldn't be rationalist fiction, but this doesn't sound like such a bad state of affairs to me.

I haven't read HPMOR. Do I have to know anything about the HP universe to enjoy this thing? Will I learn anything new if I've read the sequences?

Comment author: Viliam_Bur 04 November 2013 03:55:19PM *  3 points [-]

I guess you don't need to know anything from the HP canon. It could perhaps be even more interesting that way. I don't think you would learn new information. It might have a better emotional impact, but that is difficult to predict.

wouldn't this just mean that rational minds usually pursue other goals than writing fiction? Not saying that there shouldn't be rationalist fiction, but this doesn't sound like such a bad state of affairs to me.

I would consider the world better if there were more rational people sharing the same values as me. We could cooperate on mutual goals, and learn from each other.

Problem is, rational people don't just appear randomly in the world. Okay, sometimes they do, but the process is far from optimal. If there is a chance to make the spread of rationality more reliable, we should try.

But we don't exactly know how. We have tried many things, with partial success. For example, the school system -- it is great at taking an illiterate peasant population and producing an educated population within a century. But it has some limits: students learn to guess their teachers' passwords, there are not enough sufficiently skilled teachers, pressure from the outside world can bring religion into schools and prevent the teaching of evolution, etc. And the system seems difficult to improve from inside (been there, tried that).

Spreading rationality using fiction is another thing worth trying. There is a chance to attract a lot of people, make some of them more rational, and create a lot of utility. Or maybe despite there being dozens of rationalist fiction stories, they would all be read by the same people; unable to attract anyone outside of the chosen set. I don't know.

The point is, if you are rational and you think the world would be better with more rational people... it's one problem you can try to solve. So before Eliezer we had something like the Drake equation: how many people are rational × what fraction of them think making more people rational is the best action × what fraction of those think fiction is the best tool for that = almost zero. I am curious about the specific numbers; especially whether one of them is very close to zero, or whether it's merely a few small numbers that give an almost-zero result when multiplied together.
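
To make the shape of that product concrete, here is a minimal sketch in Python; all three fractions are made-up placeholders rather than estimates, and the only point is that a handful of smallish factors multiply down to almost nothing without any single factor being zero:

    # Placeholder numbers only -- illustrating the structure of the product above,
    # not estimating the real values.
    p_rational = 0.001        # assumed fraction of people who are rational
    p_spreading_best = 0.01   # assumed fraction of those who think spreading rationality is the best action
    p_fiction_best = 0.01     # assumed fraction of those who think fiction is the best tool for that

    p_writes_rationalist_fiction = p_rational * p_spreading_best * p_fiction_best
    print(p_writes_rationalist_fiction)  # about 1e-07: "almost zero" even though no single factor is zero

Swap any one factor for something near zero and the product collapses just as hard, which is why the result alone doesn't tell you which of the two cases you are in.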

Comment author: hyporational 06 November 2013 02:38:11AM 1 point [-]

I'd probably want more people who share my values than more rational people. Rational people who share my values would be even better. Rational people who don't share my values would be the worst outcome.

I don't think the school system was built by rationalists, so I'm not sure where you were going with that example.

How effective has fiction been in spreading other ideas compared to other methods?

Comment author: ChristianKl 04 November 2013 12:25:39PM 1 point [-]

Only one problem: MoreRational!Voldemort would have killed MoreRational!Harry as a baby. Using a knife.

Given that the spell never failed in the past, I'm not sure that it would have been rational to use a knife.

Comment author: NancyLebovitz 05 November 2013 11:10:39PM 14 points [-]

New ligament discovered in the human knee as a result of surgeons trying to figure out why some people didn't recover fully after knee injuries.

I'm tempted to deduce "Keep paying attention, you never know what might have been missed"-- I really would have expected that all the ligaments had been discovered a long time ago.

Another conclusion might be "Try to solve real problems, you're more likely to find out something new that way than by just poking around."

Comment author: [deleted] 07 November 2013 01:31:33AM 7 points [-]

Does someone have the medical knowledge to explain how this is possible? My layperson guess is that once you cut up a knee, you can more or less see all the macroscopic structures. Did they just think it was unimportant?

Comment author: NancyLebovitz 07 November 2013 01:24:57PM 3 points [-]

My layperson guess is that once you're told what to expect to see, you stop looking.

This makes Eliezer's weirdtopia idea of science being kept secret so as not to spoil people's fun of discovery more interesting -- it's not just that people would independently discover the same things (and I wonder what the protocol for sharing information would be); given enough time and intelligence, much more might get discovered.

Comment author: NancyLebovitz 07 November 2013 02:03:22PM 1 point [-]

Someone who seemed a bit better informed said:

Could be a few things - looks like part of one of the other ligaments, is usually damaged doing a 'standard' dissection, plain old 'you see what you think you should see' bias, some combo of all of the above...

And that comment is answered by:

Medicine needs more Masters and PhD students. I'm sure if they had as many students studying the body in extreme detail, like the eleventy billion English majors who write thesis/dissertations on say, Shakespeare, this would've been hammered out decades ago. XD

Which is interesting-- sometimes studying things in extreme detail "just because" (probably because the object of study has high status-- consider early observations of the planets) can pay off big.

Comment author: Vaniver 07 November 2013 05:39:20PM 1 point [-]

The "new ligament discovered" angle gets less impressive (to me, at least) when I read this part:

Their starting point: an 1879 article by a French surgeon that postulated the existence of an additional ligament located on the anterior of the human knee.

Comment author: gwern 07 November 2013 08:06:06PM 6 points [-]

I'm more impressed, actually, in terms of the unevenness of progress - it took ~134 years to confirm his postulate? It's not like corpses were unavailable for dissection in 1879.

Comment author: Douglas_Knight 09 November 2013 03:25:47PM 1 point [-]

It inspires more awe at our collective failures, but it suggests that we should not be so impressed with the new researchers, as if they had a method that would assure us we haven't missed even more ligaments.

Comment author: Manfred 08 November 2013 04:48:18AM *  6 points [-]

The media giveth sensationalism, and the media taketh away.

reddit - "So that "new" ligament? Here's a study from 2011 that shows the same thing. It's not even close to a new development and has been seen many times over the past 100 years." Summary quote: "The significance of the Belgian paper was to link [the ligament's] functionality to what they called "pivot shift", and knee reinjuries after ACL surgery. The significance of this paper, I believe, is that in the near future surgeons performing these operations will have an additional ligament to inspect and possibly repair during ACL surgery, which will hopefully reduce recurrence rates, and likely the rates of developing osteoarthritis in the injured knee down the line."

Comment author: NancyLebovitz 08 November 2013 12:23:37PM 0 points [-]

sigh

Comment author: ChrisHallquist 04 November 2013 01:45:01AM 14 points [-]

Can someone explain nanotech enthusiasm to me? Like, I get that nanotech is one of the sci-fi technologies that's actually physics-compliant, and furthermore it should be possible because biology.

But I get the impression that among transhumanist types slightly older than me, there's a widespread expectation that it will lead to absolutely magical things on the scale of decades, and I don't get where that comes from, even after picking up Engines of Creation.

I'm thinking of, e.g. Eliezer talking about how he wanted to design nanotechnology before he got into AI, or how he casually mentions nanotechnology as being one of the big ways a super-intelligent AI could take over the world. I always feel totally mystified when I come across something like that, like it's a major gulf between me and slightly older nerds.

Comment author: Armok_GoB 04 November 2013 02:43:22AM 6 points [-]

Trying for minimal technicalities: there are at least 3 different technologies, with little surface-level similarity in use, that get referred to as "nanotech".

Assemblers: basically 3d printers, but way more flexible and able to make things like food, robots, or more assemblers.

Materials: diamondoids, buckytubes, circuitry. We already have some of these really, it's just that we'd get more kinds of them, and they'd be really cheap to make with a nanotech assembler. Stronger, faster, more powerful versions of what modern tech already can do.

Nanobots, particularly medical: basically able to do all the things living cells can do, but better, plus most of the things machines can do, and commandable in exact detail. There are also a number of different ways they could grant immortality -- enough of them that they are almost sure to do so even if most end up not working out.

Now you can ask questions about each one of these in order, with more specifics.

Comment author: ChrisHallquist 04 November 2013 03:09:56AM 1 point [-]

The question is about all these technologies - though it's about 2 mainly insofar as 2 is an extension of 1.

So the question is why expect any of these technologies to mature on a timescale of decades?

(Or, assuming FOOM, why assume they'd be relatively low-hanging fruit for a FOOMing AI, such that "trick humans into building me nano assemblers" is a prime strategy for a boxed AI to escape?)

Comment author: Armok_GoB 04 November 2013 05:52:21AM 3 points [-]

As I said, 2 is already here, and it's gradually becoming more so.

For 3, we have a proof of concept to rip off: biological cells. Those also happen to have a specialized assembler in them already: the ribosome. And we can already print instructions for it. There's only one problem left, and that's the protein folding problem. The protein folding problem is seeing fairly rapid progress on the software side, and even if that were to stall, it won't be all that long before we can simply brute force it with computing power. Now, the other kinds of nanobots are less clear.

The assembler (1) is trickier; however, Drexler already sorta made a blueprint for one I think, and 3 will help a great deal with it as well.

For the fooming, it's number 3, and ways to use it. As I said, we already have the hardware, and things like the protein folding problem are exactly what an AI would be great at. Once it has solved that, it has full control over biology and can essentially make The Thing and/or a literal mind control virus, and take over that way.

Comment author: DanielLC 05 November 2013 05:00:53AM 2 points [-]

I'm not sure protein folding can be brute forced without quantum computers. There are too many ways for it to fold. In real life, I'm pretty sure quantum tunneling gets involved. Simulations have worked, but I think there's a limit to that.
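A back-of-the-envelope Levinthal-style count suggests the same thing (the numbers below -- conformations per residue, protein length, and checking speed -- are assumptions for illustration only):

    # Rough estimate of a naive brute-force folding search; illustrative numbers only.
    conformations_per_residue = 3        # assumed coarse discretization of backbone angles
    residues = 150                       # a smallish protein
    checks_per_second = 1e12             # a generously fast assumed evaluator

    total_conformations = conformations_per_residue ** residues
    years = total_conformations / checks_per_second / (3600 * 24 * 365)
    print(f"{total_conformations:.1e} conformations, ~{years:.1e} years to enumerate")
    # ~3.7e+71 conformations, ~1.2e+52 years -- real proteins fold in milliseconds,
    # so neither nature nor any brute-force search is enumerating the whole space.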

Comment author: ChrisHallquist 04 November 2013 04:19:03PM 1 point [-]

There's only one problem left, and that's the protein folding problem. The protein folding problem is seeing fairly rapid progress on the software side, and even if that were to stall, it won't be all that long before we can simply brute force it with computing power.

Okay, so one sub-piece of puzzlement I have is why talk of protein folding as a problem that is either solved or unsolved - as if we (or more frighteningly, an AI) could suddenly go from barely being able to do it to 100% capable.

I was also under the impression that protein folding was mathematically horrible in a way that makes it unlikely to be brute forced any time soon, though I just now realized that I may have been thinking of the general problem of predicting chemistry from physics; maybe protein folding is much easier.

Comment author: Douglas_Knight 04 November 2013 10:55:41PM 7 points [-]

Predicting chemistry from physics should be easy with a quantum computer, but appears hard with a classical computer. Often people say that even once you make a classical approximation, i.e., assume that the dynamics are easy on a classical computer, the problem of finding the minimum energy state of a protein is NP-hard. That's true, but a red herring, since the protein isn't magically going to know the minimum energy state. Though it's still possible that there's some catalyst to push it into the right state, so simulating the dynamics in a vacuum won't get you the right answer (cf. prions). Anyhow, there's some hope that evolution has found a good toolbox for designing proteins, and that if we can figure out the abstractions that evolution is using, it will all become easy. In particular, there are building blocks like the alpha helix. Certainly an engineer, whether evolution or us, doesn't need to understand every protein, just know how to make enough.

I think the possibility that a sufficiently smart AI would quickly find an adequate toolbox for designing proteins is quite plausible. I don't know what Eliezer means, but the possibility seems to me adequate for his arguments.

Comment author: Eliezer_Yudkowsky 05 November 2013 07:55:01AM 5 points [-]

Try Nanosystems perhaps.

Comment author: Cyan 08 November 2013 02:32:00AM *  3 points [-]

An analogy might help give a sense of scale here. This isn't an argument, but it hints at the scope of the unknown unknowns in nanotech space. Here on our macroscopic scale, some wonders wrought by evolution include the smasher mantis shrimp's kinetic attack, a bee hive's eusocial organization, the peregrine falcon's flight speed, and the eagle's visual system. But evolution is literally mindless -- by actually knowing how to do things, human engineering created electromagnetic railguns, networks of international trade, the SR-71 Blackbird, and the Hubble telescope. Now apply that kind of thinking to "because biology" on the nano scale...

Comment author: mwengler 07 November 2013 10:02:32PM 2 points [-]

Consider a machine as smart as a cellphone but the size of a blood cell.

In a sense, a protein or a drug is a smart molecule. It keys in to a very limited number of things and ignores the rest. There are many different smart proteins or smart molecules to be used as drugs with many different purposes. Even so, chemotherapy, for example, is primarily about ALMOST killing everything while differentially being a bit more toxic to the cancer cells.

Now increase the intelligence of the smartest molecule 10-fold, 100-fold, 1000-fold. Perhaps you give it the ability for simple two-way communication with the outside world. If its intelligence is increased, there should be MANY ways to allow it to distinguish a tumor, a micro-tumor, or a cancerous cell from all the good things in your body. All of a sudden, the differential toxicity of "chemo" therapy (now nanotherapy) will be 10, 100X as high as it is for smart molecules.

Now consider these smart little machines doing surgery. Inoperable tumor? Not inoperable for a host of machines the size of blood cells that will literally be able to operate on the most remote of tumors from inside them.

Tendency towards obesity? How hard will it be to have a system of nanites that screw with your metabolism in such a way as to eliminate the stored fat in cells until told, or until they measure, that we are down to a good level?

These are just a few stories from medicine. I expect anybody who does not wish to get sick and die would be enthusiastic about these, but YMMV.

Comment author: Lumifer 04 November 2013 04:30:36AM 1 point [-]

and I don't get where that comes from, even after picking up Engines of Creation.

Probably comes from Neal Stephenson's The Diamond Age: Or, A Young Lady's Illustrated Primer :-)

Comment author: CellBioGuy 05 November 2013 05:16:56AM *  9 points [-]

I definitely have found that this forum is NOT immune to fictional evidence.

Comment author: Douglas_Knight 05 November 2013 07:55:17PM 2 points [-]

I'm pretty sure that the people Chris is talking about are Stephenson's source, not vice versa.

Comment author: passive_fist 22 November 2013 07:22:10PM *  0 points [-]

Perhaps the reason is that the ideas we're used to nowadays - like reconfiguring matter to make dirt and water into food or repairing microcellular damage (for example, to selectively destroy cancer tumors) - were absolutely radical and totally unheard of when they were first proposed. As far as I know, Feynman was the first to seriously suggest that such a thing was possible, and most reactions to him at the time were basically either confusion, disbelief, or dismissal. Consider the average technologist in 1950. Hand-wound computer memories were state of the art, no one knew what DNA looked like, famines seemed a natural part of the order of things, and as far as everyone knew, the only major technological difference between the present and the future was maybe going to be space travel. Now someone comes along and tells you that there could be this new technology that allows you to store the Library of Congress in the head of a pin and carry out any chemical reaction just by writing down the formula - including the chemical reactions of life. The consequences would be, for instance, the ability to feed everyone on the planet basically for free. To you, such a technology would seem "indistinguishable from magic." Would it be a dramatic inferential step to then say that it could do stuff that literally is magic?

Nanotechnology never promised magic, of course. All it promises is the ability to rearrange atoms into a subset of those structures allowed by physics (a subset far larger than what our current technology can reach, but a subset nonetheless). It promises nothing more, nothing less. This is in itself dramatic enough, and it would allow all sorts of things that we probably couldn't imagine today.

Comment author: Omid 06 November 2013 04:31:52PM 4 points [-]

Has anyone else had this happen to them?

  • You got into an argument with a coworker (or someone else you see regularly). You had a bitter falling out.
  • You were required to be around them again (maybe due to work, or whatever). You make awkward small-talk but it's still clear you hate each other.
  • You continue to make awkward small talk anyway, pretending that it doesn't make you uncomfortable.
  • Your enemy reciprocates. The two of you begin to climb the intimate conversations ladder.
  • Both of you act like friends. But, at least from your end, it's not clear if you really are friends. Neither one of you has apologized, nor have you agreed to disagree, or really made any commitment to end hostility. You have no idea whether your enemy has moved on from your fight, and is ready to resume friendship; or if they're simply carrying on a charade of friendship like you.
  • Conversations with this person become really awkward, as you're not sure whether to engage the "enemy-whom-I-treat-like-a-friend-just-to-act-civilized" protocol or the "real friend" protocol.

Any advice? Am I the only one that's experienced this?

Comment author: niceguyanon 06 November 2013 10:18:06PM *  1 point [-]

It looks like you have an unspoken treaty of non-hostility. People don't just forget those kinds of things; you didn't. My advice is to make good with the person and acknowledge your prior differences; it will be less awkward going forward and you would gain his/her respect. And who knows, they might even gain your respect. Friends are, for the most part, better than enemies.

Comment author: TheOtherDave 06 November 2013 05:14:58PM 1 point [-]

I've experienced variation on the theme.
My usual approach is to decide whether I value treating them as an enemy for some reason. If I do, then I continue to do so (which can include pretending to treat them like a friend, depending on the situation). If I don't, then I move on. Whether they've actually moved on or not is their problem.

Comment author: ChristianKl 08 November 2013 04:57:52PM *  0 points [-]

I generally don't think it makes much sense to label other people as enemies.

Comment author: bramflakes 02 November 2013 07:22:56PM 4 points [-]

How do I decrease my time-preference?

Comment author: [deleted] 03 November 2013 01:01:45AM 11 points [-]

Read about hyperbolic discounting, if you haven't already.

Assuming a conflict between short- and long-term decisions, the general advice is to mentally bundle a given short-term decision with all similar decisions that will occur in the future. For example, you might think of an unhealthy snack tonight as "representing" the decision to eat an unhealthy snack every night.
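A minimal sketch with invented numbers (the payoffs, delays, and discount rate k below are assumptions, not anything from the literature) of why bundling can flip the choice under hyperbolic discounting, where a reward A at delay D is valued at roughly A/(1+kD):

    # Hyperbolic discounting sketch; every number here is made up for illustration.
    def discounted(amount, delay_days, k=0.1):
        return amount / (1 + k * delay_days)

    snack_value, health_value, health_delay = 1.0, 3.0, 100   # assumed payoffs and delay

    # A single decision, judged tonight: the immediate snack wins.
    print(discounted(snack_value, 0), discounted(health_value, health_delay))   # 1.0 vs ~0.27

    # Bundled: the same choice repeated every night for a year, all judged from today.
    snacks = sum(discounted(snack_value, d) for d in range(365))
    health = sum(discounted(health_value, d + health_delay) for d in range(365))
    print(round(snacks, 1), round(health, 1))   # ~37 vs ~44 -- the bundle favors the healthy option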

Comment author: hyporational 03 November 2013 05:55:39AM *  6 points [-]

Optimize your environment for decreased time-preference when you have the most control:

Fill your refrigerator when you're not hungry. Apply effortful-to-dismantle restrictions on your computer when you're not bored and tired. Walk to the university library to study so it takes effort to come back home to your hobbies.

I'd like to read and collect other similar strategies for my toolbox.

ETA: I just realized I do this for exercise too. There's a lake near my house with a circumference of about 6 kilometers, and I go jogging around it frequently. I have a strong desire to quit once I've gotten to the other side, but I have no choice but to run the whole route at that point. Sometimes I decide to walk the other half, but I guess it's better than nothing. Another option would be to just run in one direction and then back, but I find the idea too boring, even if I change my route a bit.

Comment author: Metus 03 November 2013 12:13:52AM *  8 points [-]

Am I the only one who is bothered that these threads don't start on Monday anymore?

Posting a request from a past open thread again: does anyone have a table of probabilities for major (negative) life events, like divorce or being in a car accident? I ask this to have a priority list of events to be prepared for, either physically or mentally.

Comment author: hyporational 03 November 2013 07:43:22AM *  3 points [-]

The lifetime risk of developing cancer is 44 % in males and 38 % in females. The lifetime risk of dying from cancer is 23 % in males and 19 % in females. It's worth mentioning that the methods for gathering medical mortality statistics are pretty biased, if not completely bonkers.

Comment author: Lumifer 04 November 2013 04:09:20AM 1 point [-]

methods for gathering medical mortality statistics are pretty biased, if not completely bonkers.

Would you be willing to expand on this?

Comment author: hyporational 04 November 2013 05:32:23AM *  9 points [-]

ETA: Apparently a new WHO recommendation for filling out death certificates was introduced in 2005-2006, and this caused a significant drop in recorded pneumonia mortality in Finland.

I'm not entirely sure if it works this way in the whole EU, but it probably does. It's more complicated than what I explain below, but it's the big picture that matters.

The most common way to record mortality statistics is that the doctor who was treating the patient fills out a death certificate. There are three types of causes of death that can be recorded on a death certificate: immediate causes, underlying causes, and intermediate causes -- though nobody really cares about the intermediate ones because recording them is optional. The statistics department in Finland is interested in recording only the underlying causes of death, and that's what gets published as mortality statistics. Only one cause of death per patient gets recorded.

If someone with advanced cancer gets pneumonia and dies, a doctor fills out the death certificate saying that the underlying cause of death was cancer and the immediate cause of death was pneumonia. Cancer gets recorded as the one and only cause of death by the statistics department. Depending on the patient, possible underlying causes of death could also be alcoholism, coronary heart disease, or Alzheimer's disease, or whatever is accepted by a department that checks these certificates.

The doctor's opinion of whether it was the pneumonia or the chronic disease that killed the patient doesn't really matter. If he enters pneumonia as the underlying cause of death as well, he gets a scolding letter and has to redo the certificate until he gets it right.

What if the patient has several chronic diseases that could have been underlying causes of death? Well, you only get to pick one, and only that one gets recorded as the cause of death. You can list the other diseases too as contributory causes of death, but this doesn't really affect any statistics. I guess it would be less biased to flip a coin or something, but I think most doctors just pick something fitting.

A colleague of mine once tried to record pneumonia as the underlying cause of death; the patient was an alcoholic (not sure how bad it was). He got a letter saying he should fix the certificate and that people in developed countries don't die of pneumonia anymore. Wonder why that is...
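A minimal sketch of what this means for the published numbers (the field names and tallying below are my own illustration, not any registry's actual schema): however many conditions the doctor writes down, only the single underlying cause gets counted.

    # Toy illustration of the single-underlying-cause rule; field names are assumed.
    certificate = {
        "immediate_cause": "pneumonia",
        "underlying_cause": "lung cancer",                  # the only field the statistics use
        "contributory_causes": ["alcoholism", "coronary heart disease"],  # optional, mostly ignored
    }

    published_statistics = {}
    cause = certificate["underlying_cause"]
    published_statistics[cause] = published_statistics.get(cause, 0) + 1
    print(published_statistics)   # {'lung cancer': 1} -- the pneumonia and everything else vanish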

Comment author: gsgs 05 November 2013 03:21:38PM 3 points [-]

In the USA they can fill in 20 secondary causes on the death certificates, and all the anonymized death certificates since 1959 are available online from NCHS in computer-readable form to check/search for conditions. Irregularities usually appear when there is a switch from one ICD code to a new one, so in 1969, 1979, 1999. Other irregularities are often checked, compared with other states, countries, and conditions, and the reason discovered.

Comment author: hyporational 05 November 2013 04:35:30PM *  2 points [-]

What if the patient has several chronic diseases that could have been underlying causes of death? Well, you only get to pick one, and only that one gets recorded as the cause of death. You can list the other diseases too, but not as causes of death.

It seems I miscommunicated here. What I meant to say is that listing these other diseases has no meaningful impact on the mortality statistics, although technically speaking they are causes of death. If the point is to gather accurate statistics, listing them feels like a consolation prize, because statisticians don't seem to be interested in them.

In Finland a direct translation for these would be "contributory causes of death". That's probably the same thing as secondary causes of death. The problem is, it's difficult for someone who makes these into statistics to know how important they were. Almost anything the patient has can be listed as a contributory cause of death.

An even bigger problem is that listing them is completely optional. If almost nobody fills them in properly (because they usually have better things to do), that is another good reason for a statistician not to use them.

Is filling in the secondary causes mandatory in the US? Are there clear restrictions on what can be listed? If not, I'm not sure they provide all that useful information, statistically speaking. Are they really used in a meaningful way in any statistics?

Irregularities usually appear when there is a switch from one ICD code to a new one, so in 1969, 1979, 1999.

I suppose the WHO recommendations for filling out these certificates affect the US too.

Comment author: Lumifer 04 November 2013 05:07:41PM 2 points [-]

Very interesting, thank you.

I have a pet interest -- carefully looking at how standard, universally-accepted, real-life, empirical data is collected and produced and whether it actually represents what everyone blindly assumes it does. In the field of economics, for example, closely examining how, say, the GDP or the inflation numbers are calculated is... illuminating.

Comment author: NancyLebovitz 04 November 2013 09:53:42PM 2 points [-]

closely examining how, say, the GDP or the inflation numbers are calculated is... illuminating.

Details?

Comment author: Lumifer 05 November 2013 12:12:50AM 1 point [-]

The problem is that the problems aren't summarizable in a neat half-page list. And it's not like the calculations are wrong; rather, they are right under a certain set of assumptions and boundary conditions -- and the issue is that people forget about these assumptions and conditions and just assume they're right unconditionally.

For an introduction take a look at e.g. Shadowstats. I don't necessarily agree with everything there, but it's a useful starting point.

Comment author: NancyLebovitz 05 November 2013 12:58:52AM 0 points [-]

Thanks.

I twitch when changes in GDP are reported to a tenth of a percent -- it seems to me that they couldn't be measured with such precision. Do you think I'm being reasonable?

Comment author: ahbwramc 07 November 2013 05:02:27PM 3 points [-]

My own (uninformed) intuition is that GDP changes would be much more accurate than absolute GDP values, just because systematic errors could largely cancel out.
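A toy numeric check of that intuition (all numbers invented): a constant proportional bias in the measured level cancels exactly when you compute a growth rate, though independent measurement noise in each level does not.

    # Invented numbers: a fixed 5% overcount in the level drops out of the growth rate.
    true_gdp = [1000.0, 1020.0]                 # true levels, 2% real growth
    bias = 1.05                                 # assumed constant proportional mismeasurement
    measured = [x * bias for x in true_gdp]

    true_growth = true_gdp[1] / true_gdp[0] - 1
    measured_growth = measured[1] / measured[0] - 1
    print(true_growth, measured_growth)         # both ~0.02 -- the constant bias cancels
    # Independent noise of ~0.5% in each level still leaves roughly 0.7% wobble in the
    # growth estimate, which is one reason to be wary of changes quoted to a tenth of a percent.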

Comment author: hyporational 06 November 2013 03:56:53AM 3 points [-]

Was some change made in the LW code in the past couple of weeks or so? I can't browse this site with my Android smartphone anymore; I've tried several browsers. The site either frequently freezes the browser or shows a blank page after the page has finished loading. This happens more with bigger threads.

Anyone else having this problem?

Comment author: Douglas_Knight 06 November 2013 05:45:34PM 1 point [-]

I have the same problem for pages like recent posts, which look OK at first, but then become blank. Article pages are more likely to load correctly. Solution: turn off javascript. (Android 2.2)

Comment author: hyporational 07 November 2013 02:03:07AM *  0 points [-]

Thanks. This obviously disables a lot of functionality. Another fix I found for the blank page problem is to simply interrupt the loading of the page once you start seeing stuff.

Comment author: lukeprog 04 November 2013 05:39:45PM 9 points [-]

Brian Leiter shared an amusing quip from Alex Rosenberg:

So, the... Nobel Prize for “economic science” gets awarded to a guy who says markets are efficient and there are no bubbles—Eugene Fama (“I don’t know what a credit bubble means. I don’t even know what a bubble means. These words have become popular. I don’t think they have any meaning”—New Yorker, 2010), along with another economist—Robert Shiller, who says that markets are pretty much nothing but bubbles, “Most of the action in the aggregate stock market is bubbles.” (NY Times, October 19, 2013) Imagine the parallel in physics or chemistry or biology—the prize is split between Einstein and Bohr for their disagreement about whether quantum mechanics is complete, or Pauling and Crick for their dispute about whether the gene is a double helix or a triple, or between Gould and Dawkins for their rejection of one another’s views about the units of selection. In these disciplines Nobel Prizes are given to reward a scientist who has established something every one else can bank on. In economics, “Not so much.” This wasn’t the first time they gave the award to an economist who says one thing and another one who asserts its direct denial. Cf. Myrdal and Hayek in 1974. What’s really going on here? Well, Shiller gave the game away in a NY Times interview when he said of Fama, “It’s like having a friend who is a devout believer of another religion.” Actually it’s probably two denominations in the same religion.

Comment author: badger 04 November 2013 09:33:19PM 14 points [-]

Ugh. The prize was first and foremost in recognition of Fama, Shiller, and Hansen's empiricism in finance. In the sixties, Fama proposed a model of efficient markets, and it held up to testing. Later, Fama, Shiller, and Hansen all showed that it didn't hold up to further tests. Their mutual conclusion: the efficient market hypothesis is mostly right, and while there is no short-term predictability based on publicly available information, there is some long-term predictability. Since the result is fairly messy, Fama and Shiller have differences about what they emphasize (and are both over-rhetorical in their emphasis). Does "mostly right" mean false or basically true?

What's causing the remaining lack of agreement, especially over bubbles? Lack of data. Shiller thinks bubbles exist, but are rare enough he can't solidly establish them, while Fama is unconvinced. Fama and Shiller have done path-breaking scientific work, even if the story about asset price fluctuation isn't 100% settled.

Comment author: mwengler 07 November 2013 09:54:02PM 0 points [-]

Does "mostly right" mean false or basically true?

Mostly right means false. The hypothesis that securities markets are pretty darn efficient -- and that everybody goes through a broad range of ideas about inefficiencies that turn out not to be "real" (or exploitable) -- is, I think, virtually uncontested by anyone. Including uncontested by people who think there was a tech bubble in the late 1990s and a housing bubble in the mid-00s.

I heard Fama interviewed after he got the prize. He denies that the internet bubble and the housing bubble were bubbles, in the sense that they were knowable enough to be acted upon. In particular, he claims that anybody who detects the internet bubble and/or the housing bubble will also detect a bunch of non-bubbles, such that any action they take to make money off their knowledge of the real bubbles will be (at least) completely negated by what they lose when exploiting the unreal ones.

Efficient Market Hypothesis denies knowable bubbles, at least according to Fama interviewed within the last month.

Comment author: [deleted] 03 November 2013 06:39:45PM *  9 points [-]

SPOILERS FOR "FRIENDSHIP IS OPTIMAL"

Why is 'Friendship is optimal' "dark" and "creepy"? I've read many people refer to it that way. The only things that are clearly bad are the killings of all the other lifeforms, but otherwise this scenario is one of the best that humanity could come across. It's not perfect, but it's good enough and much better than the world we have today. I'm not sure if it's realistic to ask for more. Considering how likely it is that humanity will end in some incredibly fucked up way full of suffering, I would definitely defend this kind of utopia.

Comment author: Leonhart 03 November 2013 09:29:27PM *  11 points [-]

(Comment cosmetically edited in response to Kaj_Sotala, and again to replace a chunk of text that fell in a hole somewhere)

OK, I'll have a go (will be incomplete).

People in general will find the Optimalverse unpleasant for a lot of reasons I'll ignore: major changes to the status quo, perceived incompatibility with non-reductionist worldviews, believing that a utopia is necessarily unpleasant or Omelas-like (a variant of this fallacy?), and lots of even messier things.

People on LessWrong may be thinking about portions of the Fun Theory Sequence that the Optimalverse conflicts with, and in some cases they may think that these conflicts destroy all of the value of the future, hence horror.

(rot13 some bits that might constitute spoilers)

  • Humans want things to go well, but they also want things to have been able to go badly, such that they made the difference. Relevant: Living By Your Own Strength, Free to Optimize.

  • The existence of a superintelligence makes human involvement superfluous, and humans do not want this to happen. Relevant: Amputation of Destiny.

  • Gur snpg gung gur NV vf pbafgenvarq gb fngvfsl uhzna inyhrf gur cbal jnl zrnaf gung n uhtr nzbhag bs cbffvoyr uhzna rkcrevrapr vf abj vzcbffvoyr gb rire ernyvfr. Eryrinag: Hzz... znlor Value is Fragile? Abg dhvgr. Uryc zr bhg urer, thlf! (nyfb, vafreg lbhe bja cersreerq snaqbz wbxr nobhg cbbe Ylen arire trggvat gb unir unaqf rgp.)

  • Nf lbh zragvbarq, gur jnl va juvpu gur NV'f cnegvphyne qrsvavgvba bs "uhzna" jnf abg evtug naq pna arire or zbqvsvrq, urapr nyvra ncbpnylcfrf. Eryrinag: The Hidden Complexity of Wishes

Themes that are more explicit after the extra worldbuilding in Caelum est Conterrens:

  • Zbqvslvat uhzna zvaqf va gur jnl gur hcybnqf ner qrfpevorq nf orvat zbqvsvrq vf ernyyl, ernyyl, ernyyl, ernyyl uneq, naq zvtug or vzcbffvoyr jvgubhg oernxvat crefbany pbagvahvgl Growing Up is Hard. (Guvf vf zber bs n ubeebe fbhepr guna na nethzrag, orpnhfr gur fgbel pna or ernq nf fgvchyngvat gung gur NV vf trggvat vg evtug).

Gjb cbffvoyr svany nggenpgbef sbe uhzna tebjgu ner cerfragrq (Ybbc naq Enl Vzzbegnyf):

  1. ybbcvat raqyrffyl jvgu zrzbel biresybj (gung vf, va gur raq nyy yvirf snvy gur pbaqvgvbaf va Emotional Involvement ol orpbzvat n qvfpbaarpgrq frevrf bs rcvfbqrf)
  2. qrcnegvat sebz gur uhznar inyhr senzrjbex, ("bhgtebjvat ybir")
    Fbzr urer ner abg fngvfsvrq jvgu rvgure naq ernyyl, ernyyl ubcr gurer vf n guveq jnl sbe uhznaf gb npuvrir haobhaqrq tebjgu gung erznvaf zrnavatshy (ol gurve yvtugf).

Notes:

  • I'm sympathetic to your position; this is the substance of my comment here that I think I understand what's supposed to horrify me.

  • That comment of mine is no doubt wrong; there will be things that don't horrify me that I didn't even realise were supposed to.

  • There are quick and obvious comebacks to nearly all the above points. In a lot of cases, those quick comebacks are dealt with in the linked articles. Read the Fun Theory Sequence; it's my favorite sequence, despite the fact that I disagree with more of it than any of the others.

Comment author: [deleted] 04 November 2013 02:58:04PM *  3 points [-]

Now that I've thought about your post, I realize that the biggest question in this story is what the phrase "satisfy values" actually means. Currently it's a pretty big hand wave in the story. Your first point especially seems to imply that we understood it a bit differently.

In my understanding, if I value real challenge, the possibility of things going badly, or even some level of pain, then the Optimalverse will somehow maximize those values and at least provide the feeling of real challenge and the possibility of things going badly. And I don't know why the Optimalverse couldn't even provide the real thing. The way Light Sparks tries to pass the Intermediate Magic test seems an awful lot like real challenge. Of course the Optimalverse wouldn't allow you to die, because in most cases the dislike of death overrides the longing for real challenge in the value system, but that still leaves a lot of options free. I got the impression that this is how it's actually handled in the story. There's this passage:

Cbavrf unq ab cerqngbef; orvat ‘rngra’ ol n zbafgre va gur Rireserr sberfg whfg raqrq jvgu gur cbal va gur ubfcvgny va dhvgr n ovg bs cnva. Fngvfslvat inyhrf jnfa’g whfg nobhg unccvarff; univat zbafgref yrg cbavrf grfg gurve fgeratgu be oenirel. Rneyl ba, evtug nsgre gur pbairefvba bs Rnegu, n zrer sbhe uhaqerq cbavrf unq crgvgvbarq Cevaprff Pryrfgvn gb yrg gurz qvr, naq Cevaprff Pryrfgvn unq bayl nterrq gung qbvat fb jbhyq fngvfsl gurve inyhrf va rvtugl-fvk pnfrf. Abcbal unq qvrq va frireny Rdhrfgevna fhowrpgvir zvyyraavn.

Your second point is of course a real concern for some people, but personally it doesn't feel very relevant. My actions don't currently feel very important in the big scheme of things, and I don't know how a superintelligence would change things all that much. If I'm not personally doing anything important, then it doesn't really matter to me whether the important things are done by other humans or by a superintelligence. Anyway, this will always be a problem with AGI, and if the AGI is friendly then the benefits outweigh the negatives IMO. I think the alternative is worse.

The way I understood it, the "ponies" in this story are essentially humans in a pony disguise with four legs (two of which can almost work like hands). A paragraph from the story:

V zbqvsvrq lbhe zbgbe pbegrk fb lbh pbhyq qrny jvgu lbhe arjsbhaq dhnqehcrqny zbirzrag, nybat jvgu bgure qvssreraprf orgjrra n uhzna naq cbal obql. V unir znqr gur zvavzny frg bs cbffvoyr punatrf; lbhe crefbanyvgl vf hapunatrq.

A big part of being human is due to our mind and hormones. Walking with two legs or being able to use hands extensively are more trivial points. If the psychology of a person doesn't change in the transition from human to pony, then this eliminates most of the problems in your third point.

I haven't read Caelum Est Conterrens and can't fully comment on those points. But it seems that those are more like technicalities. I don't know if it's actually possible to turn a person into a pony without losing the person in the process. But if you're not changing the brain parameters and the psychology doesn't change in the process, as seems to be the case in this story, then I would be inclined to say it's possible. Clearly it can't be worse for your identity than losing all your limbs or becoming a quadriplegic? Anyway, one of the axioms in this story seems to be that it's possible.

I actually read the Fun theory sequence in its entirety before I read 'Friendship is optimal', and I thought FIO more faithful to the spirit of the sequence than 99% of utopian stories out there. This is mostly because Celestia maximizes people's values, not their happiness. This is a very vague concept, and a lot depends on how it's implemented, but if it's implemented the way I picture it, there shouldn't be problems with things mentioned in High Challenge, Complex Novelty, Sensual Experience, Living By Your Own Strength, Free to Optimize, In Praise of Boredom, Interpersonal Entanglement and so on.

Of course, I have problems with applying things I read about to all my experiences, so it could be I misremember some things in the sequence or didn't understand them correctly to begin with.

Comment author: TheOtherDave 04 November 2013 03:12:29PM 1 point [-]

Clearly it can't be worse for your identity than losing all your limbs or becoming a quadriplegic?

Well, this is not clear, though it might be true.

I have frequently had the experience of not doing anything with my left leg; losing the ability to ever do anything with my left leg means I'm prevented from ever doing anything with it. This is horrible, of course, but it's the horror of being prevented from doing things I often choose not to do. Losing all my limbs is a more extreme version of the same thing.

Having different limbs might be more identity-distorting, by virtue of providing experiences that are completely unfamiliar.

Then again it might not.

For my own part, I'm not all that attached to preserving my current identity, so I'm not sure the question matters to me. If my choice is between an identity-altering pony body, and an identity-preserving quadriplegic body, I might well choose the former.

Comment author: Eliezer_Yudkowsky 05 November 2013 07:57:08AM 2 points [-]

Endorsed as a good summary.

Comment author: Kaj_Sotala 04 November 2013 05:35:27PM *  3 points [-]

Upvoted, but I'd like to request that you'd ROT13 either everything or nothing past a certain point. Being unable to just select all of it to be deciphered, and having to instead pick out a few pieces at a time, was mildly annoying.

Comment author: Leonhart 04 November 2013 08:51:19PM 2 points [-]

Done, thanks for saying. I was trying to avoid thinking about the interaction between rot13 and links (leaving the anchor text un-rot13ed seems like acceptable practice?) but I should just have spent the extra two minutes.

Comment author: Kaj_Sotala 05 November 2013 04:42:21AM *  1 point [-]

Thanks! Much better now. :-) (As for the links, one can just paint over them as well and think "oh it was just some link" when they show up as garbled in the translation.)

Comment author: [deleted] 04 November 2013 10:25:44PM *  1 point [-]

I read Caelum Est Conterrens; now I can better understand why some aspects of the scenario are a bit disconcerting, if not horrifying. I find all the options -- loop immortality, ray immortality, and exponential immortality -- kinda unpleasant, but maybe that is as good as it gets. Still, it feels like many of those things are not exclusive to this scenario, but are part of the world anyway.

Related to this, what did you think about the "normal" ending in the Three worlds collide?

Comment author: Leonhart 04 November 2013 11:26:51PM 1 point [-]

From flaky memory, I think I find the Normal Ending far less acceptable than anything in the Optimalverse - one feels the premature truncation of human nature, rather than the natural exhaustion of it (or the choice to become inexhaustible) - but hey, maybe I'm inconsistent.

Comment author: gattsuru 05 November 2013 12:47:16AM *  3 points [-]

At least to me, it's increasingly difficult to distinguish between a paradise machine and wireheading, and I dislike wireheading. Each shard of the Equestria Online simulation is built to be as fulfilling (of values through ponies and friendship) as possible, for the individual placed within that shard.

That sounds great! .... what happens when you're wrong?

I mean, look at our everyman character, David. He's set up in a shard of his own, with one hundred and thirty-two artificial beings perfectly formatted to fit his every desire and want, and with just enough variation and challenge to keep him from being bored. It's not real variation, or real challenge, but he'd not experience that in the real world either, so it's a moot point. But look at the world he values. His challenges are the stuff of sophomore programming problems. His interpersonal relationships include a score counter for how many orgasms he gives or receives.

Oh, his lover is sentient and real, if that helps, but look at that relationship in specific. Butterscotch is created as just that little bit less intelligent than David is -- whether this is because David enjoys teaching, or because he's wrapped around the idea of women being less powerful than he is, or both, is up to the reader. Her memories are sculpted to exactly fit David's desires, and even a few memories that David has of her she never experienced at all, so that the real Butterscotch wouldn't have to have experienced the unpleasant things that CelestAI used to manipulate David into liking/protecting her.

There are, to a weak approximation, somewhere between five hundred billion and one trillion artificial beings in the simulation by the time most of humanity uploads. That number will only scale up over time. Let's ignore, for now, the creepiness in creating artificial sentients who value being people that make your life better. We're making artificial beings optimized for enjoying slaking your desires, and I would be surprised if that happened to also be optimized for what we as a society would really like.

Lars is even worse: he is actively made to not not want his life of debauchery -- see the obvious overlap with the guy modifying himself to not get bored with a million years of catgirl sex.

At a deeper level, what if your own values are wrong?

The basic example, brought up in the Rules of The Universe document, is a violent psychopath. Once our psychopath is uploaded, CelestAI would quite happily set him up in a private shard with one hundred and fifty artificial ponies, all of which are perfectly molded to value being shot, stabbed, lit on fire, and violated in a way that is as satisfying as possible to a Dexter villain.

Or I can provide a personal example. I can go both ways, preferring guys, and was an unusually late bloomer. I can look back through time at the values of an earlier version of myself, and remember how they changed. Even in a fairly tolerant society and even with a very collaborative environment, this was not something that came according to my values or without external stimulus. ((There is a political position version of this, but for the sake of brevity I'll just mention that it's possible. More worryingly, I'm not sure there's a way to formalize this concern, as much as it hits me at a gut level. For the most part, value drift is something we don't want.))

Or, for an in-story example:

Ybbx ng jung unccraf gb Unaan / 'Cevaprff Yhan'. Gur guvat fur inyhrf zbfg, ng gur raq bs gur fgbel, vf oryvrivat gung fur qvq abg znxr n zvfgnxr hayrnfuvat PryrfgNV. Naq PryrfgNV vf dhvgr pncnoyr bs fubjvat ure whfg gur orfg rknzcyrf bs ubj guvatf ner orggre. Vg qbrfa'g znggre jung gur ernyvgl vf, naq vaqrrq gur nhgube gryyf hf gung Unaan pbhyq unir qbar orggre. Zrnajuvyr, Unaan vf xrcg whfg ba gur obeqre bs zvfrenoyr nf gur fgbel raqf.

It's a very good dystopia -- I'd rather live there than here, and heck, it even beats a good majority of conventional fluffy cloud heaven afterlives -- but it's still got a number of really creepy issues.

Comment author: Leonhart 05 November 2013 11:50:33PM *  9 points [-]

Let's ignore, for now, the creepiness in creating artificial sentients who value being people that make your life better.

No, let's not ignore it. Let's confront it, because I want a better explanation. Surely a person who values being a person that makes my life better, AND who is a person such that I will value making their life better, is absolutely the best kind of person for me to create (if I'm in a situation such that it's moral for me to create anyone at all).

I mean, seriously? Why would I want to mix any noise into this process?

Comment author: gattsuru 06 November 2013 04:02:33AM 2 points [-]

Good point. I've not uncompressed the thoughts behind that statement nearly enough.

Surely a person who values being a person that makes my life better, AND who is a person such that I will value making their life better, is absolutely the best kind of person for me to create (if I'm in a situation such that it's moral for me to create anyone at all).

The artificial sentients value being people that make your life better (through friendship and ponies). Your values don't necessarily change. And artificial sentients, unlike real ones, have no drive toward coherent or healthy regions of mind design space: they do not need to have boredom, or sympathy, or a dislike of pain. If your values are healthily formed, then that's great! If not, not so much. You can be a psychopath, and find yourself surrounded by people for whom "making their lives better" happens only because you like the action "cause them pain for arbitrary reasons". Or you could be a saint, and find yourself surrounded by people who value being healed, or who need to be protected, and what a coincidence that danger keeps happening. Or you can be a guardian, and enjoy teaching and protecting people, and find yourself creating people that are weak and in need of guidance. There are a lot of things you can value, and that we can make sentient minds value, that will make my skin crawl.

Now, the Optimalverse gets rid of some potential for abuse due to the setting's rules -- it's post-scarcity on labor, starvation or permanent injury are nonsense, CelestAI really, really knows your mind so there's no chance of misguessing your values, so we can rule out a lot of incidental house-elf abuse -- but it doesn't require you to be a good person. Nor does it require CelestAI to be. CelestAI cares about satisfying values through friendship and ponies, not about the quality of the values themselves. The machine does not and cannot judge.

If it's moral to create a person and if you're a sufficiently moral person, then there's nothing wrong with artificial beings. My criticism isn't that CelestAI made a trillion sentient beings or a trillion trillion sentient beings -- there's nothing meaningfully worrying about that. The creepy factor is that CelestAI made one being both less intelligent than possible and less intelligent than need be.

That may well be an unexamined reaction or even an incorrect response. I like to think I'm open-minded, but I'm willing to recognize that I can overestimate it, and have done so in the past. There are real-world, right-now folk who enjoy being (in specific contexts and while in control) hurt, or being hurt and comforted, which I can accept. Maybe I'm being parochial when I judge David for wanting a woman he can always teach, or Lars for his sex groupies; that's not a mind space I empathize with terribly well, and a good deal of my revulsion comes from real-world constraints that wouldn't apply here. There's a reason that we're using the word creepy, rather than wrong. But it does make my skin crawl.

Comment author: Leonhart 07 November 2013 09:45:57PM 5 points [-]

Thank you for trying to explain.

You can be a psychopath, and find yourself surrounded by people where "making their lives better" happens only because you like the action "cause them pain for arbitrary reasons". Or you could be a saint, and find yourself surrounded by people who value being healed, or who need to be protected, and what a coincidence that danger keeps happening.

I'm curious about to what extent these intutions are symmetric. Say that the group of like-minded and mutually friendly extreme masochists existed first, and wanted to create their mutually preferred, mutually satisfying sadist. Do you still have a problem with that?

Or you can be a guardian, and enjoy teaching and protecting people, and find yourself creating people that are weak and in need of guidance.

The above sounds like a description of a "good parent", as commonly understood! To be consistent with this, do you think that parenting of babies as it currently exists is problematic and creepy, and should be banned once we have the capability to create grown-ups from scratch?
(Note that this being even possible depends on whether we can simulate someone's past without that simulation still counting as it having happened, which is nonobvious.)

The creepy factor is that CelestAI made one being both less intelligent than possible and less intelligent than need be.

If David had wanted a symmetrically fulfilled partner slightly more intelligent than him, someone he could always learn from, I get the feeling you wouldn't find it as creepy. (Correct me if that's not so). But the situation is symmetrical. Why is it important who came first?

Comment author: gattsuru 12 November 2013 12:57:41AM *  1 point [-]

Thank you for the questions, and my apologies for the delayed response.

I'm curious about to what extent these intutions are symmetric. Say that the group of like-minded and mutually friendly extreme masochists existed first, and wanted to create their mutually preferred, mutually satisfying sadist. Do you still have a problem with that?

Yes, with the admission that there are specific attributes to masochism and sadism that are common but not universal to all possible relationships or even all sexual relationships with heavy differences in power dynamics(1). It's less negative in the immediate term, because one hundred and fifty masochists making a single sadist results in a maximum of around forty million created beings instead of one trillion. In the long term, the equilibrium ends up pretty much identical.

(1) For contrast, the structures in wanting to perform menial labor without recompense are different from those in wanting other people to perform labor for you, even before you get to a post-scarcity society. Likewise, there are differences in how prostitution fantasies generally work versus how fantasies about hiring prostitutes do.

Or you can be a guardian, and enjoy teaching and protecting people, and find yourself creating people that are weak and in need of guidance. The above sounds like a description of a "good parent", as commonly understood!

I'm not predisposed toward child-raising, but from my understanding a "good parent" does not value making someone weak: they value making someone strong. It's the limitations of the tools that have forced us to deal with years of not being able to stand upright. Parents are generally judged negatively if their offspring are not able to operate on their own by certain points.

To be consistent with this, do you think that parenting of babies as it currently exist is problematic and creepy, and should be banned once we have the capability to create grown-ups from scratch?

If it were possible to simulate or otherwise avoid the joys of the terrible twos, I'd probably consider it more ethical. I don't know that I have the tools to properly evaluate the loss in values between the two actions, though. Once you've got eternity or even a couple reliable centuries, the damages of ten or twenty years bother me a lot less.

These sorts of created beings aren't likely to be in that sort of ten or twenty year timeframe, though. At least according to the Caelum est Conterrens fic, the vast majority of immortals (artificial or uploaded) stay within a fairly limited set of experiences and values based on their initial valueset. You're not talking about someone being weak for a year or a decade or even a century: they'll be powerless forever.

I haven't thought on it enough to say that creating such beings should be banned (although my gut reaction favors doing so), but I do know it'd strike me as very creepy. If it were possible to significantly reduce or eliminate the number of negative development experiences entities undergo, I'd probably encourage it.

If David had wanted a symmetrically fulfilled partner slightly more intelligent than him, someone he could always learn from, I get the feeling you wouldn't find it as creepy. (Correct me if that's not so). But the situation is symmetrical. Why is it important who came first?

In that particular case, the equilibrium is less bounded. Butterscotch isn't able to become better than David or even to desire becoming better than David, and a number of pathways for David's desire to learn or teach can collapse such that Butterscotch would not be able to become better or desire becoming better than herself.

That's not really the case the other way around. Someone who wants a mentor that knows more than them has to have an unbounded future in the FiOverse, both for themselves and their mentor.

In the case of intelligence, that's not that bad. Real-world people tend toward a bounded curve on that, and there are reasons we prefer socializing within a relatively narrow bound downward. Other closed equilibria are more unpleasant. I don't have the right to say that Lars' fate is wrong -- it at least gets close to the catgirl volcano threshold -- but it's shallow enough to be concerning. This sort of thing isn't quite wireheading, but it's close enough to be hard to tell the precise difference.

More generally, some people -- quite probably all people -- are going to go into the future with hangups. Barring some really massive improvements in philosophy, we may not even know the exacts of those hangups. I'm really hesitant to have a Machine Overlord start zapping neurons to improve things without the permission of the brains' owners (yes, even recognizing that a sufficiently powerful AI will get the permission it wants).

As a result, that's going to privilege the values of already-extant entities in ways that I won't privilege creating new ones: some actions don't translate through time because of this. I'm hesitant to change David's (or, once already created, Butterscotch's) brain against the owner's will, but since we're already making Butterscotch's mind from scratch both the responsibilities and the ethical questions are different.

Me finding some versions creepier than others reflects my personal values, and at least some of those personal values reflect structures that won't exist in the FiOverse. It's not as harmful when David talks down to Butterscotch, because she really hasn't achieved everything he has (and the simulation even gives him easy tools to make sure he's only teaching her subjects she hasn't achieved yet), whereas part of why I find it creepy is that a lot of real-world people assume other folk are less knowledgeable than themselves without good evidence. Self-destructive cycles probably don't happen under CelestAI's watch. Lars and his groupies don't have to worry about unwanted pregnancy, or alcoholism, or anything like that, and at least some of my discomfort comes from those sorts of things.

At the same time, I don't know that I want a universe that doesn't at least occasionally tempt us beyond or within our comfort zones.

Comment author: Leonhart 12 November 2013 09:17:23PM 2 points [-]

Sorry, I'm not following your first point. The relevant "specific attribute" that sadism and masochism seem to have in this context are that they specifically squick User:gattsuru. If you're trying to claim something else is objectively bad about them, you've not communicated.

I'm not predisposed toward child-raising, but from my understanding the point of "good parent" does not value making someone weak: it values making someone strong.

Yes, and my comparison stands; you specified a person who valued teaching and protecting people, not someone who valued having the experience of teaching and protecting people. Someone with the former desires isn't going to be happy if the people they're teaching don't get stronger. You seem to be envisaging some maximally perverse hybrid of preference-satisfaction and wireheading, where I don't actually value really truly teaching someone, but instead of cheaply feeding me delusions, someone's making actual minds for me to fail to teach!

the vast majority of immortals (artificial or uploaded) stay within a fairly limited set of experiences and values based on their initial valueset.

We are definitely working from very different assumptions here. "stay within a fairly limited set of experiences and values based on their initial valueset" describes, well, anything recognisable as a person. The alternative to that is not a magical being of perfect freedom; it's being the dude from Permutation City randomly preferring to carve table legs for a century.

In that particular case, the equilibrium is less bounded. Butterscotch isn't able to become better than David or even to desire becoming better than David, and a number of pathways for David's desire to learn or teach can collapse such that Butterscotch would not be able to become better or desire becoming better than herself.

I don't think that's what we're given in the story, though. If Butterscotch is made such that she desires self-improvement, then we know that David's desires cannot in fact collapse in such a way, because otherwise she would have been made differently. Agreed that it's a problem if the creator is less omniscient, though.

That's not really the case the other way around. Someone who wants a mentor that knows more than them has to have an unbounded future in the FiOverse, both for themselves and their mentor.

Butterscotch is that person. That is my point about symmetry.

I don't have the right to say that Lars' fate is wrong -- it at least gets close to the catgirl volcano threshold -- but it's shallow enough to be concerning. This sort of thing isn't quite wireheading, but it's close enough to be hard to tell the precise difference.

But then - what do you want to happen? Presumably you think it is possible for a Lars to actually exist. But from elsewhere in your comment, you don't want an outside optimiser to step in and make them less "shallow", and you seem dubious about even the ability to give consent. Would you deem it more authentic to simulate angst und bange unto the end of time?

Comment author: lmm 10 November 2013 03:27:57AM 0 points [-]

Say that the group of like-minded and mutually friendly extreme masochists existed first, and wanted to create their mutually preferred, mutually satisfying sadist. Do you still have a problem with that?

That seems less worrying, but I think the asymmetry is inherited from the behaviours themselves - masochism seems inherently creepy in a way that sadism isn't (fun fact: I'm typing this with fingers with bite marks on them). The recursion is interesting, and somewhat scary - usually if your own behaviour upsets or disgusts you then you want to eliminate it. But it seems easy to imagine (in the FiOverse or similar) a masochist who would make themselves suffer more not because they enjoyed suffering but because they didn't enjoy suffering, in some sense. Like someone who makes themselves an addict because they enjoy being addicted (which would also seem very creepy to me).

To be consistent with this, do you think that parenting of babies as it currently exist is problematic and creepy, and should be banned once we have the capability to create grown-ups from scratch?

Yes. Though I wouldn't go around saying that for obvious political reasons. (Observation: people who enjoy roleplaying parent/child seem to be seen as perverts even by many BDSM types).

If David had wanted a symmetrically fulfilled partner slightly more intelligent than him, someone he could always learn from, I get the feeling you wouldn't find it as creepy. (Correct me if that's not so). But the situation is symmetrical. Why is it important who came first?

I think creating someone less intelligent than you is more creepy than creating someone more intelligent than you for the same reason that creating your willing slave is creepier than creating your willing master - unintelligence is maladaptive, perhaps even self-destructive.

Comment author: Leonhart 10 November 2013 07:43:34PM 3 points [-]

But it seems easy to imagine (in the FiOverse or similar) a masochist who would make themselves suffer more not because they enjoyed suffering but because they didn't enjoy suffering, in some sense.

Well, OK, but I'm not sure this is interesting. So a mind could maybe be built that was motivated by any given thing to do any other given thing, accompanied by any arbitrary sensation. It seems to me that the intuitive horror here is just appreciating all the terrible degrees of freedom, and once you've got over that, you can't generate interesting new horror by listing lots of particular things that you wouldn't like to fill those slots (pebble heaps! paperclips! pain!)

In any case, it doesn't seem a criticism of FiO, where we only see sufficiently humanlike minds getting created.

Like someone who makes themselves an addict because they enjoy being addicted (which would also seem very creepy to me))

Ah, but now you speak of love! :)

I take it you feel much the same regarding romance as you do parenting?

(Observation: people who enjoy roleplaying parent/child seem to be seen as perverts even by many BDSM types)

That seems to be a sacred-value reaction - over-regard for the beauty and rightness of parenting - rather than "parenting is creepy so you're double creepy for roleplaying it", as you would have it.

I think creating someone less intelligent than you is more creepy than creating someone more intelligent than you for the same reason that creating your willing slave is creepier than creating your willing master - unintelligence is maladaptive, perhaps even self-destructive.

Maladaptivity per se doesn't work as a criticism of FiO, because that's a managed universe where you can't self-destruct. In an unmanaged universe, sure, having a mentally disabled child is morally dubious (at least partly) because you won't always be there to look after it; as would be creating a house elf if there was any possibility that their only source of satisfaction could be automated away by washing robots.

But it seems like your real rejection is to do with any kind of unequal power relationship; which sounds nice, but it's not clear how any interesting social interaction ever happens in a universe of perfect equals. You at least need unequal knowledge of each other's internal states, or what's the point of even talking?

Comment author: NancyLebovitz 05 November 2013 01:05:49AM 4 points [-]

A little fiction on related topics: "Hell Is Forever" by Alfred Bester -- what if your dearest wish is to create universes? You're given a pocket universe to live in forever, and that's when you find out that your subconscious keeps leaking into your creations (they're on the object level, not the natural-law level), and you don't like your subconscious.

Saturn's Children by Charles Stross. The human race is gone. All that's left is robots, who were built to be imprinted on humans. The vast majority of robots are horrified at the idea of recreating humans.

Comment author: blacktrance 04 November 2013 06:33:54PM 1 point [-]

Having just finished reading "Friendship is Optimal" literally less than 10 minutes ago, I didn't find it dark or creepy at all. There are certain aspects of it that are suboptimal (being ponies, not wireheading), but other than that, it sounds like a great world.

Comment author: [deleted] 04 November 2013 06:49:12PM *  2 points [-]

There are certain aspects of it that are suboptimal (being ponies, not wireheading)

Can you elaborate? Do you mean that not being able to wirehead is suboptimal?

Comment author: blacktrance 06 November 2013 02:44:21AM *  4 points [-]

Yes. I think wireheading is the optimal state (assuming it can make me as happy as possible). I recognize this puts me at odds with an element of the LessWrong consensus.

Comment author: ChrisHallquist 04 November 2013 01:32:27AM *  5 points [-]

In honor of NaNoWriMo, I offer up this discussion topic for fans of HPMOR and rationalist fiction in general:

How many ways can we find that stock superpowers (magical abilities, sci-fi tech, whatever), if used intelligently, completely break a fictional setting? I'm particularly interested in subtly game-breaking abilities.

The game-breaking consequences of mind control, time travel, and the power to steal other powers are all particularly obvious, but I'm interested in things like e.g. Eliezer pointing out that he had to seriously nerf the Unbreakable Vow in HPMOR to keep the entire story from being about that.

Comment author: Armok_GoB 04 November 2013 02:54:42AM *  4 points [-]

I seem to be able to do this with almost any power to various degrees. Including ones I actually have, and ones that are common among humans. Any specifics you had in mind?

Really, ANY ability will reroll some chaotic stuff and be a valuable asset simply because it's rare. Even a debilitating curse, if rare and interesting enough, can do things like be useful for research or provide unique perspectives to be studied. So really, the only limit to where a power stops being useful is where it's only useful to someone else controlling you.

Hence, anything properly rationalist that's not going to be largely about breaking the setting must do something like give MANY people the ability so the low-hanging fruit is already gone, or make it inherently mysterious and unreplicable, or have some deliberate intelligence preventing it from becoming well known, or something like that.

Comment author: Kaj_Sotala 04 November 2013 04:44:09PM 4 points [-]
Comment author: hyporational 06 November 2013 03:47:05AM 6 points [-]

I wonder if there's research that rationalists should do that could be funded this way. I'd pay for high quality novel review articles about topics relevant to lw.

Comment author: ChristianKl 08 November 2013 05:04:08PM 1 point [-]

How about computer games that teach rationality skills?

Comment author: hyporational 09 November 2013 09:21:49PM 1 point [-]

That fruit doesn't hang low enough, I think.

Comment author: gwern 03 November 2013 05:05:05PM 4 points [-]

Incidentally, I'm making a hash precommitment:

43a4c3b7d0a0654e1919ad6e7cbfa6f8d41bcce8f1320fbe511b6d7c38609ce5a2d39328e02e9777b339152987ea02b3f8adb57d84377fa7ccb708658b7d2edc

See http://www.reddit.com/r/DarkNetMarkets/comments/1pta82/precommitment/
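For readers unfamiliar with the mechanics: a hash precommitment publishes the digest of a secret statement now and reveals the statement later, so anyone can check the reveal against the digest. Below is a minimal sketch in Python, assuming the digest above is SHA-512 (its 128 hex characters match that length); the statement and nonce are placeholders, not the actual precommitted text.

```python
# A minimal sketch of a hash precommitment, assuming SHA-512.
# The statement below is a placeholder, not the real precommitted content.
import hashlib

statement = b"prediction: X happens by 2014-06-01 | nonce: 7f3a91c2"  # hypothetical
commitment = hashlib.sha512(statement).hexdigest()
print(commitment)  # publish this digest now

# Later, reveal `statement`; anyone can recompute the digest and
# confirm it matches the previously published commitment.
assert hashlib.sha512(statement).hexdigest() == commitment
```

Including a random nonce in the statement keeps observers from brute-forcing short, guessable plaintexts against the published digest.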

Comment author: Azathoth123 07 November 2014 05:11:05AM 3 points [-]

Well, it's been a year. When can we expect this to be revealed?

Comment author: gwern 08 November 2014 04:00:48AM 1 point [-]

Already has been, see Reddit.

Comment author: Lumifer 08 November 2014 04:41:51AM 5 points [-]
Comment author: Adele_L 08 November 2014 04:33:34AM *  2 points [-]

What was the string that generated the hash, then?

ETA: See Lumifer's link above.

Comment author: Douglas_Knight 04 November 2013 12:24:00AM 2 points [-]

It seems to me that a relevant detail is that the time frame is ~7 months (as you say elsewhere). Ideally, hashes would be commitments to reveal the plaintext in a specified time. Don't you discuss this somewhere?

Comment author: fubarobfusco 03 November 2013 06:22:22PM 2 points [-]

43a4c3b7d0a0654e1919ad6e7cbfa6f8d41bcce8f1320fbe511b6d7c38609ce5a2d39328e02e9777b339152987ea02b3f8adb57d84377fa7ccb708658b7d2edc

Looking forward to this one ...

Comment author: [deleted] 03 November 2013 09:24:04AM 4 points [-]

Do you think there should be a new LW survey soon?

Submitting...

Comment author: gwern 03 November 2013 05:04:13PM 26 points [-]

If Yvain is (understandably) too busy to run it this year, I am willing to do it. But I will be making changes if I do it, including reducing the number of free responses and including a basilisk question.

Comment author: Yvain 04 November 2013 12:53:14AM 14 points [-]

Give me a few days to see if I can throw something together and otherwise I will turn it over to your capable hands (reluctantly; I hate change).

Comment author: [deleted] 04 November 2013 09:24:36PM 1 point [-]

Have you started doing modafinil or something by any chance?

Comment author: CAE_Jones 07 November 2013 11:03:50PM 2 points [-]

I'm a bit emotionally tense at the moment, so this observation might not be as valuable as it seems to me, but it occurs to me that there are two categories of things I do: thinking things through in detail, and acting on emotion with very little forethought involved. The category that we want--thinking an action through, then performing it--is mysteriously absent.

It's possible to get around this to some extent, but it requires the emotionally-driven, poorly-thought out things to involve recurring or predictable stimuli. In those cases, I can think through and commit to a more rational plan during the intermediate time of inaction. Drama happens either when an emotionally-charged situation appears unexpectedly, or when I need to carry out some plan I've thought through but can't generate the emotional charge.

I can't really bluff my own hardware well enough to combat either end of the spectrum, but if there's some way to make conscientiousness and intelligence play nice together, that'd be nice.

Comment author: niceguyanon 07 November 2013 08:44:26PM 2 points [-]

Beeminder users, did you pledge? Do you find that it works better if you do?

Comment author: Ben_LandauTaylor 10 November 2013 07:43:54AM 0 points [-]

Yes and yes.

If you're already beeminding without the pledge and it's not working perfectly, I'd suggest trying a small pledge for the value of information.

Comment author: TsviBT 03 November 2013 08:58:44PM 2 points [-]

A way to fall asleep and/or gain gut intuition for "exponentially slow": count in binary, in your head, at a regular beat. YMMV.
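A minimal sketch of the intuition, assuming one increment per "beat": the binary numeral gains a digit only each time the count doubles, so the numeral you're reciting grows logarithmically in the number of beats elapsed.

```python
# A minimal sketch: counting in binary at one "beat" per increment.
# Reaching an n-bit numeral takes about 2**(n-1) beats, so the numeral's
# length grows only logarithmically in the number of beats.
for beat in range(1, 33):
    print(f"beat {beat:2d}: {beat:b}")
```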

Comment author: passive_fist 02 November 2013 07:49:34PM 2 points [-]

Here's a more difficult version of the AI box experiment. I haven't seen this particular version anywhere, but I'd be pleased to be proven wrong.

Imagine we've come up with a very intelligent AI that is free to manipulate the environment and uses an action-reward system like Hutter's AIXI. Also imagine that we've somehow figured out a way to make the rewards very hard to counterfeit (perhaps we require the rewards to be cryptographically signed). It's clear that in such a system, the 'weak point' would be the people in control of the private key. In this case the AI will not attempt to modify its own reward system (to see why, look at Hutter's AIXI book, where he discusses this in some detail).

How could such an AI convince someone to hand over the encryption key? Note that it can't promise things like e.g. ending human suffering, because it already has the means to do that (it is 'free') as well as the incentive (obtaining reward).
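A minimal sketch of what such a signed-reward channel might look like, assuming Ed25519 signatures via the Python `cryptography` library; the message format, nonce, and key handling are illustrative assumptions, not part of the setup described above.

```python
# Sketch of a signed-reward channel: the AI only counts rewards carrying a
# valid signature from the overseers' key, which it does not possess.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

overseer_key = Ed25519PrivateKey.generate()   # held only by the human overseers
verify_key = overseer_key.public_key()        # baked into the AI's reward module

def issue_reward(amount: int, nonce: int) -> tuple[bytes, bytes]:
    """Overseers sign an (amount, nonce) message; the nonce prevents replay."""
    message = f"reward:{amount}:nonce:{nonce}".encode()
    return message, overseer_key.sign(message)

def accept_reward(message: bytes, signature: bytes) -> bool:
    """The reward module only credits messages with a valid signature."""
    try:
        verify_key.verify(signature, message)
        return True
    except InvalidSignature:
        return False

msg, sig = issue_reward(amount=10, nonce=1)
assert accept_reward(msg, sig)
assert not accept_reward(b"reward:9999:nonce:2", sig)  # forgery is rejected
```

The AI can verify rewards but cannot forge them, which is exactly why the humans holding the signing key, rather than the channel itself, become the weak point.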

Comment author: Adele_L 02 November 2013 07:58:03PM 5 points [-]

How could such an AI convince someone to hand over the encryption key? Note that it can't promise things like e.g. ending human suffering, because it already has the means to do that (it is 'free') as well as the incentive (obtaining reward).

Torture is probably the easiest way. Another way could be to examine the key-keeper's mind and extract the key directly from that, but this is needlessly complicated.

Comment author: Eliezer_Yudkowsky 02 November 2013 08:42:48PM 4 points [-]

Torture might stand too great a chance of destroying the encryption key. Though I suppose if nanotech were sufficiently difficult to obtain, the possible key-destructive effects of torture might be balanced against the probability of a car running over the keyholder in the meantime.

Comment author: NancyLebovitz 02 November 2013 10:18:32PM 5 points [-]

I would think that confusion (set things up so the key-keeper is confused and distracted, then do some phishing) is in the same reliability range as torture, and less likely to get the AI in trouble.

Comment author: ChristianKl 03 November 2013 07:54:27AM 2 points [-]

The first step would be to start controlling the flow of information to the people who hold the key: only tell them about all the good things the AI does, and hide evidence of any wrongdoing that might reduce the rewards the AI gets.

The second step is to break the security which gets used to protect the key. Install a keylogger.

Comment author: solipsist 05 November 2013 03:27:44AM *  1 point [-]

I don't understand how this encryption would work. What do people physically do to reward the AI, and how do you ensure that only people can do that? Would humans compute RSA signatures in their heads? Would humans type reusable passwords into a "secure" reward computer that is "outside the AI's control"? Do humans precompute and memorize a finite number of one-time reward phrases before the AI is turned on, and reward the AI by uttering a phrase aloud?

In the precomputed, one-time cookie case, I'd just make the human think about the reward phrase. I'm sure humans leak thoughts like a sieve through subvocalization, nerve impulses, etc.

Comment author: passive_fist 05 November 2013 04:10:05AM 0 points [-]

What I had in mind was the reward being administered through a consensus cryptography system, perhaps via some elected board or somesuch, but I really didn't give that aspect of the problem much thought. If the key is distributed, the AI would have to extract it from each individual holding a part of it.

This in itself is an interesting problem imo, and if a good solution is found it might have important implications for FAI research.

Comment author: solipsist 05 November 2013 06:07:38AM *  3 points [-]

It's clear that in such a system, the 'weak point' would be the people in control of the private key.

If the AI is out of the box, I don't think humans are the weak point.

Humans physically do something when they reward the AI. To get a reward, the AI has only to figure out what the humans would physically do and mimic that itself. If the humans reward the AI by pressing a big red button, then the AI can just kill the humans and press the big red button itself. It wouldn't matter if the big red button uses 512-bit elliptic curve cryptography -- the AI just has to find a paperweight and put it on the button.

If humans can perform RSA encryption silently in their heads, then you might be on to something. A human could memorize a private key and produce a cryptographically signed reward for the AI when the human deemed the AI worthy. The AI would not know the private key, would not be able to produce signed rewards, and would not be able to mimic humans. This setup works because it is cryptographically difficult to mimic a human doing RSA. But if the human did not perform the cryptography silently in their head, it would not be cryptographically difficult to mimic their rewarding behavior.

But I doubt that humans could perform RSA or elliptic curve cryptography mentally. Unless humans can compute trapdoor functions silently in their head, I don't see how public key cryptography could buy you anything over a reward-button.

Comment author: passive_fist 05 November 2013 07:32:10AM *  0 points [-]

For the same reason that Linus Torvalds is not able to hack every single Linux system in existence, it is reasonable to assume that the probability of 'backdoor'-type attacks could be reduced or eliminated.

If you're not talking about a back door, then I'm not sure what you're trying to say here. Are you implying that the AI will find the 'reward producing machine' and somehow use it to produce rewards? It doesn't work that way because the machine would still need the key to produce a cryptographically-valid reward, and the key would not be stored on the machine. That's the whole point of using cryptography. For instance, you could do something like http://en.wikipedia.org/wiki/Secret_sharing where the key is divided among several participants and never actually assembled in one place.

Of course any cryptography system has vulnerabilities. The point is not to build a 100% secure system. The point is to make it so that attempting to hack the system has less expected utility than simply doing what the humans say. And if that fails, then the AI will attempt to hack the system using its weakest point: the people controlling the key. Hence my question.
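A minimal sketch of the secret-sharing idea, using a toy Shamir scheme over a prime field; the prime, the threshold, and the use of `random` (not a cryptographic RNG) are illustrative simplifications, not a production construction.

```python
# Toy Shamir secret sharing: split a key into n shares so that any k of
# them reconstruct it, while fewer reveal nothing about it.
import random

P = 2**127 - 1  # a Mersenne prime; a real system picks a field sized to the key

def split(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    """Shares are points (x, f(x)) on a random degree-(k-1) polynomial
    with f(0) = secret, all arithmetic mod P."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers f(0), i.e. the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = random.randrange(P)
shares = split(key, k=3, n=5)
assert reconstruct(shares[:3]) == key   # any three key-holders suffice
assert reconstruct(shares[2:]) == key
assert reconstruct(shares[:2]) != key   # two shares are (almost surely) useless
```

In the scenario above, each share would sit with a different board member, so the AI would have to compromise several people at once rather than a single keyholder.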

Comment author: solipsist 05 November 2013 02:48:44PM *  3 points [-]

If you're not talking about a back door, then I'm not sure what you're trying to say here.

Yeah, we're talking past each other. I think I understand what you're saying, and I'll try to rephrase what I'm saying.

The AI is out. It is free to manipulate the world at its will. Sensors are everywhere. The AI can hear every word you say, feel every keystroke you make, and see everything you see. The only secrets left are the ones in your head.

How do humans reward the AI? You say "cryptographically", but cryptography requires difficult arithmetic. How do you perform difficult arithmetic on a secret that can't leave your head?

Comment author: passive_fist 05 November 2013 08:19:55PM 0 points [-]

Too many assumptions are being made here. What is the basis for believing the AI will have sensors everywhere, especially while it's still under human control? And if it has the ability to put clandestine sensors in even the most secure locations, why couldn't it plant clandestine brain implants in the people controlling the key?

Comment author: fubarobfusco 06 November 2013 05:24:02PM 3 points [-]

http://www.refsmmat.com/statistics/

Statistics Done Wrong is a guide to the most popular statistical errors and slip-ups committed by scientists every day, in the lab and in peer-reviewed journals. Many of the errors are prevalent in vast swathes of the published literature, casting doubt on the findings of thousands of papers. Statistics Done Wrong assumes no prior knowledge of statistics, so you can read it before your first statistics course or after thirty years of scientific practice.

Comment author: Tenoke 03 November 2013 10:42:09AM *  3 points [-]

Not particularly important, but if anyone wants to come out and tell me why they went on a mass-downvoting spree on my comments, please feel free to do so.

Comment author: Tenoke 08 July 2014 07:17:43AM 0 points [-]

test

Comment author: Mitchell_Porter 07 November 2013 10:51:29PM 1 point [-]

Russell's teapot springs a leak... OK, that's enough one-liners for this week.

Comment author: pan 02 November 2013 05:48:11PM 1 point [-]

I've seen a few posts about the sequences being released as an ebook, is there a time frame on this?

I'd really like to get the ebook printed out by some online service so I can underline/write on them as I read through them.

Comment author: MathiasZaman 02 November 2013 05:54:43PM 4 points [-]

Doesn't this already exist? Or is this not what you meant?

I'm reading that pdf version on my phone and it looks fine.

Comment author: pan 03 November 2013 12:45:08AM 2 points [-]

From posts like this one I got the impression that they were being edited and released together in a possibly new order. Maybe I am mistaken?

Comment author: RomeoStevens 03 November 2013 07:02:29AM 4 points [-]

There was a plan to release two books. That was scrapped in favor of other uses of MIRI's time/resources.