Comment author: 75th 29 July 2014 10:48:24PM *  0 points [-]

We definitely don't know enough specifics about HPMoR-alchemy to come to any firm conclusions.

Does the "alchemical circle" that has to be so precise refer to just the containing circle itself, or to all the runes inside it, too? If the former, then the circle could be a permanent part of the room, while the runes are drawn (the earlier passage does say the Transfiguration studio's diagram was "drawn") slightly more crudely in some way that's erasable. If the latter, then,

Are there different runes for different alchemies, or is it always the same "board" that you perform different processes on top of? If the latter, then the whole room could be ready to go; if the former, then yeah, Harry may be out of luck.

I did some Googling about the history of alchemy, and the diagram I saw associated with the Philosopher's Stone in several places was a circle-inscribed-in-a-square-inscribed-in-a-triangle-inscribed-in-the-Circle. If Eliezer is consistent with that, then Harry's probably going to have to draw at least the runes on his own.

I do think that it makes more sense literarily for Harry to have to go through the trapped third-floor corridor to the room with the "magic mirror" rather than skipping it altogether. But as others have pointed out, if it is the Mirror of Erised and Dumbledore's scheme is the same as in canon, HPMoR-Harry probably won't qualify to receive the Stone, since he totally does want to use it, and (I hope) can't somehow make himself not want to use it in a way that satisfies Dumbledore's spell.

So maybe he'll get to the mirror, find himself flummoxed, and then proceed to go make one. I don't know.

Comment author: cousin_it 29 July 2014 10:26:45PM 0 points [-]

Yup. The simplest formulation I have is this one.

Comment author: shminux 29 July 2014 10:16:20PM *  1 point [-]

I don't think it's a good idea to get into a discussion on any forum where the term "mansplaining" is used to stifle dissent, even (or especially) if you have "a clear, concise, self-contained point".

Comment author: shminux 29 July 2014 10:04:19PM 0 points [-]

I tried to elaborate in another comment, suggesting that we should reject all hypotheses more complicated than our Solomonoff engine can handle.

Comment author: shminux 29 July 2014 10:02:21PM 0 points [-]

OK, I have thought about it some more. The issue is how accurately one can evaluate the probabilities. If the best you can do is, say, 1%, then you are forced to count even the potentially very unlikely possibilities at 1% odds. The accuracy of the probability estimates would depend on something like the depth of your Solomonoff induction engine. If you are confronted with a Pascal's mugger and your induction engine returns "the string required to model the mugger as honest and capable of carrying out the threat is longer than the longest algorithm I can process", you are either forced to use the probability corresponding to the longest string, or to discard the hypothesis outright. What I am saying is that the latter is better than the former.
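For concreteness, here is a toy sketch of the two options, using the usual 2^-length prior as a stand-in for a real depth-limited Solomonoff engine; the function and the numbers are purely illustrative assumptions, not part of the argument above:

    def mugger_hypothesis_weight(description_length_bits, max_bits, discard=True):
        """Toy stand-in for a depth-limited induction engine: hypotheses get the
        usual 2^-length prior, but this hypothesis needs more bits than the
        engine can process."""
        if description_length_bits <= max_bits:
            return 2.0 ** -description_length_bits
        if discard:
            return 0.0                  # reject the hypothesis outright
        return 2.0 ** -max_bits         # or clamp it to the longest string we can handle

    # Clamping still leaves enough weight for a 3^^^^3-sized payoff to dominate
    # the expected value; discarding removes the hypothesis entirely.
    print(mugger_hypothesis_weight(10_000, 1_000, discard=False))
    print(mugger_hypothesis_weight(10_000, 1_000, discard=True))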

Comment author: James_Miller 29 July 2014 09:44:33PM 0 points [-]

If it's true that the GT version gives you increased intelligence then there should be a dating service that matches TT with GG because their children would all be GT.

Comment author: Luke_A_Somers 29 July 2014 09:37:43PM *  0 points [-]

It depends why Salvati is bringing it up.

"If X(t), then A(t+delta). If A(t') then B(t'+delta')."

"But, not A(now)!"

Comment author: Velorien 29 July 2014 09:36:46PM 1 point [-]

On the other hand,

"Because all alchemical circles have to be drawn 'to the fineness of a child's hair', it isn't any finer for some alchemies than others.

strongly implies that different alchemical procedures require different circles. What are the odds that Dumbledore just happens to have the right circle for philosopher's stone creation ready, given that he has no desire for immortality, no special need for gold, and access to an existing philosopher's stone anyway?

Comment author: hairyfigment 29 July 2014 09:36:26PM 0 points [-]

"The term, Mr. Potter, is Avada Kedavra," Professor Quirrell's voice sounded a bit sharp for some reason

Possible reasons include:

  • He was Voldemort, and has some reason which seems good to him for not trying to AK Harry, then or now. The reminder that at least one Outside View says otherwise annoys him.
  • He wants Harry to learn AK, well enough to ensure the boy casts it (or genuinely tries rather than trying to try) in a moment of crisis, and thus wants him to practice the actual pronunciation or at least not practice mangling it.
  • Though the "for enemies" event hasn't happened yet, other events gave him a (possibly irrational) distaste for Muggle-borns who think "abracabara" is funny.
  • A sharp tone serves his goals for some reason.
Comment author: Luke_A_Somers 29 July 2014 09:33:36PM 0 points [-]

Not even that. It's the fraction of people who have known someone who thought they exercised too much at least once in their lives.

Comment author: Nornagest 29 July 2014 09:26:42PM *  1 point [-]

I don't see how being able to get something out of mindkilled people is the first step in making them less mindkilled. You got what you wanted and paid for it by reinforcing their beliefs -- why would they become more likely to change them?

You're not substantially reinforcing their beliefs. Beliefs entangled with your identity don't follow Bayesian rules -- directly showing anything less than overpoweringly strong evidence against them (and even that isn't a sure thing) tends to reinforce them by provoking rationalization, while accepting them is noise. If you don't like Christianity, you wouldn't want to use the Christian argument for charity with a weak or undecided Christian, but such a person is almost by definition not mindkilled in this regard, so it wouldn't make a good argument anyway.

On the other hand, sneaking new ideas into someone's internal memetic ecosystem tends to put stress on any totalizing identities they've adopted. For example, you might have to invoke God's commandment to love thy neighbor as thyself to get a fundamentalist Christian to buy EA in the first place, but now you've given them an interest in EA, which could (e.g.) lead them to EA forums sharing secular humanist assumptions. Before, they'd have dismissed this as (e.g.) some kind of pathetic atheist attempt at constructing a morality in the absence of God. But now they have a shared assumption, a point of commonality. That'll lead to cognitive dissonance, but only in the long run -- timescales you can't work on unless you're very good friends with this person.

That cognitive dissonance won't always resolve against Christianity, but sometimes it will.

Comment author: Luke_A_Somers 29 July 2014 09:19:22PM *  0 points [-]

1) Countries are really big. There are multiple layers of sub-community, providing for much more diversity even in a country that isn't about diversity. LW is multiple orders of magnitude smaller than Britain even with lurkers counted, and if we only count the regulars, the same can be said of magical Britain.

2) Countries don't have a specific purpose. Websites often do (including this one, used in the example). On a website, simply going off-topic badly can be a bannable offense (granted, not here). A country trying to do that is farcical.

3) The example given above, that I was responding to, was about someone who was let in to LW, did some bad things, and was banned. This is the equivalent of exile. It was targeted and in response to an existing wrong. It was not done proactively for a broad category of people who had not done anything wrong.

4) Speaking of those people not doing anything wrong, "don't, won't, or can't accept [the community's norms]" might be a legitimate reason, but it was not the criterion applied in the example, even approximately.

Comment author: PeerGynt 29 July 2014 09:15:52PM *  0 points [-]

You don't succeed in avoiding getting mindkilled yourself. You switch to real life for no reason.

Discussing the issue in terms of real life does not itself imply that I've been mindkilled (though it may increase the chance that the discussion ends up being subject to mindkill). If you think I have been mindkilled, please show me a specific instance where I used arguments as soldiers, or where I failed to update in response to a properly made argument.

General ethical considerations suggest that you only inflict pain on other humans if they consent.

That is a totally acceptable ethical view that is fully consistent with my parable. At no stage did I assert "Since we only care about Martians, it is acceptable for them to do anything they want to the Earthlings". Instead, I invited you to have discussion about what actions are ethical and which actions are not ethical. In such a discussion, one of the possible sides you can take is that the Martians should never tickle anyone without consent.

However, the real-world implication of this assertion is that no man should attempt to interact with women unless they are certain that they are sufficiently high status to avoid seeming creepy.

(Note that I probably shouldn't have used "stinging pain" as an analogy for creepiness and social awkwardness. This was an overcompensation in order to avoid seeming biased in favor of men).

Comment author: Lumifer 29 July 2014 09:12:23PM 1 point [-]

That's a price you need to pay if you want to get something out of mindkilled people, which incidentally tends to be the first step in making them less mindkilled.

Maybe it's the price you need to pay, but I don't see how being able to get something out of mindkilled people is the first step in making them less mindkilled. You got what you wanted and paid for it by reinforcing their beliefs -- why would they become more likely to change them?

some kind of radical honesty policy

I am not going for radical honesty. What I'm suspicious of is using arguments which you yourself believe are bullshit and at the same time pretending to be a bona fide member of a tribe to which you don't belong.

And, by the way, there seems to be a difference between Jesus and SJ here. When talking to a Christian I can be "radically honest" and say something along the lines of "I myself am not a Christian but you are and don't you recall how Jesus said that ...". But that doesn't work with SJWs -- if I start by saying "I myself don't believe in white male oppression but you do and therefore you should conclude that...", I will be immediately crucified for the first part and no one will pay any attention to the second.

Comment author: Nornagest 29 July 2014 08:55:44PM *  0 points [-]

If you're using the Christian argument for charity to talk effective altruism, you are implicitly accepting the authority of Jesus.

Yes, you are. That's a price you need to pay if you want to get something out of mindkilled people, which incidentally tends to be the first step in introducing outside ideas and thereby making them less mindkilled. Reject it in favor of some kind of radical honesty policy, and unless you're very lucky and very charismatic you'll find yourself with no allies and few friends. But hey, you'll have the moral high ground! I hear that and $1.50 will get you a cup of coffee.

(My argument in the ancestor wasn't really about fighting the white male patriarchy, though; the rhetoric about that is just gingerbread, like appending "peace be upon him" to the name of the Prophet. It's about the importance of subjective experience and a more general contrarianism -- which are also SJ themes, just less obvious ones.)

Comment author: ChristianKl 29 July 2014 08:54:52PM 0 points [-]

The point I was trying to make is that, while I see females as agents in real life, in this analogy I am discussing the ethics of a choice that is only made by men.

You don't succeed in avoiding getting mindkilled yourself. You switch to real life for no reason.

For any of those things, if you give me a specific reason why it is relevant to the choice made by the Green Martians, then it certainly should have been part of the analogy.

General ethical considerations suggest that you only inflict pain on other humans if they consent. A doctor will only operate on a patient if the patient consents, even if the doctor believes that a decision not to consent is bad for the patient given the patient's stated preferences. Respecting that decision means respecting the agency of the patient.

That's even true for decisions such as whether to get vaccinated, where herd immunity is a concern. No single person is forced to feel pain by getting vaccinated for the good of the group.

Comment author: 75th 29 July 2014 08:50:30PM *  1 point [-]

/u/solipsist, in another comment on this thread:

Do not try to obtain Sstone yoursself. I forbid.

This was said by Quirrell in Parseltongue. If you can only tell the truth in Parseltongue, then Quirrell was really forbidding Harry from obtaining the stone himself.

If Quirrell can't lie in Parseltongue (and not just Harry, since Harry's speaking as a standard Parselmouth but Quirrell is speaking as a sentient snake), and if that prohibition enforces the sincerity of imperative commands and not just declarative statements, then clearly what Quirrell is saying is that Harry should try to make his own Philosopher's Stone.

"It's not a secret." Hermione flipped the page, showing Harry the diagrams. "The instructions are right on the next page. It's just so difficult that only Nicholas Flamel's done it."


"Well, it can't work," Hermione said. She'd flown across the library to look up the only book on alchemy that wasn't in the Restricted Section. And then - she remembered the crushing letdown, all the sudden hope dissipating like mist. "Because all alchemical circles have to be drawn 'to the fineness of a child's hair', it isn't any finer for some alchemies than others. And wizards have Omnioculars, and I haven't heard of any spells where you use Omnioculars to magnify things and do them exactly.

So the first thing Hermione mentions as a limitation of doing alchemy is the insane precision of the circle you have to draw. But what if there were already an acceptable, permanent alchemy setup just lying around somewhere where Harry could get to it?

The three of them stood within the Headmaster's private Transfiguration workroom, where the shining phoenix of Dumbledore's Patronus had told her to bring Harry, moments after her own Patronus had reached him. Light shone down through the skylights and illuminated the great seven-pointed alchemical diagram drawn in the center of the circular room, showing it to be a little dusty, which saddened Minerva. Transfiguration research was one of Dumbledore's great enjoyments, and she'd known how pressed for time he'd been lately, but not that he was this pressed.

Comment author: Lumifer 29 July 2014 08:44:49PM 1 point [-]

The point isn't to mimic their rhetoric, it's to talk their language

There is a price: to talk in their language is to accept their framework. If you are making an argument in terms of fighting the oppression of white male patriarchy, you implicitly agree that the white male patriarchy is in the business of oppression and needs to be fought. If you're using the Christian argument for charity to talk effective altruism, you are implicitly accepting the authority of Jesus.

Comment author: Nornagest 29 July 2014 08:25:32PM *  0 points [-]

True, but you don't do that by mimicking their rhetoric.

The point isn't to blindly mimic their rhetoric, it's to talk their language: not just the soundbites, but the motivations under them. To use your example, talking about letting Jesus into your heart isn't going to convince anyone to donate a large chunk of their salary to GiveWell's top charities. There's a Christian argument for charity already, though, and talking effective altruism in those terms might well convince someone that accepts it to donate to real charity rather than some godawful sad puppies fund; or to support or create Christian charities that use EA methodology, which given comparative advantage might be even better. But you're not going to get there without understanding what makes Christian charity tick, and it's not the simple utilitarian arguments that we're used to in an EA context.

Comment author: Lumifer 29 July 2014 08:12:54PM 2 points [-]

And we should probably start by defining "genetically modified".

Under some definitions almost all commercial crops and farm animals are genetically modified. There are no wild cows and Golden Delicious apples don't grow on trees in forests.

Comment author: Squark 29 July 2014 08:07:03PM 0 points [-]

This is not how bounded utility functions work. The fact it's bounded doesn't mean it reaches a perfect "plateau" at some point. It can approach its upper bound asymptotically. For example, a bounded paperclip maximizer can use the utility function 1 - exp(-N / N0) where N is the "number of paperclips in the universe" and N0 is a constant.
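For concreteness, a quick sketch of that utility function (N0 = 10^6 is just an illustrative constant of my own choosing):

    import math

    def paperclip_utility(n_paperclips, n0=1e6):
        """Bounded utility 1 - exp(-N/N0): strictly increasing in N,
        but never exceeding its upper bound of 1."""
        return 1.0 - math.exp(-n_paperclips / n0)

    # More paperclips are always better, but the marginal gain shrinks;
    # the value approaches 1 asymptotically rather than hitting a plateau.
    for n in [0, 1e5, 1e6, 3e6, 1e7]:
        print(n, paperclip_utility(n))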

Comment author: Squark 29 July 2014 08:03:54PM 0 points [-]

But we're not bargaining. This works even if we never meet.

Yeah, which would make it acausal trade. It's still bargaining in the game theoretic sense. The agents have a "sufficiently advanced" decision theory to allow them to reach a Pareto optimal outcome (e.g. Nash bargaining solution) rather than e.g. Nash equilibrium even acausally. It has nothing to do with "respecting agency".

Comment author: Squark 29 July 2014 07:55:30PM 0 points [-]

I don't understand what you mean by "utility bound". A bounded utility function is just a function which takes values in a finite interval.

Comment author: Squark 29 July 2014 07:53:03PM 0 points [-]

I do think we are discussing the choice to buy "natural food" and prefer it over genetically modified food.

In this case, an in-depth discussion would require analyzing the potential health hazards given what we know about the genetic modifications involved and the regulations in place, versus the cost difference and some way to compare the two.

Comment author: Lumifer 29 July 2014 07:45:52PM 0 points [-]

Because we occasionally might want to convince them of things, and we can't do that without understanding what they want to see in an argument.

So, um, if you really let Jesus into your heart and accept Him as your personal savior you will see that He wants you to donate 50% of your salary to GiveWell's top charities..?

it behooves us to get better at modeling people that don't share our epistemology or our (at least, my) contempt for politics.

True, but you don't do that by mimicking their rhetoric.

Comment author: Creutzer 29 July 2014 07:41:13PM *  1 point [-]

A: If John comes to the party, Mary will be happy. (So there is a chance that Mary will be happy.)

B: But John isn't going to the party. (So your argument is invalid.)

Comment author: Jiro 29 July 2014 07:35:36PM *  1 point [-]

One supporting argument for this is that all natural or GM-free products tend to be more expensive or less satisfying than others, demonstrating less optimisation pressure.

Either that, or it's just plain old market segmentation and price discrimination which extracts more money from the more wealthy people who would buy such foods.

Comment author: Nornagest 29 July 2014 07:31:53PM *  1 point [-]

However what most of them are is mindkilled. They won't update so why bother?

Because we occasionally might want to convince them of things, and we can't do that without understanding what they want to see in an argument. Or, more generally, because it behooves us to get better at modeling people that don't share our epistemology or our (at least, my) contempt for politics.

Comment author: asd 29 July 2014 07:31:42PM *  0 points [-]

I think this has the same problem as any kind of self-conditioning. I watched the video, and the social community and gaming aspects do seem motivating, but I'm not sure about the punishment, because you can always take the wristband off. Maybe there's commitment and social pressure not to take the wristband off, but ultimately you yourself are responsible for keeping the wristband on your wrist, and this is basically self-conditioning. Yvain made a good post about it.

Suppose you have a big box of candy in the fridge. If you haven’t eaten it all already, that suggests your desire for candy isn’t even enough to reinforce the action of going to the fridge, getting a candy bar, and eating it, let alone the much more complicated task of doing homework. Yes, maybe there are good reasons why you don’t eat the candy – for example, you’re afraid of getting fat. But these issues don’t go away when you use the candy as a reward for homework completion. However little you want the candy bar you were barely even willing to take out of the fridge, that’s how much it’s motivating your homework.

If the zap had any kind of motivating effect, wouldn't that effect first be directed towards taking the wristband off your wrist, rather than towards a much more distant and complex sequence of actions like going to the gym? I don't think a small zap on its own could motivate me to do even something simple, like leaving the computer. Also, I agree with Yvain that rewards and punishments only seem to have a real effect when they happen unpredictably.

Comment author: Lumifer 29 July 2014 07:29:40PM 0 points [-]

That line was somewhat tongue-in-cheek.

Of course, but only somewhat :-)

these people aren't stupid

"These people" are not homogenous and there are a lot of idiots among them. However what most of them are is mindkilled. They won't update so why bother?

Comment author: solipsist 29 July 2014 07:27:04PM 0 points [-]

I hope you are right.

Comment author: Jiro 29 July 2014 07:26:52PM 0 points [-]

Let's assume that the odds you assign of the person telling the truth is greater than 1/3^^^^3. One thing that is clear is that if you faced that decision 3^^^^3 times, each decision independent from the others... then you should pay each time. When you aggregate independent decisions, it narrows your total variance, forcing you closer to an expected utility maximiser (see this post).

The odds I assign to the person telling the truth are themselves uncertain. I can't assign odds accurately enough to know whether they're 1/3^^^^3, or a millionth of that, or a billion times that.

Now, one typical reply to "the odds I assign are themselves uncertain" is "well, you can still compute an overall probability from that--if you think that the odds are X with probability P1 and the odds are Y with probability 1-P1, you should treat the whole thing as having odds of X * P1 + Y * (1-P1)".

But that reply fails in this situation. If you face 3^^^^3 such muggings, then whatever the odds actually turn out to be the first time, they are likely to be the same the second time. In other words, if you're uncertain about exactly what the odds are, the decisions aren't independent, and aggregating the decisions doesn't reduce the variance, so the above is correct only in a trivial sense.
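A quick simulation of this point, with made-up numbers: when the unknown odds are shared across all the muggings, aggregating them doesn't concentrate the outcome the way genuinely independent decisions would.

    import random

    random.seed(0)
    N_MUGGINGS = 100_000
    POSSIBLE_ODDS = [1e-6, 1e-2]   # hypothetical values the true odds might take

    def fraction_honest(shared_odds=None):
        """Fraction of muggings in which the mugger was actually honest."""
        honest = 0
        for _ in range(N_MUGGINGS):
            # Correlated case: one unknown value of the odds applies every time.
            # Independent case: the odds are effectively re-drawn per mugging.
            odds = shared_odds if shared_odds is not None else random.choice(POSSIBLE_ODDS)
            honest += random.random() < odds
        return honest / N_MUGGINGS

    # Twenty "worlds", each with its own unknown-but-fixed odds: the results land
    # either near zero or near 0.01, so the spread across worlds stays huge.
    print([fraction_honest(random.choice(POSSIBLE_ODDS)) for _ in range(20)])
    # Genuinely independent decisions: the result concentrates near the mean, ~0.005.
    print(fraction_honest())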

Furthermore, the problem with most real-life situations that even vaguely resemble a Pascal's mugging is that the probability that the guy is telling the truth depends on the size of the claimed genocide, in ways that have nothing to do with Kolmogorov complexity or anything like that. Precisely because a higher value is more likely to make a naive logician willing to be mugged, a higher value is better evidence for fraud.

Comment author: Nornagest 29 July 2014 07:19:39PM *  1 point [-]

That line was somewhat tongue-in-cheek. I wouldn't go that far over the top in a real discussion, although I might throw in a bit of anti-*ist rhetoric as an expected shibboleth.

That being said, these people aren't stupid. They don't generally have the same priorities or epistemology that we do, and they're very political, but that's true of a lot of people outside the gates of our incestuous little nerd-ghetto. Winning, in the real world, implies dealing with these people, and that's likely to go a lot better if we understand them.

Does that mean we should go out and pick fights with mainstream social justice advocates? No, of course not. But putting ourselves in their shoes every now and then can't hurt.

Comment author: solipsist 29 July 2014 07:14:21PM 0 points [-]

How about this stipulation: if the name Baba Yaga does not appear in the book again, and Eliezer Yudkowsky does not Word of God that Mr. Hat & Cloak is Baba Yaga, you win. I'm expecting Baba Yaga to be important, and if she's not mentioned again or only mentioned offhandedly I will likely concede.

We are on the same page about payments.

Comment author: Lumifer 29 July 2014 07:00:41PM 0 points [-]

I don't think reinforcing stupidity is a good idea.

“Never argue with stupid people, they will drag you down to their level and then beat you with experience.” ― Mark Twain

This is that level:

helps people escape middle-class patriarchal white Western consumer culture's relentless focus on immediate short-term gratification

Comment author: lmm 29 July 2014 06:57:27PM 0 points [-]

I think the most likely scenario is that we'll hear no more about either of them. So I want to win that case; if the two are unrelated then I doubt we'll hear anything explicit to that effect.

Rather than worry about what's "cheap", let's just say the loser pays $50 gross; any payment fees or the like can come out of the winnings. And I'll stipulate that anything bitcoiny qualifies as sketchy.

And yeah, $50 is an amount that I can comfortably toss for online entertainment without checking my bank balance.

Comment author: Toggle 29 July 2014 06:57:01PM *  0 points [-]

Well, I definitely agree that we should make non-super intelligent AIs for study, and also for a great many other reasons. But it's perhaps less clear what 'too stupid to foom' actually means for an AGI. There was a moment when a hominid brain crossed an invisible line and civilization became possible; but the mutation precipitating that change may not have obviously been a major event from the perspective of an outside observer. It may just have looked like another in a sequence of iterative steps. Is the foom line in about the same place as the agriculture line? Is it simpler? Harder?

On the other hand, it's possible to imagine an experimental AGI with values like "Fulfill [utility function X] in the strictly defined spatial domain of Neptune, using only materials that were contained in the gravity well of Neptune in the year 2000, including the construction of your own brain, and otherwise avoid >epsilon changes to probable outcomes for the universe outside the domain of Neptune." Then fill in whatever utility function you'd like to test; you could try this with each new iteration of AGI methodology, once you are actionably worried about the possibility of fooming.

Comment author: Lumifer 29 July 2014 06:54:08PM 0 points [-]

is there evidence that conventional foods (or foods that are not organic) have adverse effects beyond possible nutritional differences, when compared to organic foods, and genetically modified vs. not modified?

Not to my knowledge.

Comment author: IlyaShpitser 29 July 2014 06:48:11PM *  0 points [-]

"Statisticians" is a pretty large set.


I still don't understand your original "because." I am talking about modeling the truth, not modeling what humans do.


[ edit: did not downvote. ]

Comment author: DanielLC 29 July 2014 06:39:30PM 0 points [-]

Do you take into account the possibility that you miscounted, or are hallucinating, or any of the other events that are far more likely explanations than that it comes up heads with probability 49% and it came up heads that often just by chance?

Comment author: John_D 29 July 2014 06:37:40PM *  0 points [-]

Misnomer noted. So, is there evidence that conventional foods (or foods that are not organic) have adverse effects beyond possible nutritional differences, when compared to organic foods, and genetically modified vs. not modified? (and by not modified I mean not genetically modified, if the context preceding the words didn't make those words crystal clear) I am of course open to the possibility, but I would like to see harder evidence before paying a premium.

Comment author: Nornagest 29 July 2014 06:32:50PM *  2 points [-]

Don't think that'd work. Traditional practices and attitudes are a sacred category in this sort of discourse, but that doesn't mean they're unassailable -- it just means that any sufficiently inconvenient ones get dismissed as outliers or distortions or fabrications rather than being attacked directly. It helps, of course, that in this case they'd actually be fabrications.

Focusing on feelings is the right way to go, though. This probably needs more refinement, but I think you should do something along the lines of saying that exercise makes you feel happier and more capable (which happens to be true, at least for me), and that bringing tangible consequences into the picture helps people escape middle-class patriarchal white Western consumer culture's relentless focus on immediate short-term gratification (true from a certain point of view, although not a framing I'd normally use). After that you can talk about how traditional cultures are less sedentary, but don't make membership claims and do not mention outcomes. You're not torturing yourself to meet racist, sexist expectations of health and fitness; you're meeting spiritual, mental, and incidentally physical needs that the establishment's conditioned you to neglect. The shock is a reminder of what they've stolen from you.

You'll probably still get accusations of internalized kyriarchy that way, but it ought to at least be controversial, and it won't get you accused of mansplaining.

Comment author: IlyaShpitser 29 July 2014 06:32:35PM 0 points [-]

It's a first contact situation. You need to establish basic things first, e.g. "do you recognize this is a sequence of primes," "is there such a thing as 'good' and 'bad'," "how do you treat your enemies," etc.

Comment author: Manfred 29 July 2014 06:32:30PM 0 points [-]

I have us being different by a factor of 10^40, but yeah, that's a bit surprising. Maybe we're far enough out in the tails that the normal approximation is breaking down?

Comment author: drethelin 29 July 2014 06:24:31PM 0 points [-]

If you're making an AI for study, it shouldn't be super-intelligent at all; ideally it should be dumber than you. I can imagine an AGI that can usefully perform some tasks but is too stupid to self-modify into fooming if constrained. You can let it be in charge of opening and closing doors!

Comment author: Lumifer 29 July 2014 06:14:35PM *  0 points [-]

Modified food may or may not have adverse effects

"Organic" and "non-modified" are very different things.

"Organic" means that the food producer has received a particular kind of certification for his production. By the way, in this context the opposite of "organic" is "conventional", not "inorganic".

"Non-modified" has a less well-defined meaning, but generally it means food as it comes from the farm, not from a factory.

There is lots of "organic modified" and "conventional non-modified" food.

Comment author: John_D 29 July 2014 06:03:25PM *  0 points [-]

Are we trying to find out if organic foods are more nutritious, or if organic foods offer health benefits beyond nutrition? (Or to reverse that, do inorganic foods have adverse effects beyond nutrition?) Remember I said, "Modified food may or may not have adverse effects beyond different nutrient contents (which so far is debatable)." The authors of your 2nd link agree that the evidence on the benefits of organic foods is scant at the moment.

Comment author: Lumifer 29 July 2014 05:40:20PM 1 point [-]

Here is one meta-study. Here is another one.

Comment author: gwern 29 July 2014 05:40:18PM 0 points [-]

1968? Seriously?

Comment author: gwern 29 July 2014 05:31:18PM 2 points [-]

I didn't really intend to discuss this any further (because it's not like I care in the least about Twilight or 50 Shades of Gray qua Twilight/50SoG), but a random link on Reddit turned out to be relevant and give some more of the backstory, which if accurate explains a lot: http://www.reddit.com/r/TwoXChromosomes/comments/2byz2l/many_women_do_not_agree_with_me_on_this_subject/cjaqvmi

...FSOG got a shitload of karma. Ask me how! Well, the short of it: Erika [Leonard James / E.L. James] is a marketing professional. The long of it:

  • Erika made reposts of already-proven-popular content
  • Erika posted short updates to the story very frequently, keeping it at the top of the story search list
  • Since people could give 'karma' (reviews) for every single chapter/update, the more chapters a story had, the more karma it had

FSOG had 80 [edit: was actually 110] chapters. That means that a lot of people actually reviewed that fucking thing EIGHTY times. So even if she had only 100 super loyal readers, that's 8,000 [edit: actually 11,000] reviews (think upvotes). People see a story with 8,000 reviews and want to click it to see what all the fuss is about. I think it had something like 20,000 reviews when it was pulled down for publishing.

Hence, FSOG went viral.

To put into perspective the social power of the Twilight fanfic community, consider this:

There was a fandom-run charity auction to benefit pediatric cancer research. These auctions, held annually, lasted 1 week. That's it. Just 7 days. Mostly authors would auction off stories. So if you donated in my name, I'd write you 10,000 words of porn in my Tattward universe, or something new, etc. That's how it worked.

  • The 2009 auction raised $80,000.
  • The 2010 auction raised $140,000.
  • The 2011 auction raised $20,000.

This charity has raised more than $230,000 in 3 weeks. http://www.alexslemonade.org/mypage/19842

Erika participated in the 2010 auction. A story from her fanfic (FSOG) raised $30,000 of that, all by itself. In some chats made public by another author (that's some quality drama: http://gentleblaze.livejournal.com/), Erika freely admits to not wanting to participate in the charity at all, but felt pressured to do so by her readers.

...(Edit: Another fun fact! Erika's going to publish that story she wrote for the charity auction, for profit.)

But now, with the ability to connect the social power of the community with a monetary sum of her story's worth, Erika shortly thereafter decided to publish.

She then leveraged the community's sense of nostalgia and loyalty, urging everyone to buy the book and give it good ratings, so as to see 'one of their own succeed in the publishing world'. There were multiple campaigns from her friends (tens of thousands of what she only saw herself as 'fans') to blast her Amazon page and send the book up the ranks. It of course worked.

Once a (genre fiction) book gets to #1 on Amazon's bestseller list, you're done. Mission accomplished. Book and movie deals to follow. Enjoy your money.

...There's also a great reason why the 2011 charity auction made so much less money. Because after everyone saw Erika publish FSOG and make bank, they all wanted to do the same. Not really many popular stories left to leverage social currency--it's all going into their pockets. Most of those really popular fics (including the two mentioned here [The Submissive and Clipped Wings]) have since been published and done quite well.

...Seriously, Twilight fandom got really crazy big for a few years there. It was not totally uncommon to get multi-million clicks on a semi-popular story. It's weird looking back on it and calling it "Twilight fandom" because it was really more like "Romance Novel fandom"

...Actually, the fandom's pretty much dead now compared to how it used to be. After FSOG's success and everyone started publishing their own fanfic, stories would only stay online for as long as it took the author to complete them, then they'd take them away (sometimes they'd even post half and ask people to buy the book to get the ending), so people were either wary of reading new stories, or just didn't have any old ones around to read. Then you also get authors who come to the fandom and post their original novels, with the names changed to Edward and Bella, get a bunch of reviews and recognition, then publish it for pay.

Also, Twilight fandom now has multiple micro-publishers. Basically sites that used to archive fanfic now also publish 'books'. What they do is keep an eye on what stories get popular on their archives, then go to the author and offer to publish it for them. They slap a shitty cover on it, do minimal editing (change the identifiable Twilight names) and then take a significant portion of the profits.

The whole community is one giant scam these days.

Comment author: John_D 29 July 2014 05:27:47PM *  0 points [-]

A place to start is to feed two groups of animals, one eating organic food and the other eating inorganic food with identical or near-identical nutrient composition, and see how they respond over time. Linking dietary effects between animal and human models has been done in the past, so it isn't too far-fetched. It won't be perfect, since the animals won't be humans, but it is certainly better than the paucity of data available, and than assuming that organic = good on scarce evidence (see below).

http://ajcn.nutrition.org/content/92/1/203.short

Comment author: niceguyanon 29 July 2014 05:26:48PM 1 point [-]

Check the comments near the bottom. Not the pet pharmacy link.

Comment author: Lumifer 29 July 2014 05:25:20PM *  -1 points [-]

I imagine it could be interfaced to an app that could give shocks under all manner of chosen conditions.

Classic bash.org :-D

#4281 +(27833)- [X]
<Zybl0re> get up
<Zybl0re> get on up
<Zybl0re> get up
<Zybl0re> get on up
<phxl|paper> and DANCE
* nmp3bot dances :D-<
* nmp3bot dances :D|-<
* nmp3bot dances :D/-<
<[SA]HatfulOfHollow> i'm going to become rich and famous after i invent a device
that allows you to stab people in the face over the internet
Comment author: Benito 29 July 2014 05:23:06PM *  0 points [-]

Oops. Did I mess something up?

Comment author: Benito 29 July 2014 05:22:14PM 1 point [-]

Yes, you're right. I didn't like the change, that's all, and was hoping for a majority to back me. But if anyone wants to do that, that would certainly be a good idea.

Comment author: Stuart_Armstrong 29 July 2014 05:20:12PM 0 points [-]

Goldberg, Lewis R. "Simple models or simple processes? Some research on clinical judgments." American Psychologist 23.7 (1968): 483.

Comment author: Toggle 29 July 2014 05:05:50PM 0 points [-]

I tried to think of the most harmless thing. Something I loved from my childhood. Something that could never ever possibly destroy us.

A thought occurred to me a while back. Call it the "Ghostbusters" approach to the existential risk of AI research. The basic idea is that rather than trying to make the best FAI on the first try, you hedge your bets. Work to make an AI that is a) unlikely to disrupt human civilization in a permanent way at all, and b) available for study.

Part of the stress of the 'one big AI' interpretation of the intelligence explosion is the sense that we'd better get it right the first time. But on the other hand, surely the space of all nonthreatening superintelligences is larger than the space of all helpful ones, and a comparatively easier target to hit on our first shot. You're still taking a gamble. But minimizing this risk seems much easier when you are not simultaneously trying to change human experience in positive ways. And having performed the action once, there would be a wealth of new information to inform later choices.

So I'm trying to decide if this is obviously true or obviously false: p(being destroyed by a primary FAI attempt) > p(being destroyed by a "Ghostbusters" attempt) * p(being destroyed by a subsequent more informed FAI attempt)

Comment author: Coscott 29 July 2014 05:04:23PM 0 points [-]

My general sense is that this is a fairly distinctive quality of social justice communities, so your feeling of alienation may have as much to do with the social justice community as it does with the LW memeplex.

I am very curious to what extent this is true, and would appreciate any evidence people have in either direction.

What is the cause of this? Is it just random fluctuations in culture that reinforce themselves? Perhaps I do not notice these problems in non-social-justice people just because they do not have an issue they care enough about to argue about in this way. Perhaps it is just availability bias, as I spend too much time reading things social justice people say. Perhaps it is a function of the fact that the memes they are discussing include the idea that they are being oppressed, which makes them more fearful of outsiders.

Comment author: PeerGynt 29 July 2014 05:03:28PM 0 points [-]

Sure. The point I was trying to make is that, while I see females as agents in real life, in this analogy I am discussing the ethics of a choice that is only made by men. The analogy therefore did not require a fully specified model of females as agents.

There are many true things in the world that I chose not to specify in the analogy. For any of those things, if you give me a specific reason why it is relevant to the choice made by the Green Martians, then it certainly should have been part of the analogy. However, there is no law of nature that says "females should always be fully specified as agents in any analogy".

Comment author: othercriteria 29 July 2014 05:02:05PM *  0 points [-]

What I was saying was sort of vague, so I'm going to formalize here.

Data is coming from some random process X(θ,ω), where θ parameterizes the process and ω captures all the randomness. Let's suppose that for any particular θ, living in the set Θ of parameters where the model is well-defined, it's easy to sample from X(θ,ω). We don't put any particular structure (in particular, cardinality assumptions) on Θ. Since we're being frequentists here, nature's parameter θ' is fixed and unknown. We only get to work with the realization of the random process that actually happens, X' = X(θ',ω').

We have some sort of analysis t(⋅) that returns a scalar; applying it to the random data gives us the random variables t(X(θ,ω)), which is still parameterized by θ and still easy to sample from. We pick some null hypothesis Θ0 ⊂ Θ, usually for scientific or convenience reasons.

We want some measure of how weird/surprising the value t(X') is if θ' were actually in Θ0. One way to do this, if we have a simple null hypothesis Θ0 = { θ0 }, is to calculate the p-value p(X') = P(t(X(θ0,ω)) ≥ t(X')). This can clearly be approximated using samples from t(X(θ0,ω)).

For composite null hypotheses, I guessed that using p(X') = sup{θ0 ∈ Θ0} P(t(X(θ0,ω)) ≥ t(X')) would work. Paraphrasing jsteinhardt, if Θ0 = { θ01, ..., θ0n }, you could approximate p(X') using samples from t(X(θ01,ω)), ..., t(X(θ0n,ω)), but it's not clear what to do when Θ0 has infinite cardinality. I see two ways forward. One is approximating p(X') by doing the above computation over a finite subset of points in Θ0, chosen by gridding or at random. This should give an approximate lower bound on the p-value, since it might miss θ where the observed data look unexceptional. If the approximate p-value leads you to fail to reject the null, you can believe it; if it leads you to reject the null, you might be less sure and might want to continue trying more points in Θ0. Maybe this is what jsteinhardt means by saying it "doesn't terminate"? The other way forward might be to use features of t and Θ0, which we do have some control over, to simplify the expression sup{θ0 ∈ Θ0} P(t(X(θ0,ω)) ≥ c). Say, if t(X(θ,ω)) is convex in θ for any ω and Θ0 is a convex bounded polytope living in some Euclidean space, then the supremum only depends on how P(t(X(θ0,ω)) ≥ c) behaves at a finite number of points.

So yeah, things are far more complicated than I claimed and realize now working through it. But you can do sensible things even with a composite null.
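In case a concrete version helps, here is a rough sketch of the Monte Carlo p-value for a simple null and the gridded sup-approximation for a composite null. The Gaussian-mean setup, the statistic, and all the constants are my own illustration, not anything from the comments above:

    import numpy as np

    rng = np.random.default_rng(0)

    def t(x):
        """Test statistic: absolute value of the sample mean."""
        return abs(np.mean(x))

    def p_value_simple(x_obs, theta0, n_sims=10_000):
        """Estimate P(t(X(theta0, w)) >= t(x_obs)) by sampling from the null."""
        sims = np.array([t(rng.normal(theta0, 1.0, size=len(x_obs)))
                         for _ in range(n_sims)])
        return float(np.mean(sims >= t(x_obs)))

    def p_value_composite(x_obs, theta_grid, n_sims=2_000):
        """Approximate sup over Theta0 with a finite grid of theta values.
        As noted above, this only lower-bounds the true composite p-value."""
        return max(p_value_simple(x_obs, th, n_sims) for th in theta_grid)

    x_obs = rng.normal(0.3, 1.0, size=50)                       # "nature's" theta' = 0.3
    print(p_value_simple(x_obs, 0.0))                           # simple null Theta0 = {0}
    print(p_value_composite(x_obs, np.linspace(-0.1, 0.1, 5)))  # composite null |theta0| <= 0.1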

Comment author: ChristianKl 29 July 2014 04:56:34PM 1 point [-]

I definitely see the humans as agents, whose preferences are morally relevant.

Agents make decisions. The moment you ignore decision making and think only in terms of preferences, agency is gone.

Comment author: polymathwannabe 29 July 2014 04:55:58PM 0 points [-]

I'd be glad to help.

Comment author: ChristianKl 29 July 2014 04:54:51PM *  0 points [-]

In this society, it is generally accepted that tickling is not something that requires consent.

How is it possible to not know whether or not tickling is moral but know that it doesn't require consent? That doesn't make any sense.

The whole idea of consent is that it's for the space between those actions where you know you can do them to anyone and those actions where you know you aren't allowed to do them to anyone.

Comment author: Stuart_Armstrong 29 July 2014 04:49:24PM *  0 points [-]

Controlling doesn't get rid of all the confounders (easiest one: people who eat organic care more about what they eat, almost by definition - how do you control for that?), and long term studies are very hard to do.

In response to Optimizing Sleep
Comment author: John_D 29 July 2014 04:48:30PM 0 points [-]

Some other ways to optimize sleep:

  • Metformin helps improve sleep, and a theorized mechanism is through improved glucose metabolism. This might also explain why exercise, which has a similar effect on glucose metabolism, improves sleep as well, and why deteriorating health worsens it.
  • Some blood pressure lowering drugs worsen sleep, but the possible mechanism is through melatonin suppression.

Source:

http://onlinelibrary.wiley.com/doi/10.1111/dme.12362/full

Comment author: Lumifer 29 July 2014 04:45:47PM 1 point [-]

because I don't see the point in adding complications that do not have relevance to the discussion

Your parable is flawed at the core because you made a basic category mistake. Flirting is not an action, not something one person does to another. It is an interaction, something two people do together.

Deciding that one person in that interaction controls the encounter and does things, while the other is just a passive receptacle to the extent that not even her consent is required, never mind active participation, is not a useful framework for looking at how men and women interact.

Comment author: mare-of-night 29 July 2014 04:43:50PM 0 points [-]

Thanks. I was getting them confused with Middle Earth trolls.

Comment author: tut 29 July 2014 04:39:35PM 0 points [-]

Didn't we have a special poll thread so that the RSS feed for the open thread would work?

Comment author: niceguyanon 29 July 2014 04:35:45PM 0 points [-]

I'll admit that the basis for my statement is the seeming lack of negative user reports, or of studies reporting high rates of negative reactions regarding safety, rather than experiments specifically demonstrating safety.

Comment author: John_Maxwell_IV 29 July 2014 04:29:45PM 0 points [-]

Maybe you bound your utility function so you treat all universes that produce at least 100 billion DALYs/year as identical. But then you learn that the galaxy can support way more than 100 billion humans. Left with a bounded utility function, you're unable to make good decisions at a civilizational scale.

Comment author: Velorien 29 July 2014 04:25:41PM *  1 point [-]

This being Quirrell, while his reaction may indicate shock, it is also exactly how he would react if he did not have the artefact and/or believed it to be worthless in any case. There isn't enough information there to make any assumptions either way.

Also, in Chapter 90, Quirrell visibly fails to refute Harry's assumption:

"What of the Resurrection Stone of Cadmus Peverell, if it could be obtained for you?"

The boy shook his head. "I don't want an illusion of Hermione drawn from my memories. I want her to be able to live her life -" the boy's voice cracked. "I haven't decided yet on an object-level angle of attack. If I have to brute-force the problem by acquiring enough power and knowledge to just make it happen, I will."

Another pause.

"And to go about that," the man in the corner said, "you will use your favorite tool, science."

Comment author: hairyfigment 29 July 2014 04:23:03PM 0 points [-]

Also, we have what seems like a sufficient explanation for the rock - assuming Albus knew that the Defense Professor mentioned trolls while he was subtly encouraging Harry to learn the Killing Curse.

Comment author: gwern 29 July 2014 04:20:04PM 1 point [-]

nobody in medicine got a nobel prize for the smoking and lung cancer link

Who would they have given it to? When I ask myself for 'the man responsible for showing smoking & lung cancer are linked', nothing comes to mind, but I do remember that the claim had a long history going back to the Nazis among others and was the result of a long succession of correlational studies and animal experiments. Who in particular, who was still living when the connection became undeniable (no posthumous awards), deserves the Nobel for all that?

Comment author: solipsist 29 July 2014 04:09:35PM 0 points [-]

No, I am up for betting. If unforeseen plot developments make the resolution of the bet unclear, I want determining the winner to be casual and non-adversarial. I will not argue technicalities even if they could cause me to win.

What happens to the bet if HPMOR ends without it being revealed who Mr.Hat & Cloak happens to be?

That really depends. If the identity of Mr. Hat & Cloak remains unclear but Harry finds a black hat in Snape's trunk and a black cloak in Dumbledore's office, I would probably lose. If Harry figures out that Nicholas Flamel is really Baba Yaga and she's been hiding in Hogwarts all year, I would probably win. I'd be happy to let a mutually chosen third party arbitrate.

Comment author: gwern 29 July 2014 04:07:32PM 1 point [-]

Best done? Better than, say, decision trees or expert systems or Bayesian belief networks? Citation needed.

Comment author: William_Quixote 29 July 2014 04:06:36PM 1 point [-]

Hmm, I hadn't thought the rock was the stone. That would be a great twist, but I doubt it, because Dumbledore said it was not magical to his knowledge when Harry asked him.

Also, even if it is the stone, I don't think QQ knows this.

Comment author: gwern 29 July 2014 04:03:33PM 1 point [-]

It would take non-trivial circumlocutions to indicate that Harry doesn't know what the English word Horcrux means.

And what circumlocutions are those, exactly? Because in the passage quoted, I see a straightforward explanation of the ritual and a naming in English (with the unexpected result that he knew the English word's equivalent in Parseltongue too).

Comment author: E_Ransom 29 July 2014 04:03:26PM 0 points [-]

It's the "Saving the World - Progress Report" I mentioned elsewhere.

Comment author: gwern 29 July 2014 04:01:52PM 2 points [-]

What happens to the bet if HPMOR ends without it being revealed who Mr.Hat&Cloak happens to be?

I think Yudkowsky is enough of a non-asshole that if that's the case, he'll consent to say whether Baba Yaga had anything to do with Hat & Cloak. Remember, he endorses betting on beliefs.

Comment author: gwern 29 July 2014 03:59:32PM 5 points [-]

Why discuss it? Wouldn't it be better to A/B test which encourages new visitors to click on a link to another page?
