All of drc500free's Comments + Replies

When gauging the strength of a prediction, it's important to view the inside view in the context of the outside view. For example, most medical studies that claim 95% confidence aren't replicable, so one shouldn't take the 95% confidence figures at face value.

This implies that the average prior for a medical study is below 5%. Does he make that point in the book? Obviously you shouldn't use a 95% test when your prior is that low, but I don't think most experimenters actually know why a 95% confidence level is used.

That doesn't lower the pre-study prior for hypotheses, it (in combination with reporting bias) reduces the likelihood ratio a reported study gives you for the reported hypothesis.

Respectfully disagree. The ability to cheaply test hypotheses allows researchers to be less discriminating. They can check a correlation on a whim. Or just check every possible combination of parameters simply because they can. And they do.

That is very different from selecting a hypothesis out of the space of all possible hypotheses because it's an intuitive extension of some ... (read more)

A fundamental problem seems to be that there is a lower prior for any given hypothesis, driven by the increased number of researchers, use of automation, and incentive to go hypothesis-fishing.

Wouldn't a more direct solution be to simply increase the significance threshold required in the field?

2CarlShulman
That doesn't lower the pre-study prior for hypotheses, it (in combination with reporting bias) reduces the likelihood ratio a reported study gives you for the reported hypothesis. Increasing the significance threshold would mean that adequately-powered honest studies would be much more expensive, but those willing to use questionable research practices could instead up the ante and use the QRPs more aggressively. That could actually make the published research literature worse.
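The prior-vs-likelihood-ratio point above can be made concrete with Bayes' rule. This is my own illustrative sketch, not from the thread: the function name and all the numbers (prior, power, the QRP-inflated false-positive rate) are assumptions chosen for illustration.

```python
# Posterior that a hypothesis is true given a "significant" result,
# under an honest false-positive rate vs. one inflated by questionable
# research practices (QRPs). All numbers are illustrative assumptions.

def posterior_given_significant(prior, power, alpha):
    """P(hypothesis true | p < alpha) by Bayes' rule."""
    true_pos = prior * power           # real effect, detected
    false_pos = (1 - prior) * alpha    # no effect, "detected" anyway
    return true_pos / (true_pos + false_pos)

# Low prior (hypothesis fishing), decent power, honest alpha = 0.05:
honest = posterior_given_significant(prior=0.05, power=0.8, alpha=0.05)

# Same study, but QRPs inflate the effective false-positive rate to ~0.3:
qrp = posterior_given_significant(prior=0.05, power=0.8, alpha=0.30)

print(round(honest, 2), round(qrp, 2))  # prints: 0.46 0.12
```

This is the sense in which a reported "95% confidence" result can leave the hypothesis more likely false than true: the low prior and the inflated false-positive rate dominate, even though nothing about the significance threshold itself changed.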

Fair enough. It is definitely a bit of a turn-off to get downvotes with no comments, but every community has their common ways of communicating.

It definitely seems like Main is an announcement section for meetups, and Discussion is where discussions happen.

I'll check out some of Luke's articles!

For me, (1) and (2) are linked. I dabbled on LW, and presented some of my own ideas in the comments sections. None of them piqued anyone's interest, even if they were on pillar topics like FAI. I stopped being interested in LW, because:

  • EY stopped being as active, and no one with his clarity and perspective took his place as an article writer. I didn't see as many interesting ideas to talk about.
  • I wasn't able to engage others in the comment sections. I didn't see anyone I was on the same wavelength with to talk about the ideas I did see.

I don't just co... (read more)

7Andy_McKenzie
I agree that it can be difficult to get a start commenting on LW. The karma system favors regulars, because people skip over comments whose user names they do not recognize and/or that have low votes, and this is a self-reinforcing process. Again I think experimenting, in this case with the way comments are presented, could be beneficial.
5faul_sname
I actually prefer Luke as an article writer. Eliezer is the better writer in terms of clarity and language skills, but Luke is a better researcher and brings up a lot of interesting ideas. I agree that Main isn't very active lately, but Discussion tends to have fairly good discussions.

Humans act within shared social and physical worlds, but tend to treat the latter as more "real" than the former. A danger of anthropomorphizing AI is that we assume that it will have the same perceptions of reality, and that it needs to "escape" into the physical world to optimize its heuristics. This seems odd, since a superintelligent AI that we need to be concerned about would have its roots in social world heuristics.

In trying to avoid anthropomorphizing algorithms, we tend to underestimate how difficult movement and action in p... (read more)

This came off as Meta-Contrarian Intellectual Hipster to me.

Many religions are highly reflective, debating what actions adherents should follow to achieve ethical ideals and reach a moral state of being. Zen Buddhism debates paths to enlightenment for universal understanding and emotional control. Judaism debates ways of giving Tzedakah to best provide immediate relief, encourage self improvement, and minimize shame. Hinduism debates various moral causes and their karmic effects. Examining any of these highly reflective religions would at least address t... (read more)

2prase
How does this all relate to the comment you are replying to?

Some Examples:

Temple Judaism - Moral Development

While emotionless ethical codes tend to be ineffective, morals can and have been engineered. This is done by careful manipulation of the binding layers.

The "Ten Commandments" by itself is a prescriptive set of Ethics. The story of "Moses bringing the Ten Commandments" is a binding mechanism for that set of tenets, including an appeal to emotion (fear of God's wrath, as demonstrated in the story). Additional stories highlight each commandment, binding them with references to positive and ne... (read more)

I think there are two steps to morality engineering, either of which can fail:

  1. Develop an ethical code through deliberate reflection, that is better than existing values.
  2. Bind that code into the active moral code.

You say neither has happened; I disagree on both, but I'll limit this post to the second question on "binding." I use the following definitions - they may not be correct or universal, but they should be internally consistent:

  • Value System: A collection of memes to do with decision-making, which provide better overall utility than in
... (read more)
0drc500free
Some Examples:

Temple Judaism - Moral Development

While emotionless ethical codes tend to be ineffective, morals can and have been engineered. This is done by careful manipulation of the binding layers.

The "Ten Commandments" by itself is a prescriptive set of Ethics. The story of "Moses bringing the Ten Commandments" is a binding mechanism for that set of tenets, including an appeal to emotion (fear of God's wrath, as demonstrated in the story). Additional stories highlight each commandment, binding them with references to positive and negative emotions. The Torah, as a revered source of stories, packages these stories and adds another layer of binding with meta-stories about its own origin. The document is considered a holy and perfect source, with rituals for precisely copying, using, and destroying the physical scrolls. Several covenants between God and Man are detailed in the scrolls and provide emotionally-backed reciprocity. The stories are interwoven with sacrificial acts, including a ritualized and bloody sacrifice of the first-born son with a lamb as a proxy, which hits primal communal and animal emotions.

Outside of the text of the Torah, which contains stories and meta-stories, a set of rites and rituals related to the Torah itself increases emotional impact and exposure. This is a deliberate act of engineering by the Deuteronomist editors, who had to amalgamate stories from multiple cultures (at minimum a nomadic, shepherding culture and an agrarian, crop-based culture) to create the Temple religion.

Rabbinical Judaism - Moral Engineering

The destruction of the Second Temple was disastrous for the Temple-based moral system. Despite early use of writing and a fairly advanced scholarly culture, the reliance on a specific physical location had prevented any real territorial expansion. Significant parts of the moral code were supported by visceral sacrifice at the Temple for both internal consistency and emotional binding. Only two cultural branches seem


He emigrated to Israel in 1948 with a wife, two kids, and no money. He worked as a day laborer, claiming various construction skills to whoever pulled up and asked. One time he claimed he was a plumber in the old country, and spent two days installing an outdoor toilet. He finally saved up enough to buy a small grocery, so that he could run his own business. He walked out back after buying the place to find - the outhouse he had built years before.

He was definitely a badass, but the cancer was pretty far along by the time I knew him and I didn't speak Hebrew.

I think that, for many centuries, the Ashkenazi environment rewarded establishing a rigid social structure that studies and followed strict rules (preventing assimilation), but selected very strongly for individuals that could step outside the status quo at the right time. I can see how that would lead to Nobel prize winners.

Given the time scale involved, it doesn't seem like genetic selection could change more than how well you integrate successful memes. Some anecdotes from my own genealogy about relevant selection pressures:

  • Marriages were usually ar

... (read more)
9Bill_McGrath
Your grandfather sounds like a badass.

Some data points:

  • IQ (ages 7, 14, 20) = ~145-150 S-B
  • SAT (age 16) - 1590 = ~150 S-B
  • iqtest.dk (age 29) = 133 S-B
  • sifter.org/iqtest (age 29) = 139 S-B (159 Euro scale)

I don't use my spatial skills in my daily work the way I used to use them in my daily school work, and both online tests seem to measure only that.

I found the second test much more difficult - there wasn't enough information to derive the exact missing item, so you had to choose the answer that could be explained with the simplest/fewest rules. There were some where I disagreed that the correc... (read more)

That's how I felt. There is such a thing as a personal moral code or system, and we can examine what happens to groups of people who are running various types and mixtures. We can try to determine which moral memes have the best outcomes, and are most likely to spread and be executed closely, and we can try to follow those codes.

Maybe that's pragmatic ethics, but the way morality is used in the survey implies that I'd believe in a single correct way of executing morality at the individual, day-to-day level. It's like asking whether I believe in being a carn... (read more)

Right-o. This can make it very confusing to compare wages between countries.

The actual cost to the employer, assuming they are providing no benefits, is your stated wage plus 7.65% for most income levels. The amount the employee gets is the employer cost, minus 7.65% (down to the stated wage), minus another 7.65% (the employee contribution), minus any local, state, and federal income taxes.

The tax band you are in is based on your adjusted gross income, but everyone gets to knock at least $5800 off for the standard deduction so it's not even your stated wage.
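The arithmetic above can be sketched in a few lines. This is my own illustration of the comment's description, not authoritative tax advice: the rate constant, wage, and income-tax figures are assumptions, and real brackets, deductions, and wage caps vary by year and filing status.

```python
# Rough sketch of the US payroll arithmetic described above
# (no benefits, no caps). All example figures are assumptions.

FICA_RATE = 0.0765  # employer and employee each pay this share

def employer_cost(stated_wage):
    """What the job actually costs the employer: wage plus the
    employer's FICA share on top."""
    return stated_wage * (1 + FICA_RATE)

def employee_take_home(stated_wage, income_tax):
    """Stated wage minus the employee's FICA share, minus local,
    state, and federal income taxes."""
    return stated_wage * (1 - FICA_RATE) - income_tax

wage = 50_000
print(employer_cost(wage))               # prints: 53825.0
print(employee_take_home(wage, 5_000))   # prints: 41175.0
```

So for a hypothetical $50,000 stated wage, the employer pays about $53,825 while the employee keeps $41,175 before deductions, which is the gap between "cost to employer" and "take-home" the comment is pointing at.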

In general, MIT's registration policies are "we'll provide the rope, try not to hang yourself." On the flip side, it's nearly impossible to fail out.

Easy enough that it can't really distinguish 2 SDs from 3 SDs at the top end.

Though it's possible that it's already an SD above the population mean to begin with since it's only college grads. I don't think these researchers are looking for a very precise cutoff.

It is a good start, and it's becoming more critical as normal people have to sift through more and more orphaned factoids each day. When I was going through school, the most important question to ask was supposed to be "Why?" It's becoming more apparent to me that a more useful question to teach students is "How do you know?"

That can be covered in some depth in an IB class, but just being in the habit of asking that from the age of 5 is going to do more good than a structured curriculum once you're 16.

I also don't understand why the social security is counted as a negative percentage, and the retirement is counted as positive. If you subtract social security, you're counting your salary without that retirement contribution. If you add superannuation, you're counting your salary plus a retirement contribution. You can do one or the other, but not both.

There's also the simple fact that if a 9% contribution is mandatory, your stated wage will be 8% less to cover it. Just like your stated wage with social security is 7% less to cover the employer's contribution.

0Strange7
The amount the employer is willing to pay includes the employer's contribution to your mandatory retirement account, yes, but in Australia the amount they're claiming to pay you does not include the 9%, whereas in the US it traditionally does include the 7%.

Believe me, I know that high intelligence can skew a professional's diagnosis. But the underlying disorder is still the same and still treatable with essentially the same methods. You have to shop around a bit anyway to find someone you can work with, and even more so if you are high functioning and cope well.

There's no reason you can't do things traditionally as a baseline, and then decide how to proceed; mania is a terrible place to make a decision from.

I may be reading between the lines too much, but I get the sense that you're not diagnosed by a psychiatrist, or undergoing treatment. If that's the case, this might not be the exact area to try to outdo the professionals.

Professionals aren't allowed to optimize for their patient's intelligence and productivity, they're only allowed to prescribe medicines to treat the conditions listed in the DSM, which, sadly, does not recognize the lack of superhuman productivity as a disease.

Thank you. They're still relevant for the topics they cover... good background to see how much of the site is covered by sequences.

Is there a generic form of that for any nth derivative?

1JGWeissman
Sum over integers k from 1 to n of A(k)*e^(e^(2*i*pi*k/n)*x) is its own nth derivative, for all A.
1Eugine_Nier
Yes.

Well, that they are the family of solutions, allowing for various transformations.

*Disclaimer: I haven't looked at a differential equation in 6 years.
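The claim under discussion - that a sum of exponentials whose exponents are nth roots of unity is its own nth derivative - can be sanity-checked numerically. The sketch below is mine; the coefficient values are arbitrary choices. Each term A*e^(w*x) differentiates n times to A*w^n*e^(w*x), and w^n = 1 when w is an nth root of unity.

```python
# Numerical check: f(x) = sum_k A_k * e^(w_k x), with w_k the nth
# roots of unity, satisfies f^(n) = f. Coefficients are arbitrary.

import cmath

def nth_derivative_at(coeffs, x, n):
    """n-th derivative of sum_k A_k * e^(w_k x), term by term:
    each term picks up a factor w_k^n."""
    m = len(coeffs)
    total = 0
    for k, a in enumerate(coeffs, start=1):
        w = cmath.exp(2j * cmath.pi * k / m)  # an m-th root of unity
        total += a * w**n * cmath.exp(w * x)
    return total

def f_at(coeffs, x):
    return nth_derivative_at(coeffs, x, 0)  # zeroth derivative = f

coeffs = [1.5, -0.3, 2 + 1j]  # n = 3, arbitrary A_k
x = 0.7

# Third derivative matches f to floating-point precision:
print(abs(nth_derivative_at(coeffs, x, 3) - f_at(coeffs, x)) < 1e-9)
# prints: True
```

The first or second derivative does not match f for generic coefficients, which is consistent with the point below that e^x is singled out only by extra initial conditions.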

Hello, My name is Dave Coleman. I was raised Atheist Jewish, and have identified as a rationalist my whole life. Browsing through the sequences, I realized I had failed to recognize some deeply ingrained biases.

I value making myself and others happy. Which others, and how happy, is something I've always struggled with. I used to have a framework with Jewish ethics, but I'm realizing that those are only clear in comparison to Christian ethics. Much of what I learned and considered was about how to make the Torah and Talmud relevant to modern, atheistic life... (read more)

1Perplexed
Following up to EY's comment: e^x is its own second derivative too. There are two functions that are their own second derivative, and four which are their own fourth derivative. Cool! So what are the other two (out of three) functions that are their own third derivative? What does their graph look like? And does all this have anything to do with Laplace transforms? Does a sufficiently smooth function have a 1.5th derivative? Yes, welcome to LW.
4Eugine_Nier
Causality doesn't have much meaning when applied to mathematics.
6Eliezer Yudkowsky
e^-x is its own second derivative. sin(x) is its own fourth derivative (note relation to e^ix). And welcome to LW! (he said)
0JGWeissman
Of course, you mean they are the only solutions that satisfy certain initial conditions.

You can import/export from a bookmark file. I'm not sure whether that's less tedious.

Somewhat relevant is the Gervais Principle. This Principle is based on the idea that a corporate pyramid is topped by "sociopaths," has "losers" as a foundation, and a culture of ladder-climbing "clueless" between the two:

Sociopaths, in their own best interests, knowingly promote over-performing losers into middle-management, groom under-performing losers into sociopaths, and leave the average bare-minimum-effort losers to fend for themselves.

It's not a very rigorously investigated principle, though it matches well with my professional experiences.

It's not clear to me how you're mapping this problem to the trolley problem.

To me the Trolley problem is largely about how much you're willing to only look at end-states. In the trolley problem you have two scenarios with two options, leaving you with identical end states. Same goes for the House Elf problem, assuming that it is in the wizard's power to create more human-like desires.

The main difference between the cases that I see in the Trolley problems are "to what extent is the person you're killing already in danger?" Being already on a ... (read more)

0AdeleneDawner
Ah-ha. Okay. I hadn't thought of the trolley problem in those terms before. It's not very relevant to how I'm thinking, though; I'm thinking in terms of what actions are acceptable from a given starting point, not what end states are acceptable.

As to house elves: I don't consider humanlike values to be intrinsically better than other values in the relevant sense - I disagree with Clippy about the ideal state of the world, and am likely to come into conflict with em in relevant cases, but if the world were arranged in such a way that beings with clippylike values could exist without being in conflict with beings with other values, I would have no objection to said beings existing, and that's basically the case with house elves. (And I don't think it's intrinsically wrong for Clippy to exist, just problematic enough that there are reasonable objections.)

I would consider causing house elves to have humanlike values equally problematic as causing humans to have house-elf-like values, regardless of whether the house elves were human to begin with, assuming that house elves are satisfied with their values and do not actively want to have humanlike values. Two wrongs don't make a right.

My lower brain agrees with you. My upper brain asks if this is just a trolley problem that puts a high moral value on non-intervention.

Scenario A:
  • Option 1: Create house elves out of nothingness, wire them to enjoy doing chores.
  • Option 2: Create house elves out of nothingness, wire them to enjoy human desires.

Scenario B:
  • Option 1: Take existing house elves with human desires, wire them to enjoy doing chores.
  • Option 2: Leave existing house elves with human desires alone.

Is there a non-trolley explanation for why it is immoral to rewire a normal elf, but not... (read more)

0AdeleneDawner
The problem with rewiring someone against their will has to do with the second issue I mentioned, not the first one - changing their preferences and their utility function. If you're creating something from scratch, I don't see how that can be an issue without arbitrarily privileging some set of values as 'correct' - if you're creating something from scratch, there are no pre-existing values for the new values to be in conflict with.

(The first issue doesn't seem to raise the same problems: I think I would consider it okay, or at least 'questionable' rather than 'clearly bad', to re-wire someone to enjoy doing things that they would be doing anyway to achieve their own goals, if you were sufficiently sure that you actually understood their goals; however, I don't think that humans can be sufficiently sure of other humans' goals for that.)

"It's not clear to me how you're mapping this problem to the trolley problem." This is probably because I have some personal stuff going on and am not in very good cognitive shape, but regardless of the cause, if you want to talk about it in those terms I'd appreciate a clearer explanation.

Instead of creating them from scratch, would it be immoral to take a species that hated chores and wirehead them to enjoy chores?

0[anonymous]
I think it is. I mentioned this possibility here: However I think the tipping point starts way before "not a single line of derived code":

The house elves seem to be a bit of a shout out to the Ameglian Major Cow. In that case a mind was wire-headed to enjoy something that was pretty clearly bad for it. Arthur had a problem with this, but they argued that if you were going to eat a Cow, it was more moral to wire it to enjoy being eaten.

If you accept that doing chores is just on a continuum with being tortured or eaten, which EY might, then the question is the same as whether it's Evil to wirehead someone into enjoying being tortured or eaten.

Edit: For clarity, I don't think I agree with the ... (read more)

5AdeleneDawner
I'm not sure if doing chores in and of itself can be viewed as on a continuum with being tortured, for the purposes of this exercise. Being forced to do chores is considered bad for two reasons (as far as I know): Most people find doing chores to be intrinsically not enjoyable, and most people have other goals that they'd prefer to spend their time pursuing. Being tortured matches at least the first part of that description, and usually matches the second part as well. But for house elves, doing chores is not intrinsically not enjoyable, and it appears that they generally don't have other significant goals to pursue - and this is their native state; if you create a house elf from nothing, rather than modifying another creature to be house-elf-like, there's no 'rewiring' involved at all. (And the OP made a point of specifying that that's the case, since it is obviously problematic to rewire a creature in a way that's opposed to its values.) It may be useful to also consider the case of masochistic people, for whom things like being whipped are enjoyable: Given that some people seem to just naturally be that way - it's not caused by trauma or anything, in most cases, unless I've really missed something in my research - is it somehow problematic that they exist?
1[anonymous]
But if the creatures enjoy their situation and manage to self-replicate or are immortal, isn't the use of their labour more like a form of parasitism on the species? One could argue parasitism is wrong, but the act of creating them vulnerable to parasitism seems neutral as long as they are capable of survival despite it.

Morality is in some ways a harder problem than friendly AI. On the plus side, humans that don't control nuclear weapons aren't that powerful. On the minus side, morality has to run at the level of 7 billion single instances of a person who may have bad information.

So it needs to have heuristics that are robust against incomplete information. There's definitely an evolutionary just-so story about the penalty of publicly committing to a risky action. But even without the evolutionary social risk, there is a moral risk to permitting an interventionist murd... (read more)

Thank you! That information is very helpful.

This seems like a good audience to solve a tip-of-my-brain problem. I read something in the last year about subconscious mirroring of gestures during conversation. The discussion was about a researcher filming a family (mother, father, child) having a conversation, and analyzing a 3 second clip in slow motion for several months. The researcher noted an almost instantaneous mirroring of the speaker's micro-gestures in the listeners.

I think that I've tracked the original researcher down to Jay Haley, though unfortunately the articles are behind a pay wall: ... (read more)

2Unnamed
I'm not sure where you saw that reference, but I do know that there has been a fair amount of research on nonverbal mirroring, which is typically called "mimicry" (that's the keyword I'd search for). Tanya Chartrand is one of the main researchers doing this work; here is her CV (pdf) which lists her published work (most of the relevant ones have "mimicry" in the title). Her 1999 paper with John Bargh (pdf) is probably the most prominent paper on the topic, and here (pdf) is a more recent paper which I picked because it's available for free & it starts with a decent summary of existing research on the topic. The kind of thing that you're interested in - the development of thought/emotion/behavior patterns that span 2+ people in a relationship - is also an active topic of research. I don't know as much about this area, but I do know that Mischel & Shoda have created one model of it. Here is the abstract of one paper of theirs which seems particularly relevant but doesn't seem to be free online, and here (pdf) is another paper which is available for free.

I don't think that the sunk cost consideration is a fallacy in this case.

As far as life can be said to "begin" anywhen, it begins at conception.

You think women's rights trump kids' rights or the other way round, okay.

http://lesswrong.com/lw/od/37_ways_that_words_can_be_wrong/ http://wiki.lesswrong.com/wiki/Politics_is_the_Mind-Killer

You're arguing definitions, claiming that your definition of "life" is universal, and using an ambiguous definition of "kid" to pull emotional strings. I think we all agree on the anticipated outcomes of a pregnancy. Given how emotional "life" and "kid... (read more)

Someone linked to the paperclip maximizer wiki page in a post on reddit.

The Exodus meme is the story of how Israelites became Jews after escaping Egypt. The tribe is persecuted by the world as a whole, and can take retribution against any group in retaliation. After the Israelites get the ten commandments, they proceed to invade, rape, murder, and enslave an innocent population that had nothing to do with what the Egyptians had done, all with God's blessing and participation.

There is also the bit about Israel being a promised land. Even knowing that there was no God my whole life, it's difficult for me to think about the poli... (read more)

Honestly, whenever I read through Omega-related posts, I feel like we might be trying to re-invent Calvinist predestination theology. These sorts of paradoxes of free will have been hashed out in monasteries and seminaries for almost two millennia, by intelligent people who were as rational as their understanding of the universe allowed. I wonder if even a theology student has something to contribute here.

3Louie
Hey, you're relatively new. How did you end up here?

And then I thought, "No, wait, God didn't make these maple trees. They grew out of the ground, from maple seeds, which came from other maple trees, which evolved from other kinds of trees over millions of years. These leaves are still yellow. They are still beautiful. I'm going to drop the God part and just focus on the color of the leaves, which I know is real." In other words: change your focus to what you know is real.

I was raised as an atheist Jew, but didn't attempt to apply consistency to my beliefs until recently. Thankfulness in the fo... (read more)

0NancyLebovitz
Toxic Exodus meme?

Humans are really bad at acting according to an ideal, random strategy. See:

Wine in front of me - A term from the game Mafia, describing infinitely layered thinking, based on recursive predictions of an opponent's predictions.

Yomi (mind reading) - David Sirlin discusses the ability to predict non-ideal trends in an opponent's behavior, and how infinite layers of predictions actually repeat and only require a limited and finite set to consider.

Having just stumbled across LW yesterday, I've been gorging myself on rationality and discovering that I have a lot of cruft in my thought process, but I have to disagree with you on this.

“Meaning” and “mysterious” don’t apply to reality, they only apply to maps of the terrain that is reality. Self-awareness itself is what allows a pattern/agent/model to preserve itself in the face of entropy and competitors, making it “meaningful” to an observer of the agent/model that is trying to understand how it will operate. Being self-aware of the self-awareness (i.e. mapp... (read more)