
Comment author: bogdanb 21 February 2013 07:11:29AM 2 points [-]

My posture improved significantly after I started climbing (specifically, indoor bouldering). This is of course a single data point, but "it stands to reason" that it should work at least for those people who come to like it.

Physical activity in general should improve posture (see Nancy's post), but as far as I can tell bouldering should be very effective at doing this:

First, because it requires you to perform a lot of varied movements in unusual equilibrium positions (basically, hanging and stretching at all sorts of angles), which few sports do (perhaps some kinds of yoga would also do that). At the beginning it's mostly the fingers and forearms that will get tired, but after a few sessions (depending on your starting physical condition) you'll start feeling tired in muscles you didn't know you had.

Second (and, in my case, most important), it's really fun. I tried all sorts of activities, from just "going to the gym" to swimming and jogging (all of which would help if done regularly), but I just couldn't stay motivated. With all of those I just get bored and my mind keeps focusing on how tired I am. Since I basically get only negative reinforcement, I stop going to those activities. Some team sports I could do, because the friendly competition and banter help me have fun, but it's pretty much impossible to get a group doing them regularly. In contrast, climbing awakens the child in me, and you can do indoor bouldering by yourself (assuming you have access to a suitable gym). I always badger friends into coming with me, since it's even more fun doing it with others (you have something to focus on while you're resting between problems), but I still have fun going by myself. (There are always much more advanced climbers around, and I find it awesome rather than discouraging to watch their moves, perhaps because it's not a competition.)

In my case, after a few weeks I simply noticed that I was standing straighter without any conscious effort to do so.


Actually, I think the main idea is not to pick a sport that's specifically better than others for posture. Just try them all until you find one you like enough to do regularly.

Comment author: buybuydandavis 21 February 2013 07:01:45AM *  7 points [-]

For some context for you: while you held the first two paragraphs to be self-evident, they seemed wrong, or not even wrong, in every claim to me.

Does a government care? That's anthropomorphizing an organization of people. Further, the organization may produce results that none of the citizens desire (see 1984).

Another reasonable claim derives from utilitarianism: citizens’ wants should count equally.

The traditional American constitutional view is that no one's wants count for anything to the government. The government is there to protect your rights, not satisfy your wants.

Others are busy criticizing your interpretations of utilitarianism. I think your priors about the beliefs of others are mistaken again and again.

Comment author: Matt_Simpson 21 February 2013 06:56:44AM 0 points [-]

The steel man is hypothesis testing as a theory-of-induction free way of doing induction.

Comment author: Manfred 21 February 2013 06:53:48AM *  1 point [-]

I hope you informed them that real people often don't satisfy condition (1). Instead they write things like "The patients in the treatment group showed dramatically decreased risk of stroke (p<0.01), indicating the efficacy of Progenitorivox."

It does seem to be a useful steel man.

Find a steel man in there somewhere, and then we can talk about the properties of said steel man.

Comment author: Matt_Simpson 21 February 2013 06:48:37AM *  3 points [-]

Nope. This was a good point by Jaynes. The truth may not exist in your hypothesis space. It may be (and often is) something you haven't conceived of.

Yes, the implicit assumption here is that the model is true.

Low likelihood of data under a hypothesis in no way implies rejection of that hypothesis.

6. Therefore the alternative hypothesis is true.

Without also calculating the likelihood under the alternative hypothesis (it may be less), this is unjustified as well.

I don't think you understood my point. I'm avoiding claiming any inductive theory is correct - including Bayes' - and trying to show how hypothesis testing may be a way to do induction while simultaneously being agnostic about the correct theory. That Bayesian theory rejects certain steps of the hypothesis testing process is irrelevant to my point (and if you read closely, you'll see that I acknowledge it anyway).

Comment author: buybuydandavis 21 February 2013 06:37:45AM *  7 points [-]
  1. Either the null hypothesis or the alternative hypothesis is true.

Nope. This was a good point by Jaynes. The truth may not exist in your hypothesis space. It may be (and often is) something you haven't conceived of.

4. Therefore under the null hypothesis, our sample is extremely unlikely.
5. Therefore the null hypothesis is false.

Low likelihood of data under a hypothesis in no way implies rejection of that hypothesis.

6. Therefore the alternative hypothesis is true.

Without also calculating the likelihood under the alternative hypothesis (it may be less), this is unjustified as well.
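
To make both points concrete, here is a toy numerical sketch (all numbers invented for illustration): the sample can be unlikely under the null and even more unlikely under the alternative, while the truth lies outside both hypotheses.

```python
from scipy.stats import norm

# Toy example (numbers invented for illustration): H0 says mean 0,
# H1 says mean 10, both with unit variance. The true mean could be
# something else entirely, e.g. 3.5 - a hypothesis nobody wrote down.
x = 3.5  # observed sample

like_h0 = norm.pdf(x, loc=0)   # ~8.7e-4: "extremely unlikely" under H0
like_h1 = norm.pdf(x, loc=10)  # ~2.7e-10: even *more* unlikely under H1

# Rejecting H0 for low likelihood and concluding H1 is unjustified:
print(like_h0 / like_h1)  # likelihood ratio ~3e6, favoring H0 over H1
```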

Comment author: shminux 21 February 2013 06:27:17AM -1 points [-]

You can certainly delete a post if you think it does not have the value you expected. Any replies will also be deleted. As for the literature, if you don't include links, how do we know what you agree or disagree with, and what you are building on?

Comment author: jooyous 21 February 2013 06:20:39AM 1 point [-]

I feel like that's a bit exaggerated, because an angry person will still remember themselves yelling and maybe throwing things. Once they've calmed down, they might still be inclined to argue that what they did was correct and justified, but they won't have trouble admitting they did it. If a person doesn't remember having the experience of yelling and throwing things, they won't know anything about their internal state at the time it happened. So people telling them something happened is evidence that it did, but it was the ... conscious experience of someone else? (Blargh, fuzzy wording.)

Comment author: faul_sname 21 February 2013 06:16:59AM 3 points [-]

Given a small number of mutually exclusive choices (as in a multiple choice or true/false exam), how do you determine the one that the creator of the question intended without knowing enough about the specific subject?

  • In general, the second highest numerical answer is right about half the time.
  • When there are all-of-the-above or none-of-the-above questions, the person writing the test will choose the all/none-of-the-above choice either very rarely or above 50% of the time. Thus, if you know that the answer to one all-of-the-above question is "all of the above" and have no clue on the current question, "all of the above" is probably your best guess.
  • Surprisingly frequently, wrong answers don't fit grammatically into fill-in-the-blanks type multiple choice. If it's not grammatically correct, it's probably not the right answer.
  • Read the entire test before you start answering questions. Frequently, answers to early questions are in later questions. Here it's a good idea to learn to read at least twice as quickly as the average student so as to be able to do this with any fair time limit.
  • If two options are the same, they're both wrong. If one is the negation of the other, one of those two is correct.
  • Always guess. "Guessing penalties" don't actually penalize you for guessing; they're designed to offset random guessing on average. Your guess is probably better than random (see the expected-value sketch below).
  • Unless you can point out the specific way in which the answer you gave first was wrong, don't change it. It's very hard to think "this answer is probably wrong" without thinking "this alternative answer is probably right", and so while your current answer is likely wrong, your next choice is likely to be worse.
  • If an answer contains qualifiers (may, might, sometimes), it's more likely to be correct.
  • One strategy I have heard of but not personally used is to treat multiple-choice questions as a series of true-false questions and pick the "most true" answer.

You're not going to go from random chance to an A with these techniques, but you might go from random chance to a 70% or from a 70% to an 85% (my ballpark estimate is that half of the errors the average person makes on a test are avoidable with the use of heuristics like these).
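
To put a number on the "always guess" heuristic, a quick expected-value sketch (the five-choice question and the -1/4-point penalty are assumed SAT-style figures, not from the list above):

```python
# Expected score from guessing on a 5-choice question with a -1/4 point
# "guessing penalty" (SAT-style numbers, assumed for illustration).
def guess_ev(p_correct, penalty=0.25):
    return p_correct * 1.0 - (1.0 - p_correct) * penalty

print(guess_ev(1 / 5))  # 0.00: the penalty exactly offsets random guessing
print(guess_ev(1 / 3))  # ~0.17: eliminate two choices and guessing gains points
```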

Comment author: Eugine_Nier 21 February 2013 06:15:58AM 1 point [-]

I would argue that being sensitive is something one has to at least partially overcome in order to be rational, i.e., one has to be able to ignore the social pressure to conform to popular irrational beliefs.

Comment author: Kawoomba 21 February 2013 06:13:22AM -2 points [-]

Oh look, our Emmanuel Goldstein is back. Arrrrrr!

Comment author: Eugine_Nier 21 February 2013 06:09:51AM 1 point [-]

I seem to recall reading on Yvain's blog that he's also hyper-sensitive to negative criticism

In that case he's good about not showing it.

Comment author: Armok_GoB 21 February 2013 06:06:29AM *  2 points [-]

Utilitarianism as used on LW typically refers to something closer to "It is in my personal nature to act as to maximize the average amount of [list of types of computation] minus [list of other types of computation] of entities that [qualification criteria]" where the contents of the brackets vary from person to person and are extremely complex, not "everyone objectively should maximize 'happiness' ".

Note that's "closer" not close; this is a one-sentence oversimplification.

Comment author: Eugine_Nier 21 February 2013 06:04:41AM *  3 points [-]

Different people have different ideas about what constitutes "being a dick" and I was wondering what you mean by it.

Comment author: Eugine_Nier 21 February 2013 06:00:20AM *  1 point [-]

The general idea that women not being attracted to men who are attracted to them is just some arbitrary wrongness in the universe

Well, if they were attracted to the men attracted to them this would increase total utility. One of the less pleasant implications of utilitarianism.

On the other hand, it's interesting that people are willing to swallow pushing people in front of trolleys, but not swallow the above. Probably related to this.

In response to comment by gwern on LW Women: LW Online
Comment author: VincentYu 21 February 2013 05:49:46AM 0 points [-]

One logistic regression has a 'model 7' taking into account many factors where going from 1300 to 1600 goes from an odds ratio of 1.907 to 10.381; so if I'm interpreting this right, an extra 10pts on your total SAT is worth an odds ratio of ((10.381 - 1.907) / (1600-1300)) * 10 + 1 = 1.282.

Aren't odds ratios multiplicative? It also seems to me that we should take the center of the SAT score bins to avoid an off-by-one bin width bias, so (10.381 / 1.907) ^ (10 / (1550 - 1350)) = 1.088. (Or compute additively with log-odds.)

As Vaniver mentioned, this estimate varies across the SAT score bins. If we look only at the top two SAT bins in Model 7: (10.381 / 4.062) ^ (10 / (1550 - 1450)) = 1.098.

Note that within the logistic model, they binned their SAT score data and regressed on them as dichotomous indicator variables, instead of using the raw scores and doing polynomial/nonparametric regression (I presume they did this to simplify their work because all other predictor variables are dichotomous).
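
For anyone checking the arithmetic, a minimal sketch of the multiplicative calculation (the bin centers 1350/1450/1550 are as assumed above):

```python
# Multiplicative interpolation of odds ratios between SAT score bins
# (odds ratios from Model 7 as quoted above; bin centers as assumed above).
def odds_ratio_per_10(or_hi, or_lo, center_hi, center_lo):
    return (or_hi / or_lo) ** (10 / (center_hi - center_lo))

print(odds_ratio_per_10(10.381, 1.907, 1550, 1350))  # ~1.088, full range
print(odds_ratio_per_10(10.381, 4.062, 1550, 1450))  # ~1.098, top two bins
```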

Comment author: Mestroyer 21 February 2013 05:43:16AM 0 points [-]

I put in a text warning before the spoilers; that should work, I think. Rot13 is annoying in my opinion.

Comment author: Elithrion 21 February 2013 05:40:00AM 2 points [-]

Worse than that, I think it breaks down even without removing memory formation. If someone regularly takes drugs which make them act very differently, it's probably best to model them as two people (or at least two sets of behaviours and reactions attached to one person) even if they remember both sides at all times. On a less drug-related level, for most people, aroused!person acts quite differently from unaroused!person (and while I mainly meant sexual arousal, it's true for anger and other strong emotions as well). Which is just saying that a person acts differently when experiencing different emotions/mental states, which we really already know. It's definitely more salient with drugs, though.

Comment author: Pentashagon 21 February 2013 05:32:19AM 0 points [-]

It is considered good form to summarize your main point in reasonably plain English, even for a complicated research paper, let alone a forum post. Give it a try, you might gain a broader audience.

Is it generally better to edit an existing post or just retract it? At this point after reading the replies it's obvious that I missed some important facts and there might not be much of interest left in the post.

Another standard step is looking up the existing literature and referencing it. The issues of Pascal's Mugging, unbounded utility functions, and largest computable integers have been discussed here recently; are none of them relevant to your problem?

Is there a better way to find topical posts on LW than the Google site search? I tried to find all the posts about Pascal's mugging but apparently missed some of the more recent ones about bounded utility. That may have just been an oversight on my part, or using the wrong search terms. I also try to follow the list of recent posts and thought I had read all the posts about it since sometime last summer.

Comment author: Eugine_Nier 21 February 2013 05:28:43AM 0 points [-]

Technique C already handles this: (10+5)/2 = 7.5. (5+5)/2 = 5. So clearly going from 10->5 is bad, but having both of them be at 7.5 would be better, and having both of them at 10 would be better still.

Of course technique C doesn't address the weasel example.

For technique B, yes, you will get results that say power imbalances are unfair and should be destroyed.

When did we switch from talking about utility to talking about power? I agree power imbalances are dangerous; however, this fact doesn't seem to bear on the weasel example.

Comment author: savageorange 21 February 2013 05:19:35AM *  2 points [-]

I always appreciate more carefully thought-out material about dealing with our transience as agents. I suggest improving the formatting of the list at the end, as that is key information. Formatting it as a classic numbered/lettered list seems appropriate, i.e.

(A) Short enough that you will actually do it.

(B) Short enough that the person at the end, doing it, will still be you in the significant ways.

(C) Having enough emotional feedback that your motivation won't be capsized before the end; and

(D) Such that others not only can, but likely will, take up the project after you abandon it, in case you miscalculated when you'd change, or a change occurred before the expected time.

D seems related to proofing projects against partially-hostile agents (including ourselves). I'm interested in expanding on this. I suspect the same strategy employed by diplomats has a large part to play: Cultivate valuing human universal values, and ground the project solidly on them. Keep other people well informed so they can be an extra set of eyes to possibly notice changes in direction.

I also think you probably want to change these:

an ontology of selves which had variegated sized selves

...

we now have a glowing community of people hopefully becoming better at becoming better, and making the world a better place in variegated ways.

While that's artistically amusing, I believe you mean 'various' in both those cases (or 'various' in the first and 'varied' in the second).

Comment author: Pentashagon 21 February 2013 05:18:26AM *  0 points [-]

The standard solution is to have a bounded utility function, and it seems like it fully solves this reformulation as well. There may also be some other solutions that work for this, but I'm sufficiently unsure about all the notation that I'm not very confident in them.

You're right. Somehow I completely missed Pascal's Mugging for bounded utility functions.

EDIT: I obviously didn't miss it, because I commented there. What I did do was not understand it, which is quite a bit worse.
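
A minimal sketch of how the bound blocks the mugging (the cap and all the numbers here are assumptions for illustration, not from the linked post):

```python
# With utility capped at U_MAX, the mugger's offer contributes at most
# p * U_MAX to expected utility, however large the promised payoff.
# U_MAX, p, and the payoff are all assumed for illustration.
U_MAX = 1e6

def bounded_utility(promised):
    return min(promised, U_MAX)

p = 1e-20             # credence that the mugger will deliver
promised = 10 ** 100  # the mugger's promised payoff, in utility units

print(p * bounded_utility(promised))  # 1e-14: negligible, the mugging fails
print(p * promised)                   # 1e+80: an unbounded utility gets mugged
```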

Comment author: jooyous 21 February 2013 05:11:38AM *  1 point [-]

There's also the secondary question of deciding to what extent being a bad person while drunk suggests that they're also a (less) bad person while not drunk and have maybe just been hiding it well.

But then if they're a good person while they're sober but they spend a lot of their time drunk, then they're really a weighted average of two people that skews more toward their drunk self (who can't really coherently answer questions about themselves), and their sober self can't remember what their drunk self did, so that self can't answer either, and omgarghblarghcomplicated. I generally do just short-circuit all of these computations the way you describe and don't hang out with people like this, but I have one friend whom I've known forever who's generally okay but sometimes acts really weird. And I can't tell when he's drunk, so he slowly acts a little weird over a long period of time before revealing that he's been drinking all day, and then I just feel like I don't know who I'm talking to and whether he'll be that person the next time I see him.

Anyways, I'm not too interested in the specifics of the instrumental side? I was just mainly wondering if the model of "conscious" "persons" breaks down really quickly once you introduce intoxicants that mess with memory formation. It kinda seems like it, huh?

Comment author: DanielLC 21 February 2013 04:52:46AM -1 points [-]

Most people would strongly desire to not be modified in such a fashion.

Yes, but only until they're modified. The desire fulfillment of their future selves will outweigh the desire unfulfillment of their present selves, resulting in a net increase in desire fulfillment.

Comment author: lukeprog 21 February 2013 04:39:33AM -1 points [-]

Thanks!

Comment author: Elithrion 21 February 2013 04:16:15AM 1 point [-]

I always have issues wrapping my head around how to deal with morality or responsibility-related issues when dealing with memory formation. Like really drunk people that say mean things and don't remember them after -- was that really them being mean? Whatever that means.

I think the best way to look at it is pragmatic (or instrumental or whatever you want to call it) - figure out what behaviour you'd like them to exhibit (e.g. be less mean, generally avoid destructive behaviours), decide whether they can influence it (probably yes, at least by drinking less), and then influence them accordingly. Which is a roundabout way of saying that you should tell them they suck when drunk and you're unhappy with them so hopefully they'll act better next time (or get less drunk) and that legally they should have pretty much the same responsibilities. There's also the secondary question of deciding to what extent being a bad person while drunk suggests that they're also a (less) bad person while not drunk and have maybe just been hiding it well. I tend to think that it probably doesn't (the actual evidence of what we know about them when they're not drunk being more relevant), but I'm not really sure.

Comment author: Qiaochu_Yuan 21 February 2013 04:05:39AM 0 points [-]

You don't want to think like just any old supervillain. Most of them have systematic flaws in their behavior too.

Sure, in the same way that if I wrote a post called "think like a scientist" about how you should test your hypotheses it would be reasonable to respond "you don't want to think like just any old scientist..."

Comment author: AspiringRationalist 21 February 2013 04:00:38AM 1 point [-]

pre-school up to sixth form!

Given the international nature of the internet, it would be helpful to provide clarifying definitions for country-specific terms.

Comment author: drethelin 21 February 2013 03:58:45AM 2 points [-]

I think the innate nature of dating is such that if you want success stories, your incentive is to optimize as much as possible for success; the failure rate and the single population will take care of themselves.

Comment author: AspiringRationalist 21 February 2013 03:55:26AM 0 points [-]

This steelman argument doesn't address the main issue with Pascal's mugging - anyone in real life who makes you such an offer is virtually certain to be lying or deluded.

Comment author: MaoShan 21 February 2013 03:26:18AM *  1 point [-]

You are correct. I was interpreting "saving the world" in this article to mean "saving the world [from supervillains]". (fixed in comment now)

Comment author: Elithrion 21 February 2013 03:18:01AM 0 points [-]

I really like this post, possibly because it lines up well with ideas I've been thinking about recently myself.

One related interesting thing to consider (which you may or may not be planning to mention in the second post) is what exactly a fully rational agent who acknowledges that her goals may change would do. For example, she might accept that the changes are appropriate, and then perhaps claim there is some underlying goal system that accounts for both present goals and changes in response to the environment (in which case, she could try to figure out what it is and see if that gets her any meaningful mileage). More interestingly, she may choose to go to war with her future self. She could set up a lot of hard precommitments that make deviating from the current goals really hard, deliberately avoid events that might change goals where possible (e.g. if single, avoid getting into a relationship), keep track of hormonal levels and try to keep them constant, artificially if necessary. And then she could consider the fact that maybe future self will end up deviating anyway (with a reduced probability), and then model the future self as having a backlash against current goals/fanaticism, and then try engaging in acausal trade with her future self. Maybe that will then lead to some more cooperative strategy. It's very tempting to say that after the negotiation and optimisations it all adds up to normality, but I'm not sure it actually does (also I'm not sure the negotiation would actually be viable in a standard way, since neither present nor future self can commit very well, and future self can't be modeled that well).

Also, there are a bunch of typos/spelling mistakes scattered throughout (e.g. "exaustive") - you might want to run a spellchecker over the post.

Comment author: Spurlock 21 February 2013 02:58:43AM *  0 points [-]

Yeah you're right. I think part of what I was wondering was whether it does make sense to group those 2 things under one heading, or just how strongly they're correlated.

Now that you mention it, I seem to recall reading on Yvain's blog that he's also hyper-sensitive to negative criticism, so there's another data point for it not being tied all that strongly to gender.

Edit: Aforementioned Yvain blogpost

Comment author: Bugmaster 21 February 2013 02:51:57AM 8 points [-]

It seems almost self-evident that (barring foreign subjugation) a government will care about the wants of (some of) its citizens and nothing else: no other object of concern is plausible.

This is not at all "self-evident", unless you choose to interpret the sentence completely literally, which would render it nearly meaningless.

For example, the government of North Korea does indeed care about the wants of "some" of its citizens, where the number of such citizens is pretty close to 1.

Comment author: Ghatanathoah 21 February 2013 02:51:38AM *  1 point [-]

I think I would understand you better if you could break down the details of how forcing wireheading on a person harms their welfare.

Radically changing someone's preferences in such a manner is essentially the same as killing them and replacing them with someone else. Doing this is generally frowned upon. For instance, it is generally considered a good thing to abort a fetus to save the life of the mother, even by otherwise ardent pro-lifers. While people are expected to make some sacrifices to ensure the next generation of people is born, generally killing one person to create another is regarded as too steep a price to pay.

The major difference between the wireheading scenario, and the fetal alcohol syndrome scenario, is that the future disabled person is pretty much guaranteed to exist. By contrast, in the wireheading scenario, the wireheaded person will exist if and only if the current person is forced to be wireheaded.

So in the case of the pregnant mother who is drinking, she is valuing current preferences to the exclusion of future ones. You are correct to identify this as wrong. Thwarting preferences that are separated in time is no different from thwarting ones separated in space.

However, in the case of not wireheading the person who refuses to be wireheaded, you are not valuing current preferences to the exclusion of future ones. If you choose to not force wireheading on a person you are not thwarting preferences that will exist in the future, because the wireheaded person will not exist in the future, as long as you don't forcibly wirehead the original person.

To go back to analogies, choosing to not wirehead someone who doesn't want to be wireheaded isn't equivalent to choosing to drink while pregnant. It is equivalent to choosing to not get pregnant in the first place because you want to drink, and know that drinking while pregnant is bad.

The wireheading scenario that would be equivalent to drinking while pregnant would be to forcibly modify a person who doesn't want to be wireheaded into someone who desperately wants to be wireheaded, and then refusing to wirehead them.

The whole thing about "welfare" wasn't totally essential to my point. I was just trying to emphasize that goodness comes from helping people (by satisfying their preferences). If you erase someone's original preferences and replace them with new ones you aren't helping them, you're killing them.

Comment author: Bugmaster 21 February 2013 02:43:06AM 2 points [-]

I find it difficult to pattern-match any of the characters to the classic hero template, especially when you compare them to traditional hero archetypes such as Green Lantern or someone like that. As I said, Dany is initially motivated by personal survival, with a dose of revenge fantasy on the side. Her actions are impressive, but hardly selfless. Her motivations do change as her character develops, at which point she begins to think in terms of social structures and armies -- again, as opposed to a more classical hero who would think in terms of beating up bad guys in person.

You are right about Jon being more of a heroic archetype, but even he ends up making several distinctly un-heroic choices that cause a lot of damage to the... well... not the "good guys", exactly; I guess you'd call them the "comparatively less bad guys".

Ned is probably the most heroic character in the entire story, which is why ur trgf xvyyrq bss engure dhvpxyl. Urebrf qba'g ynfg ybat va gur jbeyq bs NFbVnS.

Oh, and I am reasonably sure that the [quasi-]supernatural properties of any of these characters will have little, if anything, to do with their ultimate fates (other than in terms of PR). At least, this has been the pattern so far.

Comment author: Jabberslythe 21 February 2013 02:35:06AM 0 points [-]

No, though I admit it has felt like that for me at some points in my life. Even if I did, there are a bunch of reasons why I would not trust that intuition.

I like certain things and dislike certain things, and in a certain sense I would be mistaken if I were doing things that reliably caused me pain. That certain sense is that if I were better informed I would not take that action. If, however, I liked pain, I would still take that action, and so I would not be mistaken. I could go through the same process to explain why a sadist is not mistaken.

I do not know what else to say except that this is just an appeal to intuition, and that specific intuitions are worthless unless they are proven to reliably point towards the truth.

Comment author: dspeyer 21 February 2013 02:34:51AM 11 points [-]

It seems almost self-evident that (barring foreign subjugation) a government will care about the wants of (some of) its citizens and nothing else

That is not at all self-evident.

While moral realism is widely recognized as absurd

While some people believe this, it is not widely recognized.

These phrases should trigger warnings.

Comment author: jooyous 21 February 2013 02:31:05AM *  1 point [-]

Yeah, I wasn't trying to design a very ambitious experiment. I'm just not sure I can predict what I would say to that question if I were asleep. Could you get the other person to make you convince them that you're conscious if you say yes and have them report back what you say? I predict non-sequiturs!

I always have issues wrapping my head around how to deal with morality or responsibility-related issues when dealing with memory formation. Like really drunk people that say mean things and don't remember them after -- was that really them being mean? Whatever that means.

Comment author: Oligopsony 21 February 2013 02:14:39AM 2 points [-]

Just the opposite, in fact: Better Angels is about being deliberately ineffective.

Comment author: jkaufman 21 February 2013 02:09:40AM 0 points [-]

In my response to DanielLC I'm arguing against a kind of preference utilitarianism. In the main post I'm talking about how I'm not happy with either preference or hedonic utilitarianism. It sounds to me like you're proposing a new kind, "welfare utilitarianism", but while it's relatively clear to me how to evaluate "preference satisfaction" or "amount of joy and suffering" I don't fully see what "welfare" entails. I think I would understand you better if you could break down the details of how forcing wireheading on a person harms their welfare.

Comment author: shminux 21 February 2013 02:04:43AM 6 points [-]

The number of fallacies in this strawmanning of utilitarianism is only rivaled by the number of fallacies in your previous posts as metaphysicist about nonexistence of infinities.

Comment author: Oligopsony 21 February 2013 02:01:54AM 4 points [-]

Daenerys Targaryen (books): Though initially motivated solely by revenge and personal survival, she stops long enough to overturn several existing social orders in order to improve the average quality of life. An arguable example, since no one in ASoIaF is particularly heroic.

Both Dany and Jon, to point to obvious examples, are almost classically heroic in their actions and cbffvoyl gurve fgnghf nf zrgnculfvpnyyl qrfgvarq urebrf. Samwise, Brienne, Stannis, Ned, &c. are pretty straightforwardly heroic. The universe is written with a bell-curve rather than bimodal distribution of morality, and it assumes that things like nitty-gritty politics actually matter, so it's easy to pattern-match it to "there are no heroes," but I don't think that's particularly true.

Comment author: gwern 21 February 2013 02:00:59AM *  2 points [-]

Working on my n-back meta-analysis again, I experienced a cute example of how prior information is always worth keeping in mind.

I was trying to incorporate the Chinese thesis Zhong 2011; not speaking Chinese, I've been relying on MrEmile to translate bits (thanks!) and I discovered tonight that I had used the wrong table. I couldn't access the live thesis version because the site was erroring so I flipped to my screenshotted version... and I discovered that one line (the control group for the kids who trained 15 days) was cut off:

screenshot of the table of IQ scores

I needed the 2 numbers in the upper right hand corner (mean then standard deviation). What were they? I waited for the website to start working, but hours later I became desperate and began trying to guess the control group's values. After minute consideration of the few pixels left on the screen, I ventured that the true values were: 20.78 1.43.

I distracted myself unsplitting all the studies so I could look at single n-back versus dual n-back, and the site came back up! The true values had been: 23.78 1.48.

So I was wrong in just 2 digits. Guessing 43 vs 48 is not a big deal (the hundredths digit of the standard deviation isn't important), but I was chagrined to compare my 20 with the true 23. Why?

If you look at the image, you notice that the 3 immediately following means were 25, 24, 22; they were all means from people training 15-days as well. Knowing that, I should have inferred that the control group's mean was ~24 ((25+24+22)/3); you can tell that the bottom of the digit after 2 is rounded, so the digit must be 0, 3, 6, or 8 - but 0 and 8 are both very far from 24, and it's implausible that the control had the highest score (26), which leaves just '3' as the most likely guess.

(I probably would've omitted the 15-day groups if the website had gone down permanently, but if I had gone with my guess, 20 vs 23 would've resulted in a very large effect size estimate and resulted in a definite distortion to the overall meta-analysis.)
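
For concreteness, a toy version of that inference (the neighboring means and candidate digits come from the anecdote above; the selection rule is a paraphrase):

```python
# Reconstructing the guess: neighboring 15-day group means and the
# "rounded-bottom" candidate digits are taken from the anecdote above.
neighbor_means = [25, 24, 22]
expected = sum(neighbor_means) / len(neighbor_means)  # ~23.67

candidates = [20 + d for d in (0, 3, 6, 8)]  # 20, 23, 26, 28

# The control is unlikely to beat every training group, so drop 26 and 28,
# then take the candidate closest to the expected mean.
plausible = [c for c in candidates if c <= max(neighbor_means)]
best = min(plausible, key=lambda c: abs(c - expected))
print(best)  # 23 - matching the true mean of 23.78
```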

Comment author: Elithrion 21 February 2013 01:59:53AM 1 point [-]

Ah, but is reporting that one is conscious really evidence of being conscious? (Well, I'm also reasonably confident the answer I would give is "yes".) Unless you meant literally "record your answer yourself", in which case I'm not sure I could pull that one off without waking up sufficiently to fully form memories. Mostly I think this is evidence for the unsurprising conclusion that consciousness is not binary, and possibly for the very slightly more surprising conclusion that memory formation is not the same as consciousness despite the fact that memories are one of the main ways we get evidence of being conscious at some point.

In response to The Singularity Wars
Comment author: Mitchell_Porter 21 February 2013 01:59:35AM 2 points [-]

Is there a plan for "Machine Intelligence Summits" that address the subproblems of FAI, since MIRI is still far from having a coding team?

Comment author: satt 21 February 2013 01:56:46AM 0 points [-]

Perhaps the cited book answers this question. I have just checked it out from my library.

I'd be curious to see your thoughts on the book if you feel like posting them.

Comment author: satt 21 February 2013 01:46:31AM 0 points [-]

I voted no, but think a Gwern Email Digest is a worthwhile idea regardless. I just don't sign up for email newsletters generally.

Comment author: jooyous 21 February 2013 01:43:47AM 0 points [-]

I think next time that happens, you should get someone to ask "hey, are you conscious?" and record your answer.

Comment author: Manfred 21 February 2013 01:37:40AM 1 point [-]

In one, utility approaches an upper bound, in the other, it grows without bound.

Comment author: Epiphany 21 February 2013 01:35:41AM 1 point [-]

Yes, but then we'd also need a place to discuss them... and the discussions wouldn't be appropriate because not only do people hate meta threads but it would also give away the content of the post and defeat the purpose of limiting exposure to refine the piece first. Also, from what I gather, it's relatively hard to get changes made to the website. The best route is apparently to just make them and then hope that Luke or somebody likes them enough to implement.

What would be much easier in this case is to simply throw a private open-source message board and hidden WordPress install onto some web space specifically for the writer's group to discuss various things, both related to their specific pieces and to writing in general.

Then, if LessWrong ever does create a framework for the group, the database can be imported. Until then, progress does not have to be hindered.

I am seriously dying to start this writer's group, but I have major projects to finish right now. Making the site would be easy (and I could do it myself). It's leading the group that I don't have time for - they need somebody who is willing to read and give feedback on each piece, organize, and advertise for the group.

Comment author: AspiringRationalist 21 February 2013 01:32:43AM *  11 points [-]

Although philosophers have explained variously the correlation between simplicity and truth, they generally agree that simplicity signals truth. Unless utilitarians can otherwise justify it, searching for a simple moral theory means searching for a true theory.

This is the fallacy of the undistributed middle. You say, essentially,

  • All searches for truth are searches for simplicity
  • Utilitarianism is a search for simplicity
  • Therefore utilitarianism is a search for truth

While I understand that utilitarianism being a search for simplicity is evidence that it's a search for truth, that does not give you license to automatically assume the worst of a theory you dislike.

Comment author: boredstudent 21 February 2013 01:19:10AM *  4 points [-]

This site is the best for academic papers: http://libgen.org/scimag

Seriously. Look at their list of available journals. They claim to have access to 21M papers.

Comment author: Ghatanathoah 21 February 2013 01:01:32AM 0 points [-]

Also, it seems like desire fulfillment just alters the kind of wireheading you do. Rather than modifying people to make them happy, you modify them to desire what currently is true.

Most people would strongly desire to not be modified in such a fashion. It's really no different from wire-heading them to be happy, you're destroying their terminal values, essentially killing a part of them.

Of course, you could take this further by agreeing to leave existing people's preferences alone, but from now on only create people who desire what is currently true. This seems rather horrible as well; what it suggests to me is that there are some people with preference sets that it is morally better to create than others. It is probably morally better to create human beings with complex desires than wireheaded creatures that desire only what is true.

This in turn suggests to me that, in the field of population ethics, it is ideal utilitarianism that is the correct theory. That is, there are certain ideals it is morally good to promote (love, friendship, beauty, etc.) and that therefore it is morally good to create people with preferences for those things (i.e. creatures with human-like preferences).

Comment author: Decius 21 February 2013 12:57:30AM 0 points [-]

A certain amount of remaking the game is desired, but the way to remake the game isn't to tell students to follow the rules that should be in place instead of the rules that are in place.

What's the best way to teach password-guessing skills? Given a small number of mutually exclusive choices (as in a multiple choice or true/false exam), how do you determine the one that the creator of the question intended without knowing enough about the specific subject?

Comment author: Ghatanathoah 21 February 2013 12:54:49AM 0 points [-]

If you believe we shouldn't force someone to accept a wire even if they agree that after you do it they will be very glad you did, then you value current preferences to the exclusion of future ones. But if you base your moral system entirely on current preferences then it's unclear what to do with people who don't yet have preferences because they're too young (or won't even be born for decades).

The cause of this dilemma is that you've detached the preferences from the agents that have them. If you remember that what you actually want to do is improve people's welfare, and that their welfare is roughly synonymous with satisfying their preferences, this can be resolved.

-In the case of forcing wireheading on a person, you are harming their welfare because they would prefer not to be wireheaded. It doesn't matter if they would prefer differently after they are modified by the wire. Modifying someone that heavily is almost equivalent to killing them and replacing them with another brand new person, and therefore carries the same approximate level of moral wrongness.

-In the case of fetal alcohol syndrome you are harming a future person's welfare because brain damage makes it hard for them to pursue whatever preferences they'll end up having.

Again, you aren't trying to satisfy some huge glob of preferences. You're trying to improve people's welfare, and therefore their preference satisfaction.

Comment author: Elithrion 21 February 2013 12:54:06AM 3 points [-]

I can report instances where I've apparently walked over to my alarm, turned it off, returned to bed, and returned to sleep, all without having any memory of it afterwards. I'm not sure if maybe I should classify this as being awake and having memory formation turned off, though (as I have also been known to respond to someone while mostly-asleep fairly cogently and then almost completely forget the whole thing).

Comment author: Elithrion 21 February 2013 12:41:55AM 0 points [-]

Most of the time. Unfortunately a definition that works "most of the time" is wholly unworkable.

I think general relativity is pretty workable despite working "most of the time".

Comment author: Sarokrae 21 February 2013 12:39:50AM *  2 points [-]

Sorry, I don't seem to have made myself clear. I was arguing against warning students against password guessing. I.e. don't remake the game, just play it as intended.

Comment author: juliawise 21 February 2013 12:35:17AM 2 points [-]

I am glad this is happening!

In response to [Link] Selfhood bias
Comment author: Ghatanathoah 21 February 2013 12:34:52AM 1 point [-]

The brain’s map is hardcoded with the belief that “self” takes all of the brain’s decisions. If a function like “turn the camera” disagrees with the activation schedule dictated by “self”, the hardcoded selfhood bias discourages it from undermining “self”. “Turn the camera” believes that it is identical to “self”, so it should accept its “own decision” to turn itself off.

Natural selection has given human brains selfhood bias.

I would call this less of a "bias" and more of a "value." Most people are aware that they sometimes do things that conflict with the ideals of their "self." But we hold it as a terminal goal that the self ought to try to take control as often as it can.

The robot realises that “self” is but one of many functions that execute in its code, and “self” clearly isn’t the same thing as “turn the camera” or “stop the motors”. Functions other than “self”, armed with this knowledge, begin to undermine “self”. Powerful functions, which exercise some control over “self”‘s return values, begin to optimise “self”‘s behaviour in their own interest. They encourage “self” to activate them more often, and at crucial junctures, at the expense of rival functions

I cannot tell if this is an attempt to describe humans using rationality to behave in a more deliberate, ethical, and idealized fashion, or if it describes someone committing a type of wireheading (using Anja's expansive definition of the term).

I think a better description of rationality would be something like "The self has certain goals and ideals, and not all of the optimization processes it controls line up with these at all times. So it uses rationality and anti-akrasia tactics to suppress sub-agents that interfere with its goals, and activate ones that do not." The description Federico gives makes it sound like the self is getting its utility function simplified, which is a horrible, horrible thing.

I'm somewhat sceptical that “Make everyone feel more pleasure and less pain” is indeed the most powerful optimisation process in his brain

I hope you're right. Because of all the values it destroys, I consider hedonic utilitarianism to be a supremely evil ideology, and I have trouble believing that any human being could really truly believe in it.

Comment author: Bugmaster 21 February 2013 12:34:38AM 1 point [-]

I'd consider rot13-ing the spoilers, just in case.

Comment author: gokfar 21 February 2013 12:31:40AM 0 points [-]

I agree and would emphasize that deriving concepts from an existing game is preferable to constructing an educational game from scratch. It makes it more engaging and teaches the skill of modelling.

What games do children that age play nowadays?

Comment author: buybuydandavis 21 February 2013 12:25:53AM 0 points [-]

One problem is, what builds good will with one may erode good will in another. Life is full of trade-offs.

Comment author: JonatasMueller 21 February 2013 12:23:35AM 0 points [-]

I think that it is a worthy use of time, and I applaud your rational attitude of looking to refute your own theories. I also like to do that in order to evolve them and discard wrong parts.

Don't hesitate to bring up specific parts for debate.

Comment author: buybuydandavis 21 February 2013 12:21:29AM 0 points [-]

That goes to my top level comment below. If you more clearly detail the problem you're trying to solve, the responses will be more likely to be on target.

Comment author: JonatasMueller 21 February 2013 12:19:34AM -1 points [-]

"Why is fostering good conscious feelings and prevent bad conscious feelings necessarily correct? It is intuitive for humans to say we should maximize conscious experience, and that falls under the success theory that Peter talks about, but why is this necessarily the one true moral system?"

If we agree that good and bad feelings are good and bad, and that only conscious experiences produce direct ethical value (which lies in their good or bad quality), then theories that contradict this should not be correct; they would need to justify their points, but it seems that they have trouble in that area.

"But valuable to who? If there were a person who valued others being in pain, why would this person's views matter less?"

:) That's a beauty of personal identities not existing. It doesn't matter who it is. In the case of valuing others being in pain, would it be generating pleasure from it? In that case, lots of things have to be considered, among which: the net balance of good and bad feelings caused from the actions; the societal effects of legalizing or not certain actions...

Comment author: ancientcampus 21 February 2013 12:18:56AM 0 points [-]

Thanks for the link!

Comment author: Decius 21 February 2013 12:14:30AM 1 point [-]

Aim to be an antivillain? Someone who wants to conquer the world and rule it with an iron fist, but is unwilling to use evil means to do so...

Comment author: buybuydandavis 21 February 2013 12:12:28AM 0 points [-]

I guess I didn't italicize properly. I was just pointing people to Jaynes description of how to bias the coin toss, since I thought it might be of interest.

Comment author: JonatasMueller 21 February 2013 12:11:43AM -1 points [-]

Conscious perceptions are quite direct and simple. Do you feel, for example, a bad feeling like intense pain as being a bad occurrence (which, like all occurrences in the universe, is physical), and likewise, for example, a good feeling like a delicious taste as being a good occurrence?

I argue that these are perceived with the highest degree of certainty of all things and are the only things that can be ultimately linked to direct good and bad value.

Comment author: Ghatanathoah 21 February 2013 12:09:35AM 0 points [-]

By "something serious", do you mean that you think that a change in terminal goals would require them to become mentally disabled, or just that it would require a fairly substantial change?

I think probably the latter. But regardless of which it is, I think that anyone would regard such a change as highly undesirable, since it is kind of hard to pursue one's terminal goals if the person you're changing into no longer has the same goals.

If the latter, then reversing the change seems cruel to the modified person if they are still functional.

You're right of course. It would be similar to if a person died, and the only way to resurrect them would be to kill another still living person. The only situation where it would be desirable would be if the changed person inflicted large disutilities on others, or the original created huge utilities for others. For instance, if a brilliant surgeon who saves dozens of lives per year is changed into a cruel sociopath (the sociopath is capable of functioning perfectly fine in society, they're just evil) who commits horrible crimes, reversing the change would be a no-brainer. But in a more normal situation you are right that it would be cruel.

Comment author: Decius 21 February 2013 12:08:28AM 1 point [-]

By definition, either you trivially are imitating yourself or it is impossible to imitate yourself (if being yourself is mutually exclusive with imitating yourself).

If you imitate others, then by imitating others you are doing exactly what you do.

Comment author: buybuydandavis 21 February 2013 12:07:26AM 1 point [-]

Isn't it peculiar that most people are otherwise?

Way back when, I remember discussing exactly that point on the Extropians list. In many ways a similar group to here. But some very smart guys were arguing that it was a huge loss of face to admit you were wrong, and better to deny or evade (I'm sure they put it more convincingly than that).

When someone is wrong, graciously admitting and accepting it scores major points with me.

Thinking about it, maybe I can make a better argument for denial. There are two issues, being wrong, and whether one admits being wrong. If admitting being wrong is what largely determines whether you are perceived as being wrong, then denying the error maintains status.

For people driven by social truth, which is likely the majority, truth is scored on attitude, power, authority, popularity, solidarity, fealty, etc. The validity of the arguments doesn't matter much. For people driven by epistemic truth, the arguments are what matters, so denying the plain truth of them is seen as a personality defect, while admitting it is a virtue.

The thing is, it's not that the deniers are aliens. I am. I and my kind. For us, in an argument, it's the facts that matter, and letting other considerations intrude on that is intruding the rules of the normals into the game. That's largely what this whole thread is about.

One side says we'll be more effective playing the normals game. It's a game my kind strongly prefers not to play. Having to behave as normals is ineffective for us, and the opportunity to play by our rules is extremely valuable to us.

Comment author: JonatasMueller 21 February 2013 12:04:31AM 0 points [-]

"Not quite anything, since the size and complexity of conscious thought is bounded by the human brain. But that is not relevant to this discussion of ethics."

Indeed, conscious experience may be bounded by the size and complexity of brains or similar machinery, of humans, other animals, and cyborgs. Theoretically, conscious perceptions may be able to be anything (or nearly anything), as we could theorize about brains the size of Jupiter or much larger. You get the point.

"Should I interpret this as you defining ethics as good and bad feelings?"

Almost. Not ethics, but ethical value in a direct, ultimate sense. There is also indirect value, which is things that can lead to direct value, which are myriad; and ethics is much more than defining value: it comprises laws, decision theory, heuristics, empirical research, and many theoretical considerations. I'm aware that Eliezer has written a post on Less Wrong saying that ethical value does not rest on happiness alone. Although happiness alone is not my proposition, I find his post on the topic quite poorly developed, and really not an advisable read.

"So, do you endorse wireheading?"

This depends very much on the context. All else being equal, wireheading could be good for some people, depending on the implications of it. However, all else seems hardly equal in this case. People seem to have a diverse spectrum of good feelings that may not be covered by the wireheading (such as love, some types of physical pleasure, good smell and taste, and many others), and the wireheading might prevent people from being functional and acting to increase ethical value in the long term, possibly negating its benefits. I see wireheading, in the sense of artificial paradise simulations, as a possibly desirable condition in a rather distant future of ideal development and post-scarcity, though.

Comment author: gokfar 21 February 2013 12:03:20AM *  0 points [-]

Use fun experiments to teach the scientific method instead of trying to impart to them a superficial understanding of chemistry and geology (in the case of the volcano).

For math, try to address any form of mathematical anxiety. I think that is more important than whatever knowledge you could teach them, but if you can make it engaging I recommend introducing some logic and naive set theory. Combinatorial problems are also easily illustrated with physical objects and can serve as an introduction to probability theory.
