You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

[Link] SSC: It's Bayes All The Way Up

2 Houshalter 28 September 2016 06:06PM

When does heritable low fitness need to be explained?

15 DanArmak 10 June 2015 12:05AM

Epistemic status: speculating about things I'm not familiar with; hoping to be educated in the comments. This post is a question, not an answer.

ETA: this comment thread seems to be leading towards the best answer so far.

There's a question I've seen many times, most recently in Scott Alexander's latest links thread. The latest variant goes like this:

Old question “why does evolution allow homosexuality to exist when it decreases reproduction?” seems to have been solved, at least in fruit flies: the female relatives of gayer fruit flies have more children. Same thing appears to be true in humans. Unclear if lesbianism has a similar aetiology.

Obligate male homosexuality greatly harms reproductive fitness. And so, the argument goes, there must be some other selection pressure, one great enough to overcome the drastic effect of not having any children. The comments on that post list several other proposed answers, all of them suggesting a tradeoff: a fitness cost in one place offset by a benefit elsewhere. For instance, that it pays to have some proportion of gay men who invest their resources in their nieces and nephews instead of their own children.
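The "invest in nieces and nephews" proposal is a kin-selection argument, which is usually formalized with Hamilton's rule (rB > C): a trait that costs its bearer C offspring can still spread if it yields B extra offspring for relatives of relatedness r. A minimal sketch, with illustrative numbers that are assumptions of mine, not figures from the post:

```python
# Hamilton's rule: a costly trait is favored when r * B > C.
# All numbers below are illustrative assumptions, not data.

def kin_selection_favored(r, benefit, cost):
    """True if the inclusive-fitness benefit to relatives outweighs the cost."""
    return r * benefit > cost

# Relatedness to a full sibling's child is 0.25.
r = 0.25

# Suppose forgoing reproduction costs roughly 2 own offspring. Then the
# "helpful uncle" would need to add strictly more than 8 extra surviving
# nieces/nephews before the trait pays for itself.
print(kin_selection_favored(r, benefit=8, cost=2))  # False: 0.25 * 8 = 2, not > 2
print(kin_selection_favored(r, benefit=9, cost=2))  # True:  0.25 * 9 = 2.25 > 2
```

The steep threshold this toy calculation implies is one reason the kin-investment explanation is often considered insufficient on its own.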

But how do we know if this is a valid question - if the situation really needs to be explained at all?

continue reading »

SSC Discussion: No Time Like The Present For AI Safety Work

6 tog 05 June 2015 02:34AM

(Continuing the posting of select posts from Slate Star Codex for comment here, for the reasons discussed in this thread, and as Scott Alexander gave me - and anyone else - permission to do with some exceptions.)

Scott recently wrote a post called No Time Like The Present For AI Safety Work. It lays out the argument for the importance of organisations like MIRI as follows, and explores the last two premises:

1. If humanity doesn’t blow itself up, eventually we will create human-level AI.

2. If humanity creates human-level AI, technological progress will continue and eventually reach far-above-human-level AI.

3. If far-above-human-level AI comes into existence, eventually it will so overpower humanity that our existence will depend on its goals being aligned with ours.

4. It is possible to do useful research now which will improve our chances of getting the AI goal alignment problem right.

5. Given that we can start research now we probably should, since leaving it until there is a clear and present need for it is unwise.

I placed very high confidence (>95%) on each of the first three statements – they’re just saying that if trends continue moving in a certain direction without stopping, eventually they’ll get there. I had lower confidence (around 50%) on the last two statements.
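Since the argument is a chain, the confidences above imply a rough joint probability for the whole conclusion. If we treat the five premises as independent (a simplifying assumption the post itself doesn't make), the arithmetic looks like this:

```python
# Joint probability of all five premises holding, assuming independence.
# The confidences come from the quoted assessment: >95% on premises 1-3,
# ~50% on premises 4-5; exact values here are illustrative.
confidences = [0.95, 0.95, 0.95, 0.50, 0.50]

joint = 1.0
for p in confidences:
    joint *= p

print(round(joint, 3))  # 0.214
```

So even granting near-certainty on the first three premises, the two ~50% premises dominate: the overall case comes out at roughly one in five, which is still plenty to motivate safety work.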

Commenters tended to agree with this assessment; nobody wanted to seriously challenge any of 1-3, but a lot of people said they just didn’t think there was any point in worrying about AI now. We ended up in an extended analogy about illegal computer hacking. It’s a big problem that we’ve never been able to fully address – but if Alan Turing had gotten it into his head to try to solve it in 1945, his ideas might have been along the lines of “Place your punch cards in a locked box where German spies can’t read them.” Wouldn’t trying to solve AI risk in 2015 end in something equally cringeworthy?

As always, it's worth reading the whole thing, but I'd be interested in the thoughts of the LessWrong community specifically.

SSC discussion: "bicameral reasoning", epistemology, and scope insensitivity

6 tog 27 May 2015 05:08AM

(Continuing the posting of select posts from Slate Star Codex for comment here, as discussed in this thread, and as Scott Alexander gave me - and anyone else - permission to do with some exceptions.)

Scott recently wrote a post called Bicameral Reasoning. It touches on epistemology and scope insensitivity. Here are some excerpts, though it's worth reading the whole thing:

Delaware has only one Representative, far fewer than New York’s twenty-seven. But both states have an equal number of Senators, even though New York has a population of twenty million and Delaware is uninhabited except by corporations looking for tax loopholes.

[...]

I tend to think something like “Well, I agree with this guy about the Iraq war and global warming, but I agree with that guy about election paper trails and gays in the military, so it’s kind of a toss-up.”

And this way of thinking is awful.

The Iraq War probably killed somewhere between 100,000 and 1,000,000 people. If you think that it was unnecessary, and that it was possible to know beforehand how poorly it would turn out, then killing a few hundred thousand people is a really big deal. I like having paper trails in elections as much as the next person, but if one guy isn’t going to keep a very good record of election results, and the other guy is going to kill a million people, that’s not a toss-up.

[...]

I was thinking about this again back in March when I had a brief crisis caused by worrying that the moral value of the world’s chickens vastly exceeded the moral value of the world’s humans. I ended up being trivially wrong – there are only about twenty billion chickens, as opposed to the hundreds of billions I originally thought. But I was contingently wrong – in other words, I got lucky. Honestly, I didn’t know whether there were twenty billion chickens or twenty trillion.

And honestly, 99% of me doesn’t care. I do want to improve chickens, and I do think that their suffering matters. But thanks to the miracle of scope insensitivity, I don’t particularly care more about twenty trillion chickens than twenty billion chickens.

Once again, chickens seem to get two seats to my moral Senate, no matter how many of them there are. Other groups that get two seats include “starving African children”, “homeless people”, “my patients in hospital”, “my immediate family”, and “my close friends”.

[...]

I’m tempted to say “The House is just plain right and the Senate is just plain wrong”, but I’ve got to admit that would clash with my own very strong inclinations on things like the chicken problem. The Senate view seems to sort of fit with a class of solutions to the dust specks problem where after the somethingth dust speck or so you just stop caring about more of them, with the sort of environmentalist perspective where biodiversity itself is valuable, and with the Leibnizian answer to Job.

But I’m pretty sure those only kick in at the extremes. Take it too far, and you’re just saying the life of a Delawarean is worth twenty-something New Yorkers.

Thoughts?

SSC discussion: growth mindset

7 tog 11 April 2015 03:13PM

(Continuing the posting of select posts from Slate Star Codex for comment here, as discussed in this thread, and as Scott gave me - and anyone else - permission to do with some exceptions.)

Scott Alexander recently posted about growth mindset, with a clarificatory followup post here. He discussed some possible weaknesses in its advocates' positions - as well as their strengths. Here's a quote outlining the positions discussed:

[Bloody Obvious Position]: innate ability might matter, but that even the most innate abilityed person needs effort to fulfill her potential. If someone were to believe that success were 100% due to fixed innate ability and had nothing to do with practice, then they wouldn’t bother practicing, and they would fall behind. [...]

[Somewhat Controversial Position]: The more children believe effort matters, and the less they believe innate ability matters, the more successful they will be. This is because every iota of belief they have in effort gives them more incentive to practice. A child who believes innate ability and effort both explain part of the story might think “Well, if I practice I’ll become a little better, but I’ll never be as good as Mozart. So I’ll practice a little but not get my hopes up.” A child who believes only effort matters, and innate ability doesn’t matter at all, might think “If I practice enough, I can become exactly as good as Mozart.” Then she will practice a truly ridiculous amount to try to achieve fame and fortune. This is why growth mindset works.

[Very Controversial Position]: Belief in the importance of ability directly saps a child’s good qualities in some complicated psychological way. It is worse than merely believing that success is based on luck, or success is based on skin color, or that success is based on whatever other thing that isn’t effort. It shifts children into a mode where they must protect their claim to genius at all costs, whether that requires lying, cheating, self-sabotaging, or just avoiding intellectual effort entirely. When a fixed mindset child doesn’t practice as much, it’s not because they’ve made a rational calculation about the utility of practice towards achieving success, it’s because they’ve partly or entirely abandoned success as a goal in favor of the goal of trying to convince other people that they’re Smart.


Carol Dweck unambiguously believes the Very Controversial Position. 

Slate Star Codex: alternative comment threads on LessWrong?

28 tog 27 March 2015 09:05PM

Like many Less Wrong readers, I greatly enjoy Slate Star Codex; there's a large overlap in readership. However, the comments there are far worse - for me, not worth reading. I think this is partly due to the lack of LW-style upvotes and downvotes. Have there ever been discussion threads about SSC posts here on LW? What do people think of the idea of occasionally having them? Does Scott himself have any views on this, and would he be OK with it?

Update:

The latest from Scott:

I'm fine with anyone who wants reposting things for comments on LW, except for posts where I specifically say otherwise or tag them with "things i will regret writing"

In this thread some have also argued for not posting the most hot-button political writings.

Would anyone be up for doing this? Ataxerxes started with "Extremism in Thought Experiments is No Vice".