Comment author: dthunt 23 October 2014 05:26:17PM 46 points

Survey complete!

I'm kind of surprised at how much better I feel I've gotten at reasoning about these really fuzzy estimates. One of my big goals last year was "get better at reasoning about really fuzzy things", and I feel like I've actually made big progress on that?

I'm really excited to see what the survey results look like this year. I'm hoping we've gotten better about overconfidence!

The gender default thing took me by surprise. I'm guessing that a lot of people answer yes to having a strong gender identity?

In response to comment by kilobug on On Caring
Comment author: MugaSofer 10 October 2014 01:47:24PM 6 points

I'm not a vegetarian; it would be quite hypocritical for me to invest resources in saving one bird for "care" reasons and then go eat a chicken at dinner.

This strikes me as backward reasoning - if your moral intuitions about large numbers of animals dying are broken, isn't it much more likely that you made a mistake about vegetarianism?

(Also, three dollars isn't that high a value to place on something. I can definitely believe you get more than $3 worth of utility from eating a chicken. Heck, the chicken probably cost a good bit more than $3.)

In response to comment by MugaSofer on On Caring
Comment author: dthunt 17 October 2014 06:08:22PM 2 points

Hey, I just wanted to chime in here. I found the moral argument against eating animals compelling for years but lived fairly happily in conflict with my intuitions there. I was literally saying, "I find the moral argument for vegetarianism compelling" while eating a burger, and feeling only slightly awkward doing so.

It is in fact possible (possibly common) for people to 'reason backward' from behavior (eat meat) to values ("I don't mind large groups of animals dying"). I think that particular example CAN be consistent with your moral function (if you really don't care about non-human animals very much at all) - but by no means is that guaranteed.

In response to comment by dthunt on Questions on Theism
Comment author: V_V 10 October 2014 09:28:48AM 1 point

The problem is that people with serious psychotic disorders, the type of people who have this kind of hallucination while not on drugs, are not just "hearing voices" and "seeing things" as if somebody had hacked into their auditory and optic nerves and inserted extraneous signals. These people are really incapable of thinking clearly and rationally evaluating evidence.
It is probably like they are living in a dream-like state while they are awake.

In response to comment by V_V on Questions on Theism
Comment author: dthunt 10 October 2014 02:19:56PM 0 points

Yeah, what I didn't say is, "If I became psychotic and had a hallucination of god, I would probably not believe it long-term." There are other reasons people can arrive at a state where they have hallucinations. If you break my critical faculties, then I'm far less likely to reason well.

I was able to find numbers suggesting that perhaps 1 in 4 people with schizophrenia have religious hallucinations, but I was unable to find out what percentage of people who report religious hallucinations suffer serious psychotic disorders. I do know that religious visions are widely claimed within certain communities; that various drugs, sleep deprivation, and stress all raise the odds of having hallucinations; and that there are perhaps 3% schizotypal folks out there, who are somewhat likely to be hallucinators but may not meet your bar for "serious psychotic disorder".

I've sort of been assuming that while hallucinations are a fairly strong predictor of, say, "schizophrenia", factors other than serious-brain-whammy drive the bulk of religious hallucinations.

Comment author: ChristianKl 09 October 2014 09:26:53PM 3 points

Even without propagation of math lessons, it's generally taught that evolution doesn't find optimal solutions, just solutions that are good enough.

It's also worth noting that if you make an infinite number of minor design changes you can find global maxima. If I remember right, the Metropolis–Hastings algorithm does get you to a global maximum provided you tune the parameters right and wait long enough. It might take longer than trying every single possible value, but if you just wait long enough you will get to your maximum.

Biologists also are often happy with solutions that aren't 100% perfect. The standard for truth is often statistical significance.
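
[Editor's note: to make the escape-from-local-maxima point concrete, here is a minimal sketch, not anything from the thread. The objective f, the temperature, and the step size are all made-up illustrative choices; the point is only that occasionally accepting downhill moves lets the search leave a local peak's basin, which a pure hill-climber cannot do.]

    import math
    import random

    def f(x):
        # Illustrative multimodal objective: several local peaks, one global one.
        return math.sin(5 * x) + 0.5 * x - 0.1 * x * x

    def mh_search(steps=100000, step_size=0.5, temperature=0.2):
        x = 0.0                                        # arbitrary starting point
        best_x, best_f = x, f(x)
        for _ in range(steps):
            proposal = x + random.gauss(0, step_size)  # symmetric proposal
            delta = f(proposal) - f(x)
            # Accept with probability min(1, exp(delta / T)): uphill moves are
            # always taken, downhill moves sometimes, so the chain can wander
            # out of a local maximum given enough time.
            if random.random() < math.exp(min(0.0, delta / temperature)):
                x = proposal
            if f(x) > best_f:
                best_x, best_f = x, f(x)
        return best_x, best_f

    print(mh_search())

[Run long enough, the best point found converges on the global peak with high probability, at the cost of many wasted steps; a companion greedy-climber sketch appears further down the page.]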

Comment author: dthunt 10 October 2014 06:59:44AM 0 points

Yes, I agree with everything you say (well, I don't know the M-H algorithm, but I'll take that on faith).

I mentioned this explicitly because it's mindblowingly bad to see someone with this background saying this, when he says so many other smart things that clearly imply he understands the general principle that local optima are not global optima.

What he didn't say is, "This enzyme works really well, and we can be pretty confident evolution has tried out most of the easy modifications on the current structure. It's not perfect (admittedly), but it's locally pretty good."

It was more along the lines of, "We can be confident this is the best possible version of this enzyme."

Anyway, a single human biologist isn't the point. I'm much more interested in questions like: how often can I invoke local optima in an argument and have people know what I mean, rather than think I'm crazy for suggesting there are other hills that might be stood upon?

Comment author: Nornagest 09 October 2014 05:20:19PM 2 points

I think I'd expect PhD biologists at good universities (or, at least, those working with evolutionary systems) to be aware that hill-climbing processes often get stuck in local optima.
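
[Editor's note: as a companion to the Metropolis–Hastings sketch above, and again an illustration with a made-up objective rather than anything from the thread, here is a greedy hill-climber exhibiting exactly that failure mode: it stops at the first peak it reaches.]

    import math

    def f(x):
        # Same illustrative multimodal objective as in the sketch above.
        return math.sin(5 * x) + 0.5 * x - 0.1 * x * x

    def hill_climb(x, step=0.01):
        # Move uphill while possible; stop when neither neighbor is higher.
        while True:
            if f(x + step) > f(x):
                x += step
            elif f(x - step) > f(x):
                x -= step
            else:
                return x      # a local optimum, not necessarily the global one

    print(hill_climb(0.0))    # stalls on whichever peak is nearest the start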

Comment author: dthunt 09 October 2014 06:20:56PM 0 points

I would assume the same, but unfortunately... that's a real-life thing that I heard one say in a lecture. Well, not "Global maximum!" exactly, but something with essentially identical meaning, without any subtext of acknowledging the error.

People may be aware of a lesson learned from math, but not propagate it through all their belief systems.

In response to comment by VAuroch on On Caring
Comment author: AnthonyC 09 October 2014 01:02:33PM 0 points

Also, I forget which post (or maybe HPMOR chapter) I got this from, but... it is not useful to assign fault to a part of the system you cannot change, and dividing by the size of the pre-existing altruist (let alone EA) community still leaves things feeling pretty huge.

In response to comment by AnthonyC on On Caring
Comment author: dthunt 09 October 2014 06:16:29PM 1 point

It's Harry talking about blame, in chapter 90. (It's not very spoilery, but I don't know how the spoiler syntax works and failed after trying for a few minutes.)

"That's not how responsibility works, Professor." Harry's voice was patient, like he was explaining things to a child who was certain not to understand. He wasn't looking at her anymore, just staring off at the wall to her right side. "When you do a fault analysis, there's no point in assigning fault to a part of the system you can't change afterward, it's like stepping off a cliff and blaming gravity. Gravity isn't going to change next time. There's no point in trying to allocate responsibility to people who aren't going to alter their actions. Once you look at it from that perspective, you realize that allocating blame never helps anything unless you blame yourself, because you're the only one whose actions you can change by putting blame there. That's why Dumbledore has his room full of broken wands. He understands that part, at least."

I don't think I understand what you wrote there, AnthonyC; world-scale problems are hard, not immutable.

In response to comment by AnthonyC on On Caring
Comment author: dthunt 09 October 2014 06:11:59PM 1 point

Having a keen sense for problems that exist, and wanting to demolish them and fix the place from which they spring, is not an instinct to quash.

That it causes you emotional distress IS a problem, insofar as you have the ability to perceive and want to fix problems in the absence of the distress. You can test that by finding something you viscerally do not care about and seeing how well your problem-finder works on it; if it works fine, the emotional reaction is not helping, fixing it will make you feel better, and it won't come at the cost of smashing your instincts to fix the world.

In response to comment by dthunt on Questions on Theism
Comment author: Lumifer 09 October 2014 05:41:39PM 2 points

If I hallucinated a discussion with God, I would probably not be long-term convinced of it.

Heh. Here be circles :-)

If I hallucinated a discussion with God, I would probably not be long-term convinced of it.

If I had a real discussion with God, I would probably be long-term convinced of it.

why did I add anything past the first sentence?

That's how conversations go...

Comment author: dthunt 09 October 2014 05:55:48PM 0 points

If I had a REAL discussion with Actual God, he might just rewire me because I had a bug, and he's a cool guy.

Alternatively, I might ask God for evidence that he's God, or at least an awesome alien teenager with big angelic powers, and get some predictions and stuff out of him that I could use to verify that something incredible is in fact happening, because, hey, I'm human, and humans occasionally hallucinate, and I would probably like to have sound arguments, to share with other people, that I really did have a discussion with a guy with big angelic powers.

But if he can't deliver on that stuff, the fact that I had a memory of talking to God, with strong emotions and stuff attached to it, would probably not stand up to the amount of scrutiny that I'm likely to throw at it.

In response to On Caring
Comment author: dthunt 09 October 2014 05:49:56PM 1 point

I would like to subscribe to your newsletter!

I've been frustrated recently by people not realizing that they're effectively arguing that if you divide responsibility up until each share is a very small quantity, it just goes away.

In response to comment by RichardKennaway on On Caring
Comment author: Lumifer 09 October 2014 05:13:35PM 2 points

These verbal contortions do not look convincing.

The claimed moral equivalence is between buying shoes and killing -- not saving -- a child. It's also a claimed equivalence between actions, not between values.

In response to comment by Lumifer on On Caring
Comment author: dthunt 09 October 2014 05:35:25PM 0 points

Reminds me of the time the Texas state legislature forgot that 'similar to' and 'identical to' are reflexive.

I'm somewhat persuaded by arguments that choices not made, when they have consequences (like X preventably dying), can have moral costs.

Not INFINITELY EXPLODING costs, which is what you would need in order to experience the full brunt of the responsibility of "We are the last two people alive, and you're dying right in front of me, and I could help you, but I'm not going to" every time you decide whether to buy shoes, when there are 7 billion of us, you're actually dying over there, and someone closer to you is not helping you.
