Comment author: MrMind 30 September 2016 08:04:51AM 1 point [-]

The problem with depression is that it skews your entire ability to think clearly and rationally about the future. You're no longer "a rational agent" but "a depressed agent", and that's really bad.
From an outside view, of course only very extreme pain or the certainty of inevitable decline is worth the catastrophic cost of death, but from the POV of a depressed person, all of the future is bad, black, and meaningless, and death often seems the natural way out.

Comment author: Dagon 30 September 2016 04:09:33PM 1 point [-]

Absolutely! Depression changes one's priors and one's perception of evidence, making a depressed agent even further from rational than non-depressed humans (who are even so pretty far from purely rational).

That said, all agents must make choices - that's why we use the term "agent". And even depressed agents can analyze their options using the tools of rationality, and (I hope) make better choices by doing so. It does require more care and use of the outside view to somewhat correct for depression's compromised perceptions.

Also, I'm very unsure what the threshold is where an agent would be better off abandoning attempts to rationally calculate and just accept their group's deontological rules. It's conceivable that if you don't have strong outside evidence that you're top-few-percent of consequentialist predictors of action, you should follow rules rather than making decisions based on expected results. I don't personally like that idea, and it doesn't stop me ignoring rules I don't like, but I acknowledge that I'm probably wrong in that.

Specifically, "Suicide: don't do it" seems like a rule to give a lot of weight to, as the times you're most likely to be tempted are the times when you're estimating the future badly, and those are the times you should give the most weight to rules rather than rationalist calculations.

Comment author: Clarity 29 September 2016 06:58:58AM 0 points [-]

I'm going to contain anything I post to this thread, just in case it's nonsense. I was just thinking of asking: Is it rational to 'go to Belgium' as they say - to commit suicide as a preventative measure to avoid suffering?

Comment author: Dagon 29 September 2016 02:01:17PM 0 points [-]

I suspect there are cases where a perfectly rational, knowledgeable agent could prefer the suffering of death over the suffering of continued life.

Agents with less calculating power and with less predictive power over their possible futures (say, for instance, humans) should have an extremely low prior about this, and it's hard to imagine the evidence that would bump it into the positive.

Comment author: Houshalter 21 September 2016 08:05:36PM 0 points [-]

Well what if suicide is illegal in the future? And even if it isn't, suicide is really hard to go through with. A lot of people have preferences that they would prefer not to be revived with brain damage, but people with brain damage do not commonly kill themselves.

Comment author: Dagon 22 September 2016 04:23:47PM 2 points [-]

I see this combination of expressed preference and actions (would prefer not to live with brain damage, but then actually choose to live with brain damage) as a failure of imagination and incorrect far-mode statements, NOT as an indication that the prior statement was true but was thwarted by some outside force.

Future-me instances have massively more information about what they're experiencing in the future than present-me has now. It's ludicrous for present-me to try to constrain future-me's decisions, and even more so to try to identify situations where present-me's wishes will be honored but future-me's decisions won't.

You can prevent adverse revival by cremation or burial (in which case you also prevent felicitous revival). If an evil regime wants you, any contract language is useless. If an individual-respecting regime considers your revival, future-you would prefer to be revived and asked rather than being held to a past-you document that cannot predict the details of the current situation very well.

Comment author: Dagon 19 September 2016 02:17:43AM 4 points [-]

Yikes. If your lunatic sensor didn't go off reading this, you should get it adjusted.

From a theoretical standpoint, democratic meritocracies should evolve five IQ defined 'castes', The Leaders, The Advisors, The Followers, The Clueless and The Excluded.

If that doesn't bother you, notice that this guy is putting a lot of weight on really simplistic statistics about the edge cases (the half-percent or less of the population which is very smart and/or is "successful in" one of his preferred "intellectually elite professions"). Oh, I see Gwern actually commented on this already.

Basically, this is a lovely irony of a presumed-high-IQ author jumping to a pretty ridiculous conclusion because he's not willing/able to try to dissolve his questions and do the hard work to be rigorous in his research.

Comment author: The_Jaded_One 12 September 2016 09:27:30AM 0 points [-]

I don't see why this article is on -1 karma at the moment. It's an interesting topic.

Comment author: Dagon 12 September 2016 01:59:06PM 2 points [-]

Interesting, but not relevant to rational thinking. And politics.

Comment author: WhySpace 30 August 2016 05:01:24AM *  7 points [-]

Here's the problem with talking x-risk with cynics who believe humanity is a net negative, and also a couple possible solutions.

Frequently, when discussing the great filter, or averting nuclear war, someone will bring up the notion that it would be a good thing. Humanity has such a bad track record on environmental responsibility and on human rights abuses toward less advanced civilizations that the planet, and by extension the universe, would be better off without us. Or so the argument goes. I've even seen countersignaling severe enough to argue, somewhat seriously, in favor of building more nukes and weapons, out of a vague but general hatred for our collective insanity, politics, pettiness, etc.

Obviously these aren't exactly careful, step-by-step arguments, where if I refute some point they'll reverse their decision and decide we should spread humanity to the stars. It's a very general, diffuse dissatisfaction, and if I were to refute any one part, the response would be "ok sure, but what about [lists a thousand other things that are wrong with the world]". It's like fighting fog, because it's not their true objection, at least not quite. It's not like either of us feels like we're on opposite sides of a debate or anything though, so usually pointing out a few simple facts is enough to get a concession that there are exceptions to the rule "humanity sucks". However, obviously refuting all thousand things, one by one, isn't a sound strategy. There really is a lot of bad stuff that humanity has done, and will continue to do, I'm sure.

Usually, I try to point at broad improving trends in infant mortality, war, extreme poverty, etc. I'll argue that the media biases our fears by magnifying all the problems that remain. I paint a rosy future of people fighting debtors' prisons in the past, debating universal healthcare today, and in the future arguing fiercely over whether money and work are needed at all in their post-scarcity Star Trek economy. Political rights for minorities yesterday, social justice today, arguments over any minor inconveniences tomorrow. Starvation yesterday, healthy food for all today, gourmet delicacies free next to drinking fountains tomorrow. I figure they're more likely to accept a future where we never stop arguing, but do so over progressively more petty things, and never realize we're in a utopia.

However, I think I might have better luck trying to counter-countersignal. "Yeah, humanity is pretty messed up, but why do you want to put us out of our misery? Shouldn't we be made to suffer through climate change and everything else we've brought on ourselves, instead of getting off easy? Imagine another thousand years of inane cubicle work and a dozen more Trump presidencies. Maybe we'll learn our lesson." [Obviously, I'm joking here.]

I think this might have the advantage of aligning their cynicism with their more charitable impulses, at least the way my conversations tend to go. And there's no impulse to counter-counter-counter-signal, because I've gone up a meta-level and made the counter-signaling game explicit, which releases all the fun available from being contrarian, and moves the conversation toward new sources of amusement. I'll bet we could then proceed to have interesting discussions on how to solve the world's problems. If whoever I'm musing with comes up with a few ideas of their own, maybe they'll even take ownership of the ideas, and start to actually care about saving the world in their own way. I can dream, I suppose.

Comment author: Dagon 30 August 2016 02:01:02PM 4 points [-]

You can also point out the contradiction that they don't seem to be in a hurry to take the obvious first step by killing themselves. Proving that they see at least one human life as a net positive. Then talk about everyone else they don't want to kill or prevent being born.

Be aware, though, that this isn't truth-seeking. It's debate for the fun of it.

Comment author: Manfred 26 August 2016 06:22:03PM 1 point [-]

I find this surprisingly unmotivating. Maybe it's because the only purpose this could possibly have is as blackmail material, and I am pretty good at not responding to blackmail.

Comment author: Dagon 27 August 2016 01:13:07AM 0 points [-]

You say blackmail, I say altruistic punishment.

In response to Hedging
Comment author: Dagon 26 August 2016 04:28:47PM 1 point [-]

It matters a lot who your audience is and what your goals are in a specific interaction. Fluttershy's points about status-signaling are a great example of ways that precision can be at odds with effectiveness.

Also, you're probably wrong in most of your frequency estimates. Section III of this SlateStarCodex post helps explain why - you live in a bubble, and your experiences are not representative of most of humanity.

Unless you're prepared to explain your reference set (20% of what, exactly?) and cite sources for your measures, it's worth acknowledging that you don't know what you're talking about, and perhaps just not talking about it.

Rather than caveating or specifying your degree of belief about the percentage and definition of evil men, just don't bother. Walk away from conversations that draw you into useless generalizations.

In other words, your example is mind-killing to start with. No communication techniques or caveats can make a discussion of how much you believe what percentage of men are evil work well. And I suspect that if you pick non-politically-charged examples, you'll find that the needed precision is already part of the discussion.

Comment author: WalterL 25 August 2016 08:27:21PM -2 points [-]

Saw the site mentioned on Breitbart:

Link: http://www.breitbart.com/tech/2016/03/29/an-establishment-conservatives-guide-to-the-alt-right/

Money Quote:

...Elsewhere on the internet, another fearsomely intelligent group of thinkers prepared to assault the secular religions of the establishment: the neoreactionaries, also known as #NRx.

Neoreactionaries appeared quite by accident, growing from debates on LessWrong.com, a community blog set up by Silicon Valley machine intelligence researcher Eliezer Yudkowsky. The purpose of the blog was to explore ways to apply the latest research on cognitive science to overcome human bias, including bias in political thought and philosophy.

LessWrong urged its community members to think like machines rather than humans. Contributors were encouraged to strip away self-censorship, concern for one’s social standing, concern for other people’s feelings, and any other inhibitors to rational thought. It’s not hard to see how a group of heretical, piety-destroying thinkers emerged from this environment — nor how their rational approach might clash with the feelings-first mentality of much contemporary journalism and even academic writing.

Led by philosopher Nick Land and computer scientist Curtis Yarvin, this group began a ..."

I wasn't around back in the day, but this is nonsense, right? NRx didn't start on LessWrong, yeah?

Comment author: Dagon 25 August 2016 08:45:00PM -1 points [-]

I was around back in the day, and can confirm that this is nonsense. NRx evolved separately. There was a period where it was of interest to and explored by a number of LW contributors, but I don't think any of the thought leaders of either group were significantly influential to the other.

There is some philosophical overlap in terms of truth-seeking and attempted distinction between universal truths and current social equilibria, but neither one caused nor grew from the other.

In response to comment by Dagon on Identity map
Comment author: turchin 16 August 2016 08:14:42AM 2 points [-]

I've seen people who attempted to do this in real life, and they say things like "my brain knows that he wants to go home" instead of "I want to go home".

The problem is that even if we get rid of an absolute Self and Identity, we still have a practical idea of "me", which is built into our brain, thinking, and language. Without it, any planning is impossible. I can't go to the shop without expecting that I will get dinner in an hour. But all the problems with identity are also practical: should I agree to be uploaded, etc.

There is also the problem of the oneness of subjective experience. That is, there is a clear difference between a situation where I will experience pain and one where another person will. While from an EA point of view they are almost the same, that is only a moral overlay on this fact.

In response to comment by turchin on Identity map
Comment author: Dagon 16 August 2016 03:50:57PM 1 point [-]

"my brain knows that he wants to go home" instead of "I want to go home".

I'll admit to using that framing sometimes, but mostly for amusement. In fact, it doesn't solve the problem, as now you have to define continuity/similarity for "my brain" - why is it considered the same thing over subsequent seconds/days/configurations?

I didn't mean to say (and don't think) that we shouldn't continue to use the colloquial "me" in most of our conversations, when we don't really need a clear definition and aren't considering edge-cases or bizarre situations like awareness of other timelines. It's absolutely a convenient, if fuzzy and approximate, set of concepts.

I just meant that in the cases where we DO want to analyze boundaries and unusual situations, we should recognize the fuzziness and multiplicity of concepts embedded in the common usage, and separate them out before trying to use them.
