Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: RainbowSpacedancer 21 January 2017 02:31:18PM *  1 point [-]

I'm working on an overview of the science on spiritual enlightenment. I'm also looking into who has credible claims to it, whether it is something worth pursuing and a survey of the methods used to get there.

If anyone knows someone (or is someone) that thinks they might be there or part-way there and who would be willing to chat a bit, that'd be lovely. If you've just dabbled in some mystical practices and had a few strange experiences and want to bounce some ideas around, that could be fun too.

Comment author: moridinamael 22 January 2017 03:44:39AM 2 points [-]

This blog doesn't appear to be active anymore, but it contains a lot of helpful ideas from an LWer who was an experienced meditator.

The blog led me to buy the book The Mind Illuminated which is a very clear, thorough, secular and neurologically sound (where possible) manual on attaining classical enlightenment through vipassana+mindfulness. I'm currently trying to follow its program as well as I can.

Comment author: NatashaRostova 20 January 2017 06:27:31PM 0 points [-]

Can we elect a dictator?

Comment author: moridinamael 20 January 2017 07:50:57PM 2 points [-]
Comment author: shev 19 January 2017 08:00:00PM *  2 points [-]

I only heard this phrase "postrationality" for the first time a few days ago, maybe because I don't keep up with the rationality-blog-metaverse that well, and I really don't understand it.

All the descriptions I come across when I look for them seem to describe "rationality, plus being willing to talk about human experience too", but I thought the LW-sphere was already into talking about human experience and whatnot. So is it just "we're not comfortable talking about human experience in the rationalist sphere so we made our own sphere"? That is, a cultural divide?

That first link writes "Postrationality recognizes that System 1 and System 2 (if they even exist) have different strengths and weaknesses, and what we need is an appropriate interplay between the two.". Yet I would imagine everyone on LW would be interested in talking about System 1 and how it works and anything interesting we can say about it. So what's the difference?

Comment author: moridinamael 19 January 2017 08:50:19PM 2 points [-]

Obviously different people do things for different reasons, but I infer that a lot of people started identifying as post-rationalist when they felt it was no longer cool to be associated with the rationalist movement. There have been a number of episodes of Internet drama over the last several years, any one of which might be alienating to some subset of people; those people might still like a lot of these ideas, but feel rejected from the "core group" as they perceive it.

The natural Schelling point for people who feel rejected by the rationality movement is to try to find a Rationality 2.0 movement that has all the stuff they liked without the stuff they didn't like. This Schelling point seems to be stable regardless of whether Rationality 2.0 has any actual content or clear definition.

Comment author: ingive 19 January 2017 06:27:41PM 0 points [-]

In which case, since your whole philosophy seems to depend on the universe not being deterministic, you should scream "oops!" and look for where you went wrong, not try to come up with some way to quickly patch over the problem without thinking about it too hard.

I'm glad that's been clarified; indeed it relies on the universe not being deterministic. However, I do think that agents who believe in a deterministic universe have an easier time going against their utility, so my philosophy might boil down more to one's emotions, which is probably what led Hume to philosophize about this in the first place. He apparently wrote a lot about the emotion/rationality duality, and probably contradicted himself on 'is-ought' in his own statements.

You learn that an innocent is going to be murdered. That 'is', so what force compels you to intervene?

Is tells me what I should write to your hypothetical scenario to align you more with reality, rather than continuing the intellectual masturbation. Which philosophers are notorious for, all talk, no action.

The universe is full of suffering. That 'is'. So you ought to spread and cause suffering? If not, what is your basis for saying so?

We are naturally aligned into the decrease of suffering, though I don't know exactly how; what is, is in every moment, whereas the low-hanging fruit has to be picked first, for example in poverty reduction. Long-term, probably raising the awareness of humans like you and I; the next item on the list might be existential risk reduction, which seems to be high expected value.

Comment author: moridinamael 19 January 2017 06:40:33PM 1 point [-]

Is tells me what I should write to your hypothetical scenario to align you more with reality, rather than continuing the intellectual masturbation. Which philosophers are notorious for, all talk, no action.

Not sure what this means. If "Just align with reality!" is your guiding ethical principle, and it doesn't return answers to ethical questions, it is useless.

We are naturally aligned into the decrease of suffering,

Naw, we're naturally aligned to decrease our own suffering. Our natural impulses and ethical intuitions are frequently mutually contradictory and a philosophy of just going with whatever feels right in the moment is (a) not going to be self-consistent and (b) pretty much what people already do, and definitely doesn't require "clicking".

Sufficiently wealthy and secure 21st century Westerners sometimes conclude that they should try to alleviate the suffering of others, for a complex variety of reasons. This also doesn't require "clicking".

By the way, you seem to have surrendered on several key points along the way without acknowledging or perhaps realizing it. I think it might be time for you to consider whether your position is worth arguing for at all.

Comment author: JacobLiechty 19 January 2017 05:39:26PM *  4 points [-]

As a data point, I definitely experienced a "loss of locus" on Less Wrong a couple years ago, when it seemed that the energy around the central themes had been dying down. There was less buzz about progress being made on what seemed like fundamental topics, and that lack of excitement drove away the high-quality participants.

I tend to think that the thing that can bring back LW would be similar sets of insights. While marginal improvements to moderation structure and visibility are great, people want to come back to LW for the same reasons that brought them here in the first place. The creation of other loci doesn't need to be either encouraged or discouraged; LW can just be one particular kind of locus for LW-type things.

I personally have been excited by the recent attempts at bringing it back, and I'm hungry for better and newer content and discussion. I think a huge topic that hasn't adequately been hashed out in an LW-type way is the spate of new writings roughly encompassing the "post" or "meta" rationalist sphere, with Keganism at the root. I've only seen a few brief, almost confused mentions of these writings in and around rationalist Facebook, but no longer-form, well-written, serious explorations in LW form. What's been fascinating about these mentions is the level of intrigue that rationalists seem to have for these ideas, without necessarily buying into them directly. There's an entire diaspora of rationalists almost afraid to identify as Keganites/Chapmanites/metarationalists, for fear it contravenes their rationalist principles. That seems at least good fodder for creating some more broken-down mathematical and philosophical explorations of where that intrigue comes from, how it relates to The Way, and possibly a more complete critique of the rationalist program without all the negativity.

Edit: And yes, this is a new account! I'm rebranding with my personal identity in the EA sphere, since I've begun to meet many people in real life and have continued plans to contribute and collaborate!

Comment author: moridinamael 19 January 2017 06:05:30PM *  6 points [-]

Google suggests nothing helpful to define Keganism, and that Keganites are humans from the planet Kegan in the Star Wars Expanded Universe. Could you point me to something about the Keganism you're referring to?

FWIW I view a lot of the tension between/within the rationality community regarding post-rationality as usually rooted in tribal identification more than concrete disagreement. If rationality is winning, then unusual mental tricks and perspectives that help you win are part of instrumental rationality. If some of those mental tricks happen to infringe upon a pristine epistemic rationality, then we just need a more complicated mental model of what rationality is. Or call it post-rationality, I don't really care, except for the fact that labels like post-rationality connotationally imply that rationality has to be discarded and replaced with some other thing, which isn't true. Rationality is and always was an evolving project and saying you're post- something that's evolving to incorporate new ideas is getting ahead of yourself.

In other words, any valid critique of rationality becomes part of rationality. We are Borg. Resistance is futile.

Comment author: ingive 19 January 2017 05:11:39PM 0 points [-]
  1. With that interpretation, not Copenhagen. I'm unsure, because can we really be certain of absolutes, given our lack of understanding of the human brain? I think that how memory storage and the brain work shows us that we can't be certain of our own knowledge.

  2. If you are right that the universe is deterministic, then what ought to be is what is. But if you ought to do the opposite of what 'is' tells us, what are you doing then? You are not allowed to have a goal which is not aligned with what is, because that goes against what you are. I do agree with you now, however; I think this is semantics. I think it was a heuristic. But then I'll say "What is, is what you ought to be".

Comment author: moridinamael 19 January 2017 05:32:33PM 1 point [-]

If reasonable people can disagree regarding Copenhagen vs. Many Worlds, then reasonable people can disagree on whether the universe is deterministic. In which case, since your whole philosophy seems to depend on the universe not being deterministic, you should scream "oops!" and look for where you went wrong, not try to come up with some way to quickly patch over the problem without thinking about it too hard.

Also: How could 'is' ever tell you what to do?

An innocent is murdered. That 'is'. So it's okay?

You learn that an innocent is going to be murdered. That 'is', so what force compels you to intervene?

The universe is full of suffering. That 'is'. So you ought to spread and cause suffering? If not, what is your basis for saying so?

Comment author: ingive 19 January 2017 03:51:36PM *  0 points [-]

If someone wins the Nobel prize, you heard it here first.

The is-ought problem implies that the universe is deterministic, which is incorrect; it's an infinite range of possibilities or probabilities which are consistent but can never be certain. Hume's beliefs about is-ought came from his own understanding of his emotions and the emotions of those around him. He correctly presumed that emotion is what drives us and that logic and rationality could not (thus no ought in any way, because things simply are), and he thought the universe is deterministic (without knowledge of the brain and QM). The insight he was not aware of is that even though his emotions are the driving factor, he can emotionally be with rationality, logic, and facts, so there is no ought to be derived from what is. 'What is' implies facts, rationality, logic and so on: EA/utilitarian ideas. The question about free will is an emotional one; if you are aware that your subjective reference frame, awareness, was a part of it, then you can let go of that.

Comment author: moridinamael 19 January 2017 04:21:35PM 1 point [-]
  1. The universe is deterministic.

  2. You seem to be misunderstanding is-ought. The point is that you cannot conclude what ought to be, or what you ought to do, from what is. You can conclude what you ought to do in order to achieve some specific goal, but you cannot infer "evolutionary biology, therefor effective altruism". You are inserting your own predisposition into that chain and pretending it is a logical consequence.

Comment author: ingive 19 January 2017 02:55:57PM 0 points [-]

You're welcome to explain why this isn't the case. I'm thinking mostly about neuroscience and evolutionary biology. It tells us everything.

Comment author: moridinamael 19 January 2017 03:08:20PM 1 point [-]

Is-ought divide. If you have solved this problem, mainstream philosophy wants to know.

Comment author: Tyrin 17 January 2017 11:32:57PM *  0 points [-]

I didn't mean 'similar'. I meant that it is equivalent to Bayesian updating with a lot of noise. The great thing about recursive Bayesian state estimation is that it can recover from noise by processing more data. Because of this, noisy Bayes is a strict subset of noise-free Bayes, meaning pure rationality is basically noise-free Bayesian updating. That idea contradicts the linked article claiming that rationality is somehow more than that.
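The recovery-from-noise claim can be sketched concretely. The following is my own toy illustration, not code from the comment; the true bias, noise rate, and grid resolution are arbitrary choices. A recursive Bayesian estimator tracks a coin's bias when every observation passes through a noisy channel; because the likelihood models the channel, the posterior still concentrates near the true bias as data accumulate.

```python
import random

random.seed(0)

TRUE_P = 0.7   # true probability of heads
FLIP = 0.1     # observation noise: each outcome is mis-reported with this prob

# Discrete grid of hypotheses about the coin's bias, uniform prior.
grid = [i / 100 for i in range(1, 100)]
posterior = {p: 1 / len(grid) for p in grid}

def likelihood(p, heads):
    # Probability of *reporting* heads given bias p and the noisy channel.
    p_report_heads = p * (1 - FLIP) + (1 - p) * FLIP
    return p_report_heads if heads else 1 - p_report_heads

for _ in range(2000):
    heads = random.random() < TRUE_P   # true outcome
    if random.random() < FLIP:         # the channel corrupts the report
        heads = not heads
    for p in grid:
        posterior[p] *= likelihood(p, heads)
    total = sum(posterior.values())    # renormalize after each update
    posterior = {p: w / total for p, w in posterior.items()}

mean = sum(p * w for p, w in posterior.items())
```

After a few thousand noisy observations the posterior mean sits close to 0.7, which is the sense in which processing more data lets the estimator recover from noise.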

There is no plausible way in which the process by which this meme has propagated can be explained by Bayesian updating on truth value.

An approximate Bayesian algorithm can temporarily get stuck in local minima like that. Remember also that the underlying criterion for updating is not truth, but reward maximization. It just happens to be the case that truth is extremely useful for reward maximization. Evolution did not manage to structure our species in a way that makes it obvious for us how to balance social, aesthetic, …, near-term, long-term rewards to get a really good overall policy in our modern lives (or really in any human life beyond multiplying our genes in groups of people in the wilderness). Because of this, people get stuck all the time in conformity, envy, fear, etc., when there are actually ways of suppressing ancient reflexes and emotions to achieve much higher levels of overall and lasting happiness.

Comment author: moridinamael 19 January 2017 03:01:36PM *  0 points [-]

Let's taboo "identical".

In the limit of time and information, natural selection, memetic propagation, and Bayesian inference all converge on the same result. (Probably(?))

In reality, in observable timeframes, given realistic conditions, neither natural selection nor memetic propagation will converge on Bayesian inference; if you try to model evolution or memetic propagation with Bayesian inference, you will usually be badly wrong, and sometimes catastrophically so; if you expect to be able to extract something like a Bayes score by observing the movement of a meme or gene through a population, the numbers you extract will be badly inaccurate most of the time.

Both of the above are true. I think you are saying the first one, while I am focusing on the second one. Do you agree? If so, our disagreement is a boring semantic one.
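The second claim, that evolution observed over realistic timeframes diverges from Bayesian inference, can be illustrated with a toy simulation (my own sketch; the population size, fitness values, and replicate count are arbitrary assumptions). Infinite-population replicator dynamics performs the same arithmetic as a Bayesian update with fitness as the likelihood, and it deterministically fixes the fitter allele; a finite population under drift sometimes loses that allele entirely, an outcome the Bayes-like model never produces.

```python
import random

random.seed(3)

N = 50                     # finite population size
FIT_A, FIT_B = 1.1, 1.0    # allele A has a 10% fitness edge

def replicator_step(freq_a):
    # Infinite-population selection: the same arithmetic as a Bayesian
    # update, with fitness playing the role of the likelihood.
    wa = freq_a * FIT_A
    wb = (1 - freq_a) * FIT_B
    return wa / (wa + wb)

def wright_fisher_step(freq_a):
    # Finite population: binomial resampling around the selection-shifted mean.
    p = replicator_step(freq_a)
    return sum(random.random() < p for _ in range(N)) / N

# Deterministic "Bayesian" trajectory: the fitter allele always fixes.
det = 0.1
for _ in range(200):
    det = replicator_step(det)

# Stochastic trajectories: drift sometimes eliminates the fitter allele.
losses = 0
for _ in range(200):
    f = 0.1
    for _ in range(200):
        f = wright_fisher_step(f)
    losses += (f == 0.0)
```

The deterministic trajectory ends near fixation at 1.0, while a substantial fraction of the finite-population runs end at exactly 0.0, which is the sense in which modeling short-run evolution as Bayesian updating can be badly wrong.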

Comment author: Dagon 19 January 2017 06:27:39AM 0 points [-]

"don't kill an operator" seems like something that can more easily be encoded into an agent than "allow operators to correct things they consider undesirable when they notice them".

In fact, even a perfectly corrigible agent with such a glaring initial flaw might kill the operator(s) before they can apply the corrections, not because they are resisting correction, but just because it furthers whatever other goals they may have.

Comment author: moridinamael 19 January 2017 02:44:29PM *  0 points [-]

You're exactly right, I think. IMO it may actually be easier to build an AI that can learn to want what some target agent wants, than to build an AI that lets itself be interfered with by some operator whose goals don't align with its own current goals.
