
Comment author: hairyfigment 18 September 2017 09:09:59PM 0 points [-]

Sort of reminds me of that time I missed out on a lucid dream because I thought I was in a simulation. In practice, if you see a glitch in the Matrix, it's always a dream.

I find it interesting that we know humans are inclined to anthropomorphize, or see human-like minds everywhere. You began by talking about "entities", as if you remembered this pitfall, but it doesn't seem like you looked for ways that your "deception" could stem from a non-conscious entity. Of course the real answer (scenario 1) is basically that. You have delusions, and their origin lies in a non-conscious Universe.

Comment author: Erfeyah 18 September 2017 08:35:14PM 0 points [-]

Good idea, let me try that.

I am pointing to his argument on our [communication] of moral values as cultural transmission through imitation, rituals, myth, stories, etc., and the [indication of their correspondence with actual characteristics of reality] due to their development through the evolutionary process, as the best rational explanation of morality I have come across.

And you should care because... you care about truth, and also because, if true, you can pay some attention to the wisdom traditions and their systems of knowledge.

Comment author: hairyfigment 18 September 2017 08:58:15PM 0 points [-]

The second set of brackets may be the disconnect. If "their" refers to moral values, that seems like a category error. If it refers to stories, etc., that still seems like a tough sell. Nothing I see about Peterson or his work looks encouraging.

Rather than looking for value you can salvage from his work, or an 'interpretation consistent with modern science,' please imagine that you never liked his approach and ask why you should look at this viewpoint on morality in particular rather than any of the other viewpoints you could examine. Assume you don't have time for all of them.

If that still doesn't help you see where I'm coming from, consider that reality is constantly changing and "the evolutionary process" usually happened in environments which no longer exist.

Comment author: Erfeyah 15 September 2017 03:57:58PM 1 point [-]

Cool. Peterson is much clearer than Jung (about whom I don't have a clear opinion). I am not claiming that everything Peterson says is correct or that I agree with all of it. I am pointing to his argument for the basis of morality in cultural transmission through imitation, rituals, myth, stories, etc., and the grounding of these structures in the evolutionary process, as the best rational explanation of morality I have come across. I have studied it in depth and I believe it to be correct. I am inviting engagement with the argument instead of biased rejection.

Comment author: hairyfigment 18 September 2017 08:11:19PM 0 points [-]

Without using terms such as "grounding" or "basis," what are you saying and why should I care?

Comment author: Erfeyah 18 September 2017 06:16:27PM 0 points [-]

Thanks for the pointer to the zombie sequence. I've read part of it in the past and did not think it addressed the issue, but I will revisit it.

"What about it seems worth refuting?"

Well, the way it shows that you cannot get consciousness from syntactic symbol manipulation. And a Bayesian update is also a type of syntactic symbol manipulation, so I am not clear why you are treating it differently. Are you sure you are not assuming that consciousness arises algorithmically in order to justify your conclusion, thus introducing circularity into your logic?

I don't know. Many people reject the 'Chinese room' argument as naive, but I haven't yet understood why, so I am honestly open to the possibility that I am missing something.

Comment author: hairyfigment 18 September 2017 07:48:14PM 0 points [-]

I repeat: show that none of your neurons have consciousness separate from your own.

Why on Earth would you think Searle's argument shows anything, when you can't establish that you aren't a Chinese Gym? In order to even cast doubt on the idea that neurons are people, don't you need to rely on functionalism or a similar premise?

Comment author: Erfeyah 15 September 2017 09:24:01PM *  2 points [-]

I was wondering if someone could point me to good LW articles or refutations of Searle's Chinese room argument, and of his views on consciousness in general. A search comes up with a lot of articles mentioning it, but I assume it is addressed in some form in the sequences?

Comment author: hairyfigment 18 September 2017 06:01:42AM *  0 points [-]

What about it seems worth refuting?

The Zombie sequence may be related. (We'll see if I can actually link it here.) As far as the Chinese Room goes:

  • I think a necessary condition for consciousness is approximating a Bayesian update. So in the (ridiculous) version where the rules for speaking Chinese have no ability to learn, they also can't be conscious. (A minimal sketch of this contrast follows the list.)
  • Searle talks about "understanding" Chinese. Now, the way I would interpret this word depends on context - that's how language works - but normally I'd incline towards a Bayesian interpretation of "understanding" as well. So this again might depend on something Searle left out of his scenario, though the question might not have a fixed meaning.
  • Some versions of the "Chinese Gym" have many people working together to implement the algorithm. Now, your neurons are all technically alive in one sense. I genuinely feel unsure how much consciousness a single neuron can have. If I decide to claim it's comparable to a man blindly following rules in a room, I don't think Searle could refute this. (I also don't think it makes sense to say one neuron alone can understand Chinese; neurologists, feel free to correct me.) So what is his argument supposed to be?
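
To make the contrast in the first bullet concrete, here is a minimal sketch of my own (not from the original comment, and not a model of consciousness): a fixed rule table returns the same answers forever, while a system that approximates a Bayesian update revises its belief as evidence arrives. The rule-table entries and the likelihood numbers are made up for illustration.

```python
# Toy contrast only: a static lookup table never learns, while a Bayesian
# updater changes its internal state with each observation.

FIXED_RULES = {"ni hao": "ni hao", "xie xie": "bu ke qi"}  # static rule book

def fixed_reply(prompt: str) -> str:
    """Same output forever, no matter what has been observed."""
    return FIXED_RULES.get(prompt, "...")

def bayes_update(prior: float, lik_if_true: float, lik_if_false: float) -> float:
    """Posterior probability of a hypothesis after one observation."""
    numerator = prior * lik_if_true
    return numerator / (numerator + (1 - prior) * lik_if_false)

belief = 0.5
for _ in range(3):
    # each supporting observation shifts the belief; the rule table above
    # has no analogous state to shift
    belief = bayes_update(belief, lik_if_true=0.8, lik_if_false=0.3)
print(belief)  # ~0.95 after three supporting observations
```

After three supportive observations the updater's probability climbs from 0.5 to about 0.95, while fixed_reply would still return exactly what it returned on day one.
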
Comment author: DragonGod 16 September 2017 08:28:55PM 0 points [-]

I'm from Nigeria, and not conversant enough with American politics for the above to be meaningful to me. Please enlighten me?

Comment author: hairyfigment 16 September 2017 10:44:35PM 0 points [-]

Do you know what the Electoral College is? If so, see here:

The single most important reason that our model gave Trump a better chance than others is because of our assumption that polling errors are correlated.

Comment author: DragonGod 15 September 2017 12:58:58PM *  0 points [-]

The Conjunction Fallacy Fallacy

Conjunction Fallacy Fallacy: we should be wary of saying that the conjunction of two unlikely events must be much more unlikely than the occurrence of a single one of those events alone, because sometimes the events are strongly connected.

Example

Has anyone seen other examples in the wild?

Comment author: hairyfigment 16 September 2017 07:35:47PM 0 points [-]

Arguably, claims about Donald Trump winning enough states - but Nate Silver didn't assume independence, and his site still gave the outcome a low probability.
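
The lines below are a rough, self-contained sketch of that point (my own illustration, not FiveThirtyEight's actual model); the six "must-win" states, the 2-point deficit, and the 3-point error spread are made-up numbers chosen only to show the effect of correlated polling errors on a conjunction.

```python
# Sketch: how correlated polling errors inflate the probability of winning
# every swing state at once, relative to the independence assumption.
import random

N_TRIALS = 100_000
N_STATES = 6          # hypothetical "must-win" states
STATE_MARGIN = -0.02  # candidate trails by 2 points in each state's polls
ERROR_SD = 0.03       # standard deviation of polling error

def p_sweep(correlated: bool) -> float:
    """Estimated probability of winning all N_STATES states at once."""
    sweeps = 0
    for _ in range(N_TRIALS):
        shared = random.gauss(0, ERROR_SD)  # common national polling error
        wins = 0
        for _ in range(N_STATES):
            err = shared if correlated else random.gauss(0, ERROR_SD)
            if STATE_MARGIN + err > 0:
                wins += 1
        sweeps += (wins == N_STATES)
    return sweeps / N_TRIALS

print("independent errors:", p_sweep(False))  # roughly 0.25 ** 6, a fraction of a percent
print("correlated errors: ", p_sweep(True))   # roughly 0.25, the single-state probability
```

With a fully shared error, sweeping all six states is about as likely as winning any single one of them (roughly 25% here), while treating the states as independent drives the same conjunction down to a fraction of a percent.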

Comment author: ImmortalRationalist 20 July 2017 10:39:25AM 1 point [-]

Eliezer Yudkowsky wrote this article about the two things that rationalists need faith to believe in: that the statement "Induction works" has a sufficiently large prior probability, and that some single large ordinal that is well-ordered exists. Are there, as yet, any ways to justify belief in either of these two things that do not require faith?

Comment author: hairyfigment 19 August 2017 10:41:39PM 0 points [-]

Not exactly. MIRI and others have research on logical uncertainty, which I would expect to eventually reduce the second premise to induction. I don't think we have a clear plan yet showing how we'll reach that level of practicality.

Justifying a not-super-exponentially-small prior probability for induction working feels like a category error. I guess we might get a kind of justification from better understanding Tegmark's Mathematical Macrocosm hypothesis - or, more likely, understanding why it fails. Such an argument will probably lack the intuitive force of 'Clearly the prior shouldn't be that low.'
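
As a back-of-the-envelope illustration (my own, not part of the original comment) of why the size of that prior matters, write Bayes' theorem in odds form:

```latex
\frac{P(H \mid E)}{P(\neg H \mid E)}
  = \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}
```

Starting from prior odds of roughly 2^-n, reaching even 1:1 posterior odds requires a cumulative likelihood ratio of about 2^n, i.e. on the order of n bits of evidence. If n is super-exponentially large, no physically feasible number of observations supplies that many bits, so the prior has to be set high enough at the outset rather than argued up from evidence.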

Comment author: ImmortalRationalist 19 August 2017 02:11:10AM 0 points [-]

On a related question: if Unfriendly Artificial Intelligence is developed, how "unfriendly" is it expected to be? The most plausible-sounding outcome may be human extinction. The worst-case scenario would be the UAI actively torturing humanity, but I can't think of many scenarios in which this would occur.

Comment author: hairyfigment 19 August 2017 10:31:06PM 0 points [-]

I would only expect the latter if we started with a human-like mind. A psychopath might care enough about humans to torture you; an uFAI not built to mimic us would just kill you, then use you for fuel and building material.

(Attempting to produce FAI should theoretically increase the probability by trying to make an AI care about humans. But this need not be a significant increase, and in fact MIRI seems well aware of the problem and keen to sniff out errors of this kind. In theory, an uFAI could decide to keep a few humans around for some reason - but not you. The chance of it wanting you in particular seems effectively nil.)

Comment author: SnowSage4444 18 March 2017 03:01:28PM 0 points [-]

No, really, what?

What "Different rules" could someone use to decide what to believe, besides "Because logic and science say so"? "Because my God said so"? "Because these tea leaves said so"?

Comment author: hairyfigment 20 March 2017 06:32:33PM 0 points [-]

Yes, but as it happens that kind of difference is unnecessary in the abstract. Besides the point I mentioned earlier, you could have a logical set of assumptions for "self-hating arithmetic" that proves arithmetic contradicts itself.

Completely unnecessary details here.
