
Comment author: Qiaochu_Yuan 19 January 2017 04:56:12AM 0 points [-]

I would describe the problem as a combination of privileging the hypothesis and privileging the question. First, even granted that you want to both be consistent and help your friends, it's not clear that telling them about the seminar is the most helpful thing you can do for your friends; there are lots of other hypotheses you could try generating if you were given the time. Second, there are lots of other things you might want and do something about wanting, and someone's privileging the question by bringing these particular things to your attention in this particular way.

This objection applies pretty much verbatim to most things strangers might try to persuade you to do, e.g. donate money to their charity.

In response to comment by CCC on Crisis of Faith
Comment author: Jiro 18 January 2017 05:56:06PM *  1 point [-]

The predominant ways in which Christianity has spread are conversion by the sword, parent to child transmission, and social ostracism for people who refuse to believe it. It spreads for reasons related to its fitness as a system of ideas but unrelated to its factual truth. This is not how evolution spreads.

Also, distinguish between "anyone can claim X" and "anyone can correctly claim X". Creationists could claim that evolution spreads the same way--but they'd be wrong.

Comment author: Viliam 18 January 2017 03:36:46PM *  0 points [-]

There is a group (not CFAR) that allegedly uses the following tactics:

1) They teach their students (among other things) that consistency is good, and compartmentalization is bad and stupid.
2) They make the students admit explicitly that the seminar was useful for them.
3) They make the students admit explicitly that one of their important desires is to help their friends.
...and then...
4) They create strong pressure on the students to tell all their friends about the seminar, and to make them sign up for one.

The official reasoning is that if you want to be consistent, and if you want good things to happen to your friends, and if the seminar is a good thing... then logically you should want to make your friends attend the seminar. And if you want to make your friends attend the seminar, you should immediately take an action that increases the probability of that, especially if all it takes is picking up your phone and making a few calls!

If there is anything stopping you, then you are inconsistent -- which means stupid! -- and you have failed at the essential lesson that was taught to you during the previous hours -- which means you will keep failing at life, because you are a compartmentalizing loser, and you can't stop being one even after the whole process was explained to you in great detail, and you even paid a lot of money to learn this lesson! Come on, don't throw away everything, pick up the damned phone and start calling; it is not that difficult, and your first experience with overcoming compartmentalization will feel really great afterwards, trust me!

So, what exactly is wrong with this reasoning?

First, when someone says "A implies B", that doesn't mean you need to immediately jump and start doing B. There is still the possibility that A is false; and the possibility that "A implies B" is actually a lie. Or maybe "A implies B" only in some situations, or only with some probability. Probabilistic thinking and paying attention to detail are not the opposite of consistency.

Second, just because something is good doesn't mean it is the best available option. Maybe you should spend some time thinking about even better options.

Third, there is a difference between trying to be consistent and believing in your own infallibility. You are allowed to have probabilistic beliefs, and to admit openly that those beliefs are probabilistic. You can believe that A is true with probability 80% while admitting the possibility that A is false; that is not the opposite of consistency. Furthermore, you are allowed to take an outside view, and admit that with certain probability you are wrong. That is especially important in calculating the expected utility of actions that strongly depend on whether you are right or wrong.
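
To make this concrete, here is a toy expected-utility calculation (a minimal sketch in Python; the probabilities and utilities are made up purely for illustration):

    # Toy expected-utility sketch with made-up numbers.
    # Inside view: the seminar is good, so recruiting a friend seems obviously right.
    # Outside view: people in this situation are right, say, only 60% of the time.

    p_right = 0.6          # outside-view probability that your judgement is correct
    benefit_if_right = 10  # utility to a friend if the seminar really is good
    harm_if_wrong = -15    # utility to a friend if it is a waste of time and money

    expected_utility = p_right * benefit_if_right + (1 - p_right) * harm_if_wrong
    print(f"EU(recruiting a friend) = {expected_utility:.1f}")  # 0.6*10 - 0.4*15 = 0.0

An action that looks clearly positive from the inside can come out near zero, or negative, once you admit you might be wrong.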

Fourth, the most important consistency is internal. Just because you are internally consistent doesn't mean you have to explain all your beliefs truthfully and meaningfully to everyone, especially to people who are obviously trying to manipulate you.

...but if you learned about the concept of consistency just a few minutes ago, you probably don't realize all this.

In response to comment by Jiro on Crisis of Faith
Comment author: CCC 18 January 2017 10:45:17AM 2 points [-]

I think this fails in the case where the experts are infected by a meme plague.

Isn't this a Fully General Counterargument, though? Climate change deniers can claim that climate experts are 'infected by a meme plague'. Creationists can claim anyone who accepts evolution is 'infected by a meme plague'. So on and so forth.

In response to comment by arundelo on Crisis of Faith
Comment author: Jiro 18 January 2017 05:08:08AM 0 points [-]

[A]mateurs should not be voicing certitude in a matter still being debated by experts

I think this fails in the case where the experts are infected by a meme plague.

Comment author: Luke_A_Somers 16 January 2017 10:04:48PM 0 points [-]

Looking back over ancient posts, I saw this. I upvoted it earlier, and am leaving that, but I'd like to quibble with one thing:

this unnamed and possibly non-existent theory is less Kolmogorov-complex and hence more posterior-probable than the one that scientists are currently using

I think the bigger issue would be 'this unnamed and possibly non-existent theory is an accurate description of reality'. If it's more Kolmogorov-complex, so be it, that's the universe's prerogative. Increasing the Kolmogorov complexity decreases only our prior for it; it won't change whether it is the case.

Comment author: Luke_A_Somers 16 January 2017 09:22:42PM 0 points [-]

Yes, outcome. Look at what each word is actually describing. Irrationality is about process. Suboptimal is about outcome -- if you inefficiently but reliably calculate good strategies for action, that's being slow, not suboptimal in the sense we're talking about; suboptimality is not about process.

Comment author: hairyfigment 16 January 2017 06:53:26PM 0 points [-]

Outcome? I was going to say that suboptimal could refer to a case where we don't know if you'll reach your goal, but we can show (by common assumptions, let's say) that the action has lower expected value than some other action. "Irrational" does not have such a precise technical meaning, though we often use it for more extreme suboptimality.

Comment author: moridinamael 16 January 2017 04:29:46PM 4 points [-]

Let me attempt to explain it in my own words.

You have a thought, and then you have some kind of emotional reaction to it, and that emotional reaction should be felt in your body. Indeed, it is hard to have an emotion that doesn't have a physical component.

Say you think that you should call your mom, but then you feel a heaviness or a sinking in your gut, or a tightness in your neck or throat or jaw. These physical sensations are one of the main ways your subconscious tries to communicate with you. Let's further say that you don't know why you feel this way, and you can't say why you don't want to call your mom. You just find that you know you should call your mom but some part of you is giving you a really bad feeling about it. If you don't make an effort to untangle this mess, you'll probably just not call your mom, meaning whatever subconscious process originated those bad feelings in the first place will continue sitting under the surface and probably recapitulate the same reaction in similar situations.

If you gingerly try to "fit" the feeling with some words, as Gendlin says, the mind will either give you no feedback or it will give you a "yes, that's right" in the form of a further physical shift. This physical shift can be interpreted as the subconscious module acknowledging that its signal has been heard and ceasing to broadcast it.

I really don't think Gendlin is saying that the origin of your emotions about calling your mom is stored in your muscles. I think he's saying that when you have certain thoughts or parts of yourself that you have squashed out of consciousness with consistent suppression, these parts make themselves known through physical sensations, so it feels like it's in your body. And the best way to figure out what those feelings are is to be very attentive to your body, because that's the channel through which you're able to tentatively communicate with that part of yourself.

OR, it may not be that you did anything to suppress the thoughts; it may just be that the mind is structured in such a way that certain parts of the mind have no vocabulary with which to just inject a simple verbal thought into awareness. There's no reason a priori to assume that all parts of the mind have equal access to the phonological loop.

Maybe Gendlin's stuff is easier to swallow if you happen to already have this view of the conscious mind as the tip of the iceberg, with most of your beliefs and habits and thoughts being dominated by the vast but unreflective subconscious. If you get into meditation in any serious way, you can really consistently see that these unarticulated mental constructs are always lurking there, dominating behavior, pushing and pulling. To me, it's not woo at all, it's very concrete and actionable, but I understand that Gendlin's way of wording things may serve as a barrier to entry.

Comment author: Luke_A_Somers 16 January 2017 03:49:18PM 0 points [-]

That isn't the distinction I get between suboptimal and irrational. They're focused on different things.

Irrational to me would mean that the process by which the strategy was chosen was not one that would reliably yield good strategies in varying circumstances.

Suboptimal is just an outcome measurement.

Comment author: ksvanhorn 16 January 2017 03:43:00PM 0 points [-]

I'm reading Gendlin's book Focusing and struggling with it -- it's hard for me to understand why you and Anna think so highly of this book. It's hard to get past all the mystic woo about knowledge "in the body"; Gendlin seems to think that anything not in the conscious mind is somehow stored/processed out there in the muscles and bones. Even taking that as metaphorical -- which Gendlin clearly does not -- I find his description of the process very unclear.

In response to comment by Salemicus on Crisis of Faith
Comment author: Jade 16 January 2017 03:00:52AM *  1 point [-]

You left out the 'magical' part of my question. If magical beings exist(ed), then everything becomes more mysterious. That's partly why we don't pester JK Rowling about what extra-special boy Harry Potter was based on. We don't even suspect comic superheroes like Batman, who has no magic, of having been based on a real-life billionaire. We certainly don't have scholars wasting time looking for evidence of 'the real Batman.' Modern stories of unlikely events are easily taken as imaginings, yet when people bucket a story as 'old/traditional', for some people that bucket includes 'characters must've been real persons', as if humans must've been too stupid to have imagination. https://en.wikipedia.org/wiki/Fakelore

In response to comment by CBHacking on LessWrong 2.0
Comment author: hairyfigment 15 January 2017 07:19:56PM 1 point [-]

Upvoted, but this seems to vary from person to person. You also forgot how italics and lists work here.

Comment author: CBHacking 15 January 2017 02:08:29PM *  2 points [-]

For what it's worth, I got relatively little* out of reading the Sequences solo, in any form (and RAZ is worse than LW in this regard, because the comments were worth something even on really old and inactive threads, and surprisingly many threads were still active when I first joined the site in 2014).

What really did the job for me was the reading group started by another then-Seattleite**. We started as a small group (I forget how many people the first meetings had, but it was a while before we broke 10 and longer before we did it regularly) that simply worked through the core sequences - Map & Territory, then How to Actually Change Your Mind - in order (as determined by posts on the sequences themselves at first, and later by the order of Rationality: AI to Zombies chapters). Each week, we'd read the next 4-6 posts (generally adjusted for length) and then meet for roughly 90 minutes to talk about them in groups of 4-8 (as more people started coming, we began splitting up for the discussions). Then we'd (mostly) all go to dinner together, at which we'd talk about anything - the reading topics, other Rationality-esque things, or anything else a group of smart mostly-20-somethings might chat about - and next week we'd do it again.

If there's such a group near you, go to it! If not, try to get it started. Starting one of these groups is non-trivial. I was already considering the idea before I met the person who actually made it happen (and I met her through OKCupid, not LessWrong or the local rationality/EA community), but I wouldn't have done it anywhere near as well as she did. On the other hand, maybe you have the skills and connections (she did) and just need the encouragement. Or maybe you know somebody else who has what it takes, and need to go encourage them.

  * Reading the Sequences by myself, the concepts were very "slippery"; I might have technically remembered them, but I didn't internalize them. If there was anything I disagreed with or that seemed unrealistic - and this wasn't so very uncommon - it made me discount the whole post to effectively nothing. Even when something seemed totally, brilliantly true, it also felt untested to me, because I hadn't talked about it with anybody. Going to the group fixed all of that. While it's not really what you're asking for, you may find it does the trick.

    ** She has since moved to (of course) the Bay Area. Nonetheless, the group continues (and is roughly now two years running, hitting nearly every Monday evening). We regularly break 20 attendees now, occasionally break 30, and the "get dinner together" follow-up has grown into a regularly-scheduled weekly event in its own right at one of the local rationalist houses.

Comment author: TheAncientGeek 13 January 2017 04:06:06PM 1 point [-]

That's a fairly standard response which has been posed and answered several times in the comments. For instance:

That's not the point: the point is why it is necessary, in the case of experience, and only in the case of experience, to instantiate it in order to fully understand it. Obviously, it is true that a description of a brain state won't put you into that brain state. But that doesn't show that there is nothing unusual about qualia. The problem is that in no other case does it seem necessary to instantiate a brain state in order to understand something.

If another version of Mary were shut up to learn everything about, say, nuclear fusion, the question "would she actually know about nuclear fusion?" could only be answered "yes, of course... didn't you just say she knows everything?" The idea that she would have to instantiate a fusion reaction within her own body does not apply, any more than a description of photosynthesis will make you photosynthesise. We expect that the description of photosynthesis is complete, so that actually being able to photosynthesise would not add anything to our knowledge.

The list of things which the standard Mary's Room intuition doesn't apply to is a long one. There seem to be some edge cases: for instance, would an alternative Mary know everything about heart attacks without having one herself? Well, she would know everything except what a heart attack feels like -- and what it feels like is a quale. The edge cases, like that one, are just cases where an element of knowledge-by-acquaintance is needed for complete knowledge. Even other mental phenomena don't suffer from this peculiarity. Thoughts and memories are straightforwardly expressible in words -- so long as they don't involve qualia.

So: is the response "well, she has never actually instantiated colour vision in her own brain" one that lays to rest the challenge posed by the Knowledge argument, leaving physicalism undisturbed? The fact that these physicalists feel it would be in some way necessary to instantiate colour means they subscribe to the idea that there is something epistemically unique about qualia/experience, even if they resist the idea that qualia are metaphysically unique.

Is the assumption of epistemological uniqueness to be expected given physicalism? Some argue that no matter how much you know about something "from the outside", you quite naturally wouldn't be expected to understand it from the inside. However, if physicalism is taken as the claim that everything ultimately has a possible physical explanation, that implies that everything has a description in 3rd person, objective language -- that everything reduces to the 3rd person and the objective. What that means is that there can be no irreducible subjectivity: whilst brains may be able to generate subjective views, they must be ultimately reducible to objectivity along with everything else. Since Mary knows everything about how brains work, she must know how the trick is pulled off: she must be able to understand how and why and what kind of (apparent) subjectivity is produced by brains. So the Assumption of Epistemological Uniqueness does not cleanly rescue physicalism, for all that it is put forward by physicalists as something that is "just obvious".

Comment author: ColonelZen 13 January 2017 04:28:50AM 0 points [-]

I think I have long since "dissolved the problem quite elegantly" ...

http://antiphilosopher.com/wiki/index.php/Mary,_A_Philosopher%27s_Whore_of_Color_and_Her_Knowledge_Problem

Basically, yes, Mary can "know all there is to know about color..." before being exposed. And yes, she does learn something new when exposed to color.

But basically the knowledge of how her brain reacts to color is information that does not exist in the universe prior to her exposure. She is physically changed and the new information is thus created.

If her knowledge were in essence complete, she might know beforehand EXACTLY how her own brain would change, what new neurons and synapses would develop on exposure to color, how long it would take, etc. But knowing what changes will happen inside your body doesn't make them happen, any more than a diabetic can think away his disease if he happens to be a biochemist with the utmost understanding of the mechanisms.

Information is ALWAYS manifest as physical structure and patterns. The information of Mary's color knowledge could not exist until she had exposure to color. (In a way-out-yonder SF world where scientists could effect the changes to her brain without exposing her to color, then no, only in that case would she NOT learn anything new from exposure to color; in a world where exposure is the only way to create the linkage patterns in her brain - even if she knows what they will be - she does learn something new.)

I've a similar deconstruction of flying vermin.

The "knowledge argument" is dead.

BTW, I've never seen my particular arguments before, but I was never a philosophy student as such. I don't claim that others haven't offered them first, possibly in less colorful terminology, but I'm damned if my arguments don't in fact destroy the two great "knowledge arguments". (Chalmers's zombies were fantasy from day one ... "modal logic" ... which one? ... aside, there was never reason to believe in the material possibility of philosophical zombies; absent material possibility there is nothing to argue about.)

Comment author: pcm 12 January 2017 06:34:45PM 1 point [-]

I have a relevant blog post on models of willpower.

In response to comment by Salemicus on Crisis of Faith
Comment author: CCC 12 January 2017 05:19:42PM *  1 point [-]

Hmmm. To mess around with equations a bit... what can we say about P(Bunyan | stories about Bunyan) and P(!Bunyan | stories about Bunyan), given P(stories about Bunyan | Bunyan) > P(stories about Bunyan | !Bunyan)?

Let's generalise it a bit (and reduce typing). What can we say about P(A|B) and P(!A|B) when P(B|A) > P(B|!A)?

Consider Bayes' Theorem: P(A|B) = P(B|A)*P(A) / P(B). Thus, P(B) = P(B|A)*P(A) / P(A|B).

Similarly, P(!A|B) = P(B|!A)*P(!A) / P(B)

Now, P(!A) = 1 - P(A). So:

P(!A|B) = P(B|!A)*(1 - P(A)) / P(B)

Solve for P(B):

P(B) = P(B|!A)*(1 - P(A)) / P(!A|B)

Since P(B) = P(B|A)*P(A) / P(A|B):

P(B|A)*P(A) / P(A|B) = P(B|!A)*(1 - P(A)) / P(!A|B)

Since P(B|A) > P(B|!A):

P(B|A)*P(A) / P(A|B) > P(B|!A)*P(A) / P(A|B)

Therefore:

P(B|!A)*(1 - P(A)) / P(!A|B) > P(B|!A)*P(A) / P(A|B)

Since probabilities cannot be negative, we can divide both sides by P(B|!A):

(1 - P(A)) / P(!A|B) > P(A) / P(A|B)

Cross-multiplying:

(1 - P(A))*P(A|B) > P(A)*P(!A|B)

...which means that either 1-P(A) > P(A) or P(A|B) > P(!A|B), and quite possibly both; and if either of these two inequalities is false, then the ratio between its two sides must be closer to one than the ratio in the inequality that is true.

To return to the original example: either P(Bunyan | stories about Bunyan) > P(!Bunyan | stories about Bunyan), OR P(!Bunyan) > P(Bunyan).

Also, if P(Bunyan | stories about Bunyan) > P(!Bunyan | stories about Bunyan) is false, then it must be true that P(Bunyan|stories about Bunyan) > P(Bunyan).
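
As a sanity check, here is a quick numerical verification of the above (a sketch in Python; the probabilities are arbitrary, chosen only so that P(B|A) > P(B|!A)):

    # Numerical check of the derivation, with arbitrary probabilities
    # chosen only to satisfy P(B|A) > P(B|!A).

    p_A = 0.1             # prior P(A), e.g. P(Bunyan)
    p_B_given_A = 0.9     # P(B|A), e.g. P(stories | Bunyan)
    p_B_given_notA = 0.3  # P(B|!A); note 0.9 > 0.3

    p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)  # law of total probability
    p_A_given_B = p_B_given_A * p_A / p_B                 # Bayes' theorem
    p_notA_given_B = p_B_given_notA * (1 - p_A) / p_B

    # The derived inequality: (1 - P(A)) * P(A|B) > P(A) * P(!A|B)
    assert (1 - p_A) * p_A_given_B > p_A * p_notA_given_B

    # At least one of the two disjuncts must hold:
    assert (1 - p_A) > p_A or p_A_given_B > p_notA_given_B

    # And the evidence raises the posterior above the prior:
    assert p_A_given_B > p_A
    print(f"P(A|B) = {p_A_given_B:.3f} vs prior P(A) = {p_A:.3f}")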

In response to comment by Jade on Crisis of Faith
Comment author: Salemicus 12 January 2017 11:36:08AM 0 points [-]

Would you say the origins of other religions become more mysterious if there never were whatever magical beings those religions posit?

Yes, of course.

The least mysterious explanation of Paul Bunyan stories is that there really was a Paul Bunyan. And the closer the real Paul Bunyan hews to the Bunyan of the stories, the smaller the mystery. P(stories about Bunyan | Bunyan) > P(stories about Bunyan | !Bunyan).

But just because a story is simple doesn't necessarily make it likely. We can't conclude from the above that P(Bunyan | stories about Bunyan) > P(!Bunyan | stories about Bunyan).
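
A minimal numerical illustration of this point (a Python sketch; the numbers are arbitrary, assuming only that a real Bunyan has a low prior):

    # Even with a likelihood ratio favouring Bunyan, a small prior can keep
    # the posterior small. All numbers here are arbitrary.

    p_bunyan = 0.01                  # prior: a real Paul Bunyan is unlikely
    p_stories_given_bunyan = 0.9     # P(stories | Bunyan)
    p_stories_given_no_bunyan = 0.3  # P(stories | !Bunyan); tall tales arise anyway

    p_stories = (p_stories_given_bunyan * p_bunyan
                 + p_stories_given_no_bunyan * (1 - p_bunyan))
    p_bunyan_given_stories = p_stories_given_bunyan * p_bunyan / p_stories

    print(f"P(Bunyan | stories) = {p_bunyan_given_stories:.3f}")  # about 0.029

The stories raise the probability that Bunyan existed (about 0.029 versus a prior of 0.01), but P(!Bunyan | stories) is still about 0.971.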

Comment author: hairyfigment 11 January 2017 11:43:52PM 0 points [-]

Well, it's not easy.
