
Comment author: MaryCh 26 February 2017 11:57:03AM 0 points [-]

BTW, this is about a different gland (thymus), but I still thought you might find it interesting; a history of research: https://link.springer.com/article/10.1007/BF01058991

Comment author: lukeprog 25 February 2017 09:44:11PM 0 points [-]

Fixed, thanks.

Comment author: TheAncientGeek 25 February 2017 07:09:45PM *  0 points [-]

Epistemic rationality isn’t about winning?

Demonstrated, context-appropriate epistemic rationality is incredibly valuable and should lead to higher status and -- to the extent that I understand Less-Wrong jargon --“winning.”

Valuable to whom? Value and status aren't universal constants.

You are pretty much saying that the knowledge can sometimes be instrumentally useful. But that does not show that epistemic rationality is about winning.

The standard way to show that instrumental and epistemic rationality are not the same is to put forward a society where almost everyone holds to some delusory belief, such as a belief in Offler the Crocodile god, and awards status in return for devotion. In that circumstance, the instrumental rationalist will profess the false belief, and the epistemic rationalist will stick to the truth.

In a society that rewards the pursuit of knowledge for its own sake (which ours does sometimes), the epistemic rationalist will get rewards, but won't be pursuing knowledge in order to get rewards. If they stop getting the rewards, they will still pursue knowledge; it is a terminal goal for them. That is the sense in which epistemic rationality is not "about" winning and instrumental rationality is.

Epistemic rationality is a tool.

ER is defined in terms of goals. The knowledge gained by it may be instrumentally useful, but that is not the central point.

Comment author: alexsloat 25 February 2017 04:37:36PM 0 points [-]

There's also sometimes an element of "I don't care about that stuff, I want you to deal with this thing over here instead" - e.g., "Don't worry about spelling, I'll clean that up later. What do you think of the plot?". Even if the criticism is correct, irrelevant criticism can crowd out the relevant information you were actually asking for. This can actually make the bucket problem worse in some cases, such as when you spend so long editing spelling that you forget to mention the things they did right.

The best way to split someone else's buckets is often to give explicitly different comments on different parts of the bucket, to encourage a division in their head. You can even do that yourself, though it takes a lot more self-awareness.

In response to comment by RedMan on Crisis of Faith
Comment author: g_pepper 25 February 2017 02:45:54AM *  0 points [-]

So, the path of purposeful self-deception is not the road to higher rationality, no matter how well it happens to work.

Correct

To use the monkey riding on the tiger analogy for human cognition, I wonder which is more effective. The monkey putting the tiger in a pen and swinging through the trees alone...Or the monkey that ties a steak to a stick and rides the tiger.

I suspect the monkey is better off putting the tiger in a pen and swinging through the trees alone - with a steak and a stick it is just a matter of time before the monkey loses control of the situation and becomes a side dish to the steak. Similarly, trying to harness self-deception to lead one to truth/rationality is apt to backfire.

Comment author: Pablo_Stafforini 25 February 2017 01:50:10AM *  0 points [-]

Luke's post, based on this recommendation, reads as follows:

On economics, realitygrill recommends McAfee's Introduction to Economic Analysis over Mankiw's Macroeconomics and Case & Fair's Principles of Macroeconomics

I believe the books realitygrill is referring to are instead Mankiw's Principles of Microeconomics and Case & Fair's Principles of Microeconomics, since McAfee's is a microeconomics (not a macroeconomics) textbook.

Comment author: ragintumbleweed 25 February 2017 12:08:57AM 0 points [-]

Epistemic rationality isn’t about winning?

Demonstrated, context-appropriate epistemic rationality is incredibly valuable and should lead to higher status and -- to the extent that I understand Less-Wrong jargon --“winning.”

Think about markets: If you have accurate and non-consensus opinions about the values of assets or asset classes, you should be able to acquire great wealth. In that vein, there are plenty of rationalists who apply epistemic rationality to market opinions and do very well for themselves. Think Charlie Munger, Warren Buffett, Bill Gates, Peter Thiel, or Jeff Bezos. Winning!

If you know better than most who will win NBA games, you can make money betting on the games. E.g., Haralabos Voulgaris. Winning!

Know what health trends, diet trends, and exercise trends improve your chances for a longer life? Winning!

If you have an accurate and well-honed understanding of what pleases the crowd at Less Wrong, and you can articulate those points well, you’ll get Karma points and higher status in the community. Winning!

Economic markets, betting markets, health, and certain status-competitions are all contexts where epistemic rationality is potentially valuable.

Occasionally, however, epistemic rationality can be demonstrated in ways that are context-inappropriate – and thus lead to lower status. Not winning!

For example, correcting someone's grammar the first time you meet him or her at a cocktail party? Not winning!

Demonstrate that your boss is dead wrong in front of a group of peers in a way that embarrasses her? Not winning!

Constantly argue about LW-type topics with people who don’t like to argue? Not winning!

Epistemic rationality is a tool. It gives you power to do things you couldn't do otherwise. But status games require a deft understanding of when it is and isn't appropriate to demonstrate to others the greater coherence of one's beliefs with reality (which itself strikes me as a form of epistemic rationality applied to social awareness). Those who get it right are the winners. Those who do not are the losers.

In response to comment by g_pepper on Crisis of Faith
Comment author: RedMan 24 February 2017 10:10:42PM 1 point [-]

Thank you! So, the path of purposeful self-deception is not the road to higher rationality, no matter how well it happens to work.

To use the monkey riding on the tiger analogy for human cognition, I wonder which is more effective. The monkey putting the tiger in a pen and swinging through the trees alone...Or the monkey that ties a steak to a stick and rides the tiger.

Comment author: alex_zag_al 24 February 2017 04:32:40PM 0 points [-]

There's an important category of choices: the ones where any good choice is "acting as if" something is true.

That is, there are two possible worlds. And there's one choice best if you knew you were in world 1, and another choice best if you knew you were in world 2. And, in addition, under any probabilistic mixture of the two worlds, one of those two choices is still optimal.

The hotel example falls into this category. So, one of the important reasons to recognize this category is to avoid a half-speed response to uncertainty.
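
A minimal sketch of that structure, with made-up payoffs (not from the original post): two "act as if" actions, one best in each world, plus a hedging action that is never optimal under any probabilistic mixture of the two worlds.

    # Toy illustration: under every probability p of "world 1", the best action is one
    # of the two "act as if" options; the half-speed hedge is never optimal.
    # Payoffs are hypothetical, chosen only to show the structure.

    # action -> (payoff if world 1, payoff if world 2)
    payoffs = {
        "act as if world 1 (keep driving, full speed)": (10.0, 0.0),
        "act as if world 2 (turn back, full speed)":    (0.0, 10.0),
        "hedge (drive at half speed)":                  (4.0, 4.0),
    }

    for i in range(11):
        p = i / 10  # probability assigned to world 1
        expected = {a: p * w1 + (1 - p) * w2 for a, (w1, w2) in payoffs.items()}
        best = max(expected, key=expected.get)
        print(f"P(world 1) = {p:.1f}  ->  best action: {best}")

    # The hedge never wins because max(10p, 10(1-p)) >= 5 > 4 for every p.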

Many choices don't fall into this category. You can tell because in many decision-making problems, gathering more information is a good decision, and gathering information is never acting as if you knew one of the possibilities for certain.

Arguably in your example, information-seeking actually was the best solution: pull over and take out a map or use a GPS.

It seems like another important category of choices is those where the best option is trying the world 1 choice for a specified amount of time and then trying the world 2 choice. Perhaps these are the choices where the best source of information is observing whether something works? Reminds me of two-armed bandit problems, where acting-as-if and investigating manifest in the same kind of choice (pulling a lever).

Comment author: hcutter 24 February 2017 01:28:59PM *  1 point [-]

Thank you for clarifying. I wasn't aware of those, and to be honest, as a new reader I found it a bit difficult to find information about them via Less Wrong. Meetups are publicized in the sidebar, but nothing about these dojos -- not even under the About section's extensive list of links. That surprises me, if the creation of these dojos was a goal of Eliezer's from his very first blog post here.

If appealing to those who already care about rationality, followed by word-of-mouth advertising, is the approach the dojos have decided to take, rather than a more general appeal to the populace as part of raising the sanity waterline, then I concede the point.

In response to Semantic Stopsigns
Comment author: hcutter 24 February 2017 12:59:27PM 0 points [-]

In reading these Sequences, I notice that it is sometimes difficult to tell when you are building on an older body of work and when you are unaware of it and are independently deriving an equivalent concept. Semantic stopsigns are a particularly good example of this. Are you aware of the existence of another term for this: the thought-terminating cliché? (Sometimes thought-stopping cliché.) There is some fascinating literature on the subject of their use in cults, which may be directly applicable to understanding Dark Side techniques. For example: Singer, Margaret Thaler, Maurice K. Temerlin, and Michael D. Langone. "Psychotherapy cults." Cultic Studies Journal 7.2 (1990): 101-125.

Comment author: TheAncientGeek 24 February 2017 12:35:05PM *  1 point [-]

If an author actually being X has no consequences apart from the professor believing that the author is "X", all consequences accrue to quoted beliefs and we have no reason to believe the unquoted form is meaningful or important.

No consequences meaning no consequences, or no consequences meaning no empirical testability? Consider replacing the vague and subjective predicate "Post Utopian" with the even more subjective "good". If a book is (believed to be) good or bad, that clearly has consequences, such as one's willingness to read it.

There are two consistent courses here: you can expand the notion of truth to include judgements of value and quality backed by handwavy non-empirical arguments; or you can keep a narrow, positivist notion of truth and abandon the use of handwaviness yourself. And you are not doing the latter, because your arguments for MWI (to take just one example) are non-empirical handwaviness.

Comment author: TheAncientGeek 24 February 2017 12:25:27PM *  1 point [-]

It's actually called the Many Worlds INTERPRETATION, and what "interpretation" means in this case is specifically that there is no experimental test to distinguish it from other interpretations. Theory = thing you can test, interpretation = thing you can't test. Indeed, EY's arguments for MWI are not empirical and are therefore his own version of Post Utopianism.

Comment author: peteolcott 23 February 2017 07:47:04PM *  0 points [-]

The above comment is the closest that I have ever found to the following Predicate Logic formalization:

“This sentence is not true.”

∃x ∈ (finite strings from the alphabet of predicate logic) ∃T ∈ Predicates ∃hasProperty ∈ Predicates | x = hasProperty(x, ~T(x))

Finite string x asserts that it has the property of the negation of the Boolean value result of evaluating predicate T with itself as T’s only argument.

The above is based on Tarski's formal correctness of True: For all x, True(x) if and only if φ(x)
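
For reference, Tarski's two conditions on a definition of truth are commonly rendered roughly as follows (a standard textbook statement, independent of the formalization above):

    % Formal correctness: an explicit, non-circular definition of the form
    \forall x \; \bigl( \mathrm{True}(x) \leftrightarrow \varphi(x) \bigr)
    % where \varphi(x) does not itself contain the predicate True.
    %
    % Material adequacy: the definition must entail every instance of the T-schema
    \mathrm{True}(\ulcorner S \urcorner) \leftrightarrow S
    % where \ulcorner S \urcorner is a name of the sentence S.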

Copyright Pete Olcott 2016, 2017

http://LiarParadox.org/

Comment author: ChristianKl 23 February 2017 07:08:51PM 0 points [-]

And how do you sell rationality training to them?

That's why you don't sell them a rationality workshop but a workshop for rationally thinking about AGI risks.

Comment author: alex_zag_al 23 February 2017 07:39:32AM 0 points [-]

the soft sciences have to deal with situations which never exactly repeat

This is also true of evolutionary biology--I think it's not widely recognized that evolutionary biology is like the soft sciences in this way.

Comment author: alex_zag_al 23 February 2017 06:09:55AM 0 points [-]

iii. Emphasize all rationality use cases evenly. Cause all people to be evenly targeted by CFAR workshops.

We can’t do this one either; we are too small to pursue all opportunities without horrible dilution and failure to capitalize on the most useful opportunities.

This surprised me, since I think of rationality as the general principles of truth-finding.

What have you found about the degree to which rationality instruction needs to be tailored to a use-case?

Comment author: alex_zag_al 23 February 2017 06:09:11AM *  0 points [-]

Several of these had the form “I, too, think that AI safety is incredibly important — and that is why I think CFAR should remain cause-neutral, so it can bring in more varied participants who might be made wary by an explicit focus on AI.”

I don't think that AI safety is important, which I guess makes me one of the "more varied participants made wary by an explicit focus on AI." Happy you're being explicit about your goals but I don't like them.

Comment author: alex_zag_al 23 February 2017 06:07:20AM 1 point [-]

Wow, I've read the story, but I didn't quite realize the irony of it being a textbook (not a curriculum, a textbook, right?) about judgment and decision making.

Comment author: ChristianKl 22 February 2017 08:05:37PM 1 point [-]

are you here referring to the Less Wrong meetup groups as "rationality dojos", or something else which has been created to fill this void since 2009?

I'm not referring to regular meetups.

CFAR started holding a weekly event they called a dojo. Given that blueprint, other cities have started similar groups.

In Berlin we have a weekly dojo. Australia seems to have a monthly dojo in Melbourne and in Sydney. Ohio also has a dojo: http://rationality-dojo.com/

I thought I had been very careful to draw a clear distinction between what such clubs are about and actual rationality, while still contending that the perception of the average person (the non-rationalist) is that they are the same

Okay. I should have been clearer. It doesn't matter what the average person thinks. A group doesn't need the average person to become a member in order to be successful. There just need to be enough people who care enough about the idea to become members. If a group provides value to its members and the members tell other people about it, it can grow.
