Comment author: Psy-Kosh 28 April 2009 03:45:07AM 0 points

Sorry, I misread. I thought that was just a restating of the original concern. Mind rephrasing it? Thanks.

(note, however, that I'm talking about what "ideal rational agents that don't want to be stupid" do. As I indicated in the warning, trying to actually fully and completely translate a human's entire preferences to this is a highly nontrivial task)

Comment author: pangloss 28 April 2009 03:50:47AM 1 point

I am thinking more like this: I am a scaredy-cat about roller coasters, so I prefer the Tea Cups to Big Thunder Mountain Railroad, and I maintain that preference after choosing the Tea Cups (I don't regret my decision). However, had I ridden Big Thunder Mountain Railroad, I would have been able to appreciate that it is awesome, and would have preferred it to the Tea Cups.

Since this case seems pretty possible, if the sorts of lessons you are going to draw only apply to hyper-idealized agents who know all their preferences perfectly and whose preferences are stable over time, that is a good thing to note, since the lessons may not apply to those of us with dynamic preference sets.

Comment author: roland 28 April 2009 03:40:43AM 2 points

I am not a psychologist, but I do know that our current plan (of, for example, thinking about the brainteaser cases) is definitely not the way to develop actual expertise.

So what do the studies you read say about the right way to develop verbal expertise?

Comment author: pangloss 28 April 2009 03:43:07AM 2 points

From what I've read, one needs to train oneself on paradigm cases. So, for example, with wine tasting, you develop your verbal acuity by learning how to describe fairly ordinary wines.

I don't know how to port this strategy over to verbal acuity for rationality.

Comment author: Cyan 28 April 2009 03:36:18AM 1 point

Your example only shows that your preference ordering is:

"go to an Italian restaurant with friends" > "go to a Mexican restaurant with friends" > "ditch my friends and go to an Italian restaurant alone"
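The point that preferences range over total world-states rather than bare cuisines can be sketched in code. The ranking below is just the ordering stated above; the `choose` helper and the "friends insist on Mexican" availability assumption are my own illustrative additions, not anything from the discussion:

```python
# The ordering above, over total world-states rather than bare cuisines.
# Lower index = more preferred.
ranking = [
    "go to an Italian restaurant with friends",
    "go to a Mexican restaurant with friends",
    "ditch my friends and go to an Italian restaurant alone",
]

def choose(options):
    """Pick the most-preferred world-state among those actually attainable."""
    return min(options, key=ranking.index)

# If the friends insist on Mexican, the top-ranked state is off the table:
available = ranking[1:]
print(choose(available))  # -> "go to a Mexican restaurant with friends"
```

On this picture, choosing the Mexican state when the Italian-with-friends state is unattainable is fully consistent with the ordering; no preference reversal is involved.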

Comment author: pangloss 28 April 2009 03:41:13AM 0 points

I agree; however, the definition of preferring A to B that he gave was choosing A over B (and if we don't specify that A and B must be total world-states, then it would turn out that I prefer Mexican to Italian, because I chose Mexican over Italian). Psy-Kosh's comment above explains why that isn't what he meant.

Comment author: Psy-Kosh 28 April 2009 03:36:37AM 0 points

Well, I was talking about total states. I guess that was at least one thing that I wasn't being clear on. But the preferences would basically be "universe in which I choose Mexican and my friends want Mexican" vs "universe in which I choose Italian and my friends want Mexican", etc...

Or did I misunderstand your objection?

Comment author: pangloss 28 April 2009 03:40:01AM 0 points

That takes care of the first concern, but not necessarily the second one.

Comment author: MrHen 28 April 2009 03:28:09AM 0 points

My claim isn't about rationality recognition per se, it is simply this: psychology has shown that verbalizing can screw us up when dealing with a process that isn't normally done verbally. And a lot (if not most) of our inferential processes are not done in this explicitly verbalized manner (verbalized doesn't necessarily mean spoken aloud, but just 'thinking through in words').

My claim is that there are known ways to get good at verbalizing non-verbal processes, and they involve training on paradigmatic cases. It is only after such training that one can start thinking about edge cases and the borderlands without worrying that the process of discussing the cases is corrupting their thinking about the cases.

Before we can advance rationality by discussion, we must first learn to discuss rationality.

Understood. Thanks for the clarification. Going back and rereading the article after these comments made a few more lights click on in my head.

So, where do we start?

Comment author: pangloss 28 April 2009 03:33:34AM 0 points

I guess we find out how to acquire verbal expertise in a given domain, and do so for rationality, reasoning, and inference.

Comment author: pangloss 28 April 2009 03:17:29AM 0 points

That's what it means to prefer something. That if you prefer A over B, you'd give up situation B to gain situation A. You want situation A more than you want situation B.

I don't want this to devolve into an argument about precisely how to talk about preferences, but I think this is a more substantive assumption than you are treating it as. If I prefer going to the Italian restaurant to going to the Mexican restaurant, I might still choose the Mexican restaurant over the Italian restaurant because of the preferences of others.

It seems like you are also glossing over the importance of the possible difference between what I prefer when choosing and what I would have preferred had I chosen differently.

Comment author: MrHen 28 April 2009 02:52:47AM 1 point

Mmm, very good point. Strangely, now that I think about it, this sounds very similar to the concept of the highest principle:

You may try to name the highest principle with names such as “the map that reflects the territory” or “experience of success and failure” or “Bayesian decision theory”. But perhaps you describe incorrectly the nameless virtue. How will you discover your mistake? Not by comparing your description to itself, but by comparing it to that which you did not name.

In the comparison between Rationality Recognition and Face Recognition, what is the Rationality Recognition equivalent of sight?

Comment author: pangloss 28 April 2009 03:06:01AM 4 points

It depends. Sometimes it will be sight or our other senses, sometimes it will be memory, sometimes it will be testimony.

Think about it this way: we take in information all the time and draw conclusions from it. "Sight" isn't playing a key role in face recognition except to provide the data; you have a mental program for matching visual face data to previous visual face data, and that program gets screwed up if you start thinking through a description of the face after you see it.

Similarly, you see a room full of objects and events. You've got one or more "draw conclusions" programs that run on the data you see, and those programs can get screwed up by putting into words things that you don't normally verbalize.

The data on insight puzzles shows that if you do manage to draw the right conclusions, and you try to put into words how you did it, you may get screwed up in the following way: you are confident in explanation A for how you drew the conclusion when, in actuality, the truth is a radically different explanation B.

My claim isn't about rationality recognition per se, it is simply this: psychology has shown that verbalizing can screw us up when dealing with a process that isn't normally done verbally. And a lot (if not most) of our inferential processes are not done in this explicitly verbalized manner (verbalized doesn't necessarily mean spoken aloud, but just 'thinking through in words').

My claim is that there are known ways to get good at verbalizing non-verbal processes, and they involve training on paradigmatic cases. It is only after such training that one can start thinking about edge cases and the borderlands without worrying that the process of discussing the cases is corrupting their thinking about the cases.

Before we can advance rationality by discussion, we must first learn to discuss rationality.

Comment author: MrHen 28 April 2009 01:17:12AM 2 points

(Edit) You can pretty much ignore this comment, but you should read pangloss' responses below. I found them enlightening.

I am not a psychologist, but I do know that our current plan (of, for example, thinking about the brainteaser cases) is definitely not the way to develop actual expertise.

Personally, I think there are two extremes in studying something like this. One is to work from the edge cases toward the general/common cases, and the other is to work from the general/common cases toward the edges. Gradients in the middle certainly apply as well.

My hunch is that our perceptions of the common cases have been painted over by too many other ways of life/thinking. Rationality can have trouble getting its point through the layers. The edge cases make more sense to flesh out because the edge cases are explicitly designed to force specific ways of life/thinking to choose among well-defined options.

Edge cases also have the advantage of Jargon. We can make up our own words to mean something that we were not able to express previously and, by definition, be correct. While this is less amusing than describing it in layman's terms, it certainly gets the problem defined. Working from the edges with new descriptions is a long process and one easy enough to mess up. Pigeonholing is a great risk, and describing large swaths of general/common cases with a new word is dangerous.

General cases have the advantage that they can provide a foundation that will instantaneously apply to related cases. The network can build faster but the definitions may not be as clear-cut. A word used to describe wine can be used to describe faces (e.g. bitter) and people will be able to follow the conversation. The danger this time is generalizing terms and cases so that we are no longer able to distinguish between specific concepts.

All that being said, language is nifty. Learning how to work with it correctly would certainly be useful.

(Note) I am making a fine distinction between pigeonholing and generalizing that may exist only in my head. I see pigeonholing as taking large numbers of varied things and trying to separate them into ill-defined categories that force more variance than is really necessary. I see generalizing as taking large numbers of varied things and slopping them into categories that begin to erase the variances. If someone else knows better terms for this, please let me know.

Comment author: pangloss 28 April 2009 02:25:49AM 2 points

I think the question about which cases to focus on when forming theories is different from the question of which cases to use to train oneself to verbalize one's thoughts without interfering with one's thinking. The latter requires us to train on paradigms, the former may be something we can pursue in either direction.

This is crucial: The thought isn't to presuppose which direction our theorizing should go, but rather to make sure that when we theorize, we aren't tripping ourselves up.

Verbal Overshadowing and The Art of Rationality

pangloss 27 April 2009 11:39PM

To begin, here are some Fun Psychology Facts:  

People who were asked to describe a face after seeing it are worse at recognizing the same face later.

People who are asked to describe a wine after drinking it are worse at recognizing the same wine later.

People who are asked to give reasons for their preferences among a collection of jellies are worse at identifying their own preferences among those jellies.

 

This effect, known as Verbal Overshadowing, occurs primarily when a principally non-verbal process is disrupted by a task that involves verbalization.  The above generalizations (and Verbal Overshadowing effects more generally) do not occur among what we can term "Verbal Experts": individuals who are as good at verbalizing the relevant process as they are at performing it implicitly or automatically.  This seems like it will be very important to keep in mind when cultivating our own Rationality.

Comment author: pangloss 27 April 2009 07:03:12PM 0 points

The Verbal Overshadowing effect, and how to train yourself to be a good explicit reasoner.
