Comment author: Sirducer 21 July 2009 07:00:21PM *  26 points [-]

As far as I can tell most people who dislike PUA techniques don't really understand them.

Most people here don't understand them because they have this model in their mind: if you treat an attractive woman nicely, try to respect her desires and needs, and perhaps compliment her, with the internal attitude that women should be "respected", then she will respond in kind by respecting your desire to have sex with her.

They never test this model by going to a bar and trying to use it to achieve the goal of sex with an attractive woman. I know this, because if they had tested it even 3 nights in a row, they would have discarded it as "broken". I would love to go out into the field with 10 guys from LessWrong and alicorn to coach them, and watch them get rejected time after time by attractive women.

I would write a top level post explaining the techniques, the PUA model of the generic male-female interaction, the predictions it makes, and how you can go out and collect experimental evidence to confirm or disconfirm those predictions, but I think that I would not get promoted (no matter how good the post was from a rational perspective, measured in bits of information it conveys about the world) and not get much karma, because people here just don't want to hear that truth.

Comment author: HA2 21 July 2009 08:30:51PM 0 points [-]

I suspect that efficiency is not necessarily the reason that many dislike PUA techniques. Personally, I don't particularly doubt that there are patterns for how women react to men (and vice versa), and that these can be used to have more sex. On the other hand, spiking people's drinks or getting them drunk can also be used for the same purpose, and that's commonly known as rape.

Sure, there are ways to hack into people's minds to get them to do what you want. The fact that they exist doesn't make them ethically acceptable.

Now, I don't know whether PUA methods are ethically acceptable or not - but the fact that "the attitude that your partner should be respected" is seen as a negative thing seems to point pretty clearly toward "no".

In response to Without models
Comment author: RichardKennaway 06 May 2009 07:26:59AM *  3 points [-]

A collective reply to comments so far.

All the posted answers to the exercises so far are correct.

1. Warming the thermostat with a candle will depress the room temperature while leaving the thermostat temperature constant.

2. Pressing the brake when the cruise control does not disengage will leave the car speed constant while the accelerator pedal goes down -- until something breaks.

3. The effect of raising a piece-rate worker's hourly rate will depend on what the worker wants (and not on what the employer intended to happen).

4. The doctor's target will be met while patients will still have to wait just as long, they just won't be able to book more than four weeks ahead. (This is an actual example from the British National Health Service.)
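Answer 1 is easy to check numerically. Here's a hedged sketch in Python -- all the constants (heater power, leak rate, the candle's effect on the sensor) are invented for illustration, not taken from the post:

```python
# Toy simulation of answer 1: a candle warms the thermostat's sensor,
# so the controller "sees" a temperature higher than the room's actual one.
# All constants below (leak rate, heater power, candle offset) are made up.

def simulate(candle_offset, steps=5000, dt=0.1):
    room = 15.0                   # actual room temperature, degrees C
    heater_on = False
    T_LOW, T_HIGH = 19.0, 21.0    # thermostat switching thresholds
    for _ in range(steps):
        sensed = room + candle_offset          # candle heats the sensor only
        if sensed < T_LOW:
            heater_on = True
        elif sensed > T_HIGH:
            heater_on = False
        heating = 0.5 if heater_on else 0.0
        room += dt * (heating - 0.05 * (room - 10.0))  # heater vs. leakage to 10 C outside
    return room

no_candle = simulate(0.0)
with_candle = simulate(5.0)
print(no_candle, with_candle)
```

Either way the controller holds the *sensed* temperature inside its band; with the candle offsetting the sensor by 5 degrees, the actual room settles about 5 degrees lower, just as the answer states.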

Does no-one want to tackle 5 or 6? Anyone who knows the derivative of exp(a t) knows enough to do number 6.

Thank you, kpreid, for linking to the very article that I knew, even while writing the original post, I would be invoking in response to the comments. Anyone who has not come across it before, please read it, and then I will talk about the concept that (it turns out) we are all talking about, when we talk about models, except for the curious fit that comes over some of us when contemplating the simple thermostat.

i77: As you say, the Smith predictor contains a model, and the subsystem C does not. Likewise the MRAC. In the PID case, the engineer has a model. But don't slide from that to attributing a model to the PID system. There isn't one there.

Vladimir_Nesov, pretty much all the concepts listed in the first three sections of that article are special cases of what is here meant by the word. As for the rest, I think we can all agree that we are not talking about a professional clothes horse or a village in Gmina Pacyna. I don't believe I have committed any of these offences (another article I'd recommend to anyone who has only just now had the good fortune to encounter it), but let those call foul who see any.

So, what are we talking about, when we talk about models? What I am talking about -- I'll come to the "we" part -- I said in a comment of mine on my first post:

What is a model? A model is a piece of mathematics in which certain quantities correspond to certain properties of the thing modelled, and certain mathematical relationships between these correspond to certain physical relationships.

and more briefly in the current post:

signals ... that are designed to relate to each other in the same way as do corresponding properties of the world outside

This is exactly what is meant by the word in model-based control theory. I linked to one paper where models in precisely this sense appear, and I am sure Google Books or Amazon will show the first chapters of any number of books on the subject, all using the word in exactly the same way. There is a definite thing here, and that is the thing I am talking of when I talk of a model.

This is not merely a term of art from within some branch of engineering, in which no-one outside it need be interested. Overcoming Bias has an excellent feature, a Google search box specialised to OB. When I search for "model", I get 523 hits. The first five (as I write -- I daresay the ranking may change from time to time) all use it in the above sense, some with less mathematical content but still with the essential feature of one thing being similar in structure to another, especially for the purpose of predicting how that other thing will behave. Here they are:

"So rather than your model for cognitive bias being an alternative model to self-deception..." (The model here is an extended analogy of the brain to a political bureaucracy.)

"Data-based model checking is a powerful tool for overcoming bias" (The writer is talking about statistical models, i.e. "a set of mathematical equations which describe the behavior of an object of study in terms of random variables and their associated probability distributions.")

"the model predicts much lower turnout than actually occurs" (The model is "the Rational Choice Model of Voting Participation, which is that people will vote if p times B > C".)

"I don't think student reports are a very good model for this kind of cognitive bias." (I.e. a system that behaves enough like another system to provide insight about that other.)

The 5th is a duplicate of the 2nd.

Those are enough examples to quote, but I inspected the rest of the first ten and sampled a few other hits at random (nos. 314, 159, 265, and 358, in fact), and except for a mention of a "role model", which could be arguable but not in any useful way, found no other senses in use.

When I googlesearch LW, excluding my own articles and the comments on them, the first two hits are to this, and this. These are also using the word in the same sense. The models are not as mathematical as they would have to be for engineering use, but they are otherwise of the same form: stuff here (the model) which is similar in structure to stuff there (the thing modelled), such that the model can be used to predict properties of the modelled.

In other words, what I am talking about, when I talk about models, is exactly what we on OB and LW are all talking about, when we talk about models, every time we talk about models. There is a definite thing here that has an easily understood shape in thingspace, we all call it a model, and to a sufficiently good approximation we call nothing else a model.

Until, strangely, we contemplate some very simple devices that reliably produce certain results, yet contain nothing drawn from that region of thingspace. Suddenly, instead of saying, "well well, no models here, fancy that", the definition of "model" is immediately changed to mean nothing more than mere entanglement, most explicitly by SilasBarta:

"A controller has a model (explicit or implicit) of it's environment iff there is mutual information between the controller and the environment."

Or the model in the designer's head is pointed to, and some sort of contagion invoked to attribute it to the thing he designed. No, this is butter spread over too much bread. That is not what is called a model anywhere on OB or LW except in these comment threads; it is not what is called a model, period.
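For what it's worth, the quoted criterion is trivially easy to satisfy, which is the point being made here. A toy sketch (my own construction; the weather model and all constants are invented for illustration): a bare bang-bang rule, containing no model of anything, still acquires mutual information with its environment, because cold weather raises the heater's duty cycle:

```python
import math
import random
from collections import Counter

def mutual_information(pairs):
    """Estimate I(X;Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    # sum over observed (x, y): p(x,y) * log2( p(x,y) / (p(x) p(y)) )
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in joint.items())

random.seed(0)
room, heater_on, outside = 20.0, False, 0.0
samples = []
for _ in range(50000):
    if random.random() < 0.001:      # outside weather occasionally flips
        outside = 10.0 - outside
    if room < 19.0:                  # the bang-bang rule -- no model anywhere
        heater_on = True
    elif room > 21.0:
        heater_on = False
    heating = 1.5 if heater_on else 0.0
    room += 0.1 * (heating - 0.05 * (room - outside))
    samples.append((heater_on, outside))

mi = mutual_information(samples)
print(mi)
```

The estimate comes out clearly positive -- colder weather means the heater is on a larger fraction of the time -- so by the quoted criterion the rule "has a model", even though it is nothing but two threshold comparisons.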

You can consider the curvature of a bimetallic strip a model of the temperature if you like. It's a trivial model with one variable and no other structure, but there it is. However, a thermometer and a thermostat both have that model of the temperature, but only the thermostat controls it. You can also consider the thermostat's reference input to be a model of the position of the control dial, and the signal to the relay a model of the state of the relay, and the relay state a model of the heater state, but none of these trivial models explain the thermostat's functioning. What does explain the thermostat's functioning is the relation "turn on if below T1, turn off if above T2". That relation is not a model of anything. It is what the thermostat does; it does not map to anything else.
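The relation in question is short enough to write out in full -- a minimal sketch (Python; the framing is mine, the rule is the one stated above):

```python
class Thermostat:
    """The entire thermostat: turn on if below t_low, turn off if above t_high.
    The hysteresis band [t_low, t_high] prevents rapid relay chatter.
    Note there is no representation of the room in here -- just a rule."""

    def __init__(self, t_low, t_high):
        self.t_low, self.t_high = t_low, t_high
        self.heater_on = False

    def step(self, sensed_temperature):
        if sensed_temperature < self.t_low:
            self.heater_on = True
        elif sensed_temperature > self.t_high:
            self.heater_on = False
        # between the thresholds: keep doing whatever we were doing
        return self.heater_on

t = Thermostat(19.0, 21.0)
print([t.step(temp) for temp in (18.0, 20.0, 22.0, 20.0)])
# → [True, True, False, False]
```

The two middle readings show the hysteresis: at 20.0 the relay stays in whatever state the last threshold crossing left it in. Nothing in the class maps to anything outside it.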

Exercise 7. How can you discover someone's goals? Assume you either cannot ask them, or would not trust their answers.

Comment author: HA2 06 May 2009 08:34:19PM 0 points [-]

"Exercise 7. How can you discover someone's goals? Assume you either cannot ask them, or would not trust their answers."

I'd guess that the best way is to observe what they actually do and figure out what goal they might be working towards from that.

That has the unfortunate consequence of automatically assuming that they're effective at reaching their goal, though. So you can't really use a goal that you've figured out in this way to estimate how good an agent is at getting to its goals.

And it has the unfortunate side effect of ascribing 'goals' to systems that are way too simple for that to be meaningful. You might as well say that the universe has a "goal" of maximizing its entropy. I'm not sure that it's meaningful to ascribe a "goal" to a thermostat - while it's a convenient way of describing what it does ("it wants to keep the temperature constant, that's all you need to know about it"), in a community of people who talk about AI I think it would require a bit more mental machinery before it could be said to have "goals".

In response to Without models
Comment author: HA2 06 May 2009 08:23:10PM 1 point [-]

"Now, I am not explaining control systems merely to explain control systems. The relevance to rationality is that they funnel reality into a narrow path in configuration space by entirely arational means, and thus constitute a proof by example that this is possible."

I don't think you needed control systems to show this. Gravity itself is as much of a 'control system' - it minimizes the potential energy of the system! Heck, if you're doing that, lots of laws of physics fit that definition - they narrow down the set of possible realities...

" This must raise the question, how much of the neural functioning of a living organism, human or lesser, operates by similar means? "

So, I'm still not sure what you mean by 'similar means'.

We know the broad overview of how brains work - sensory neurons get triggered, they trigger other neurons, and through some web of complex interactions motor neurons eventually get triggered to give outputs. The stuff in the middle is hard; some of it can be described as "memory" (patterns that somehow represent past inputs), and some can be represented by various other abstractions. Control systems are probably good ways of interpreting a lot of combinations of neurons, and some have been brought up here. It seems unlikely that they would capture all of them - but if you stretch the analogy enough, perhaps they can.

"And how much of the functioning of an artificial organism must be designed to use these means? "

Must? I'd guess absolutely none. The way you have described them, control systems are not basic - for 'future perception' to truly determine current actions would break causality. So it's certainly possible to describe/build an artificial organism without using control systems, though given how useful you're describing them to be, it seems like that would be pretty inconvenient, pointlessly making an already impossible problem harder.

In response to Return of the Survey
Comment author: HA2 06 May 2009 07:21:56PM 0 points [-]

The aliens question was interesting to think about.

I realized that if I put anything other than zero for 'probability of aliens existing within our galaxy', then it seems like it would make little sense to put anything other than 100 for 'observable universe', given how many galaxies there are! Unless our galaxy is somehow special...

Comment author: Annoyance 03 April 2009 06:36:02PM -2 points [-]

I have no interest in joining a church, period. It doesn't matter to me whether that church spouts theistic nonsense or humanistic nonsense. I'm interested in what groups teach and what they practice, not in their rituals or atmosphere.

Certainly a rationalist group could avail themselves of techniques that make people feel good about the group. But people who join the group for the sake of those feelings, or who wouldn't join if their feelings weren't carefully massaged, aren't rationalists. Bringing those people into the fold can only distract us from what's important and dilute the message. Syncretism requires sacrificing the essential nature of at least one of the two incompatible things being combined.

Comment author: HA2 03 April 2009 08:00:16PM 2 points [-]

And if you're interested in what groups do rather than how they do it, you're in a vast minority. Good for you - you don't have to join a church, even a rationalist one! Nobody's making you!

But people have emotions. It's not 'rational' to ignore this. As Eliezer says, and clarifies in the next post, rationalism [is/is correlated with/causes] winning. If the religious get to have a nice community and we have to do without, then we lose.

Yes, I would like to join a community of people very much like a church, but without all the religious nonsense. I'm pretty sure I'm not alone in this.

In response to comment by ciphergoth on Where are we?
Comment author: SoullessAutomaton 02 April 2009 10:51:33PM *  0 points [-]

Post in this thread if you live in the midwestern USA or nearby areas of Canada, ideally roughly within a day's drive of Chicago.

EDIT: For anyone in this area, Penguicon may be a good location for a meetup. It's a mixed sci-fi/open-source/general-geekery convention in the Detroit area, and just might possibly have at least one guest that LW readers would be interested to meet. I probably won't be there this year, though.

Comment author: HA2 03 April 2009 07:34:11PM 2 points [-]

Champaign, IL

Comment author: Mario 15 March 2009 05:58:55PM 3 points [-]

I get the feeling that the real problem here is repeatability. It's one thing to design a test for rationality; it's another to design a test that could not be gamed once the particulars are known. Since it probably isn't possible to control the flow of information in that way, the next-best option might be to design a test so that the testing criteria would not be understood except by those who pass.

I'm thinking of a test I heard about years ago. The teacher passes out the test, stressing to the students to read the instructions before beginning. The instructions specify that the answer to every question is C. The actual questions on the test don't matter, of course, but it's a great test of reading comprehension and the ability to follow instructions. Plus, the test is completely repeatable. All of the test questions could leak out, and still only those who deserve to pass would do so. If you are willing to assume that people who pass would not be willing to cheat (unlikely in this test, possible in a rationality test), then you would have an ungameable test.

A rationality test in this model might be one where an impossible task is given, and the correct response would be to not play.

Comment author: HA2 15 March 2009 09:01:00PM 3 points [-]

I don't think that it's reasonable to expect that secret criteria would stay secret once such a test were actually used for anything. Sure, it could be kept a secret if there were a dozen people taking the test, of whom the four who passed would get admitted to an exclusive club.

If there were ten thousand people taking the test, a thousand of whom passed, I'd bet there'd be at least one who accidentally leaks it on the internet, from where it would immediately become public knowledge. (And at least a dozen who would willingly give up the answer if offered money for it, as would happen if there were anything at stake in this test.) It might work if such a test were obscure enough or not widely used, but not if it were used for anything that mattered to the test-takers and was open to many.