All of Lonnen's Comments + Replies

Lonnen00

Morality is Temporary, Wisdom is Permanent.

-- Hunter S. Thompson

Lonnen10

No, the article specifically warns against relying on a single trait, and it gives examples of how one trait in isolation can mean very different things. It takes a cluster of traits to establish something useful.

If you want to pursue getting the data, though, you could try to derive something like a table of probabilities from a self-scored 'Big Five' test, like the one in the appendix of this review paper. The same review paper also points to the papers and data sets that gave rise to five-factor personality analysis.

edit: fixed the link.
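As a rough sketch of what that derivation could look like, assuming you have self-scored responses in a CSV with one row per respondent; the file name, column names, and the 'enjoys_public_speaking' trait are hypothetical placeholders, not anything taken from the review paper:

```python
# Minimal sketch: estimate an empirical P(observed trait | Big Five factor band)
# table from self-scored data. All names below are illustrative assumptions.
import pandas as pd

df = pd.read_csv("big_five_scores.csv")  # hypothetical: one row per respondent

# Bin a factor score into rough low / medium / high bands.
df["extraversion_band"] = pd.qcut(df["extraversion"], q=3,
                                  labels=["low", "med", "high"])

# Conditional frequency of the trait given the band; each row sums to 1.
table = pd.crosstab(df["extraversion_band"],
                    df["enjoys_public_speaking"],
                    normalize="index")
print(table)
```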

Lonnen20

You might find something like this in market research. Certainly the sort of analysis that predicts which advertisements are relevant to a user on sites like Facebook would be similar: it tries to answer a question like "Which advertisement will the user be most receptive to, given this cluster of traits?", where the traits are your likes / dislikes / music / etc.

This isn't exactly what you're asking for, but I doubt there is a P(personality type | trait) table anywhere. You're talking about a high-dimensional space, and a single trait does not have much predictive power in isolation.
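As a toy illustration of that point (all numbers made up): a single weakly informative trait barely shifts the odds for a given personality type, but several such traits combined, if treated as roughly independent, shift them quite a lot.

```python
# Toy example with invented numbers, not real trait data.
prior = 0.5                                 # P(type A) before observing anything
likelihood_ratios = [1.5, 1.4, 1.6, 1.3]    # P(trait_i | A) / P(trait_i | not A)

def posterior(prior, ratios):
    odds = prior / (1 - prior)
    for r in ratios:
        odds *= r
    return odds / (1 + odds)

print(posterior(prior, likelihood_ratios[:1]))  # one trait:  ~0.60
print(posterior(prior, likelihood_ratios))      # all four:   ~0.81
```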

2sketerpot
If I had enough data points of people's personality traits, I could stick them into something like Weka, look for empirical clusters (using something like k-means or hierarchical clustering), then train a number of classifiers to sort individual people into these clusters given a limited number of personality trait observations.

There are all sorts of forms these classifiers could take. You could do the same sort of thing wedrifid is thinking of: assume that traits are independent and use the p(personality type | trait) values that have the most predictive power to classify a person. This would be a naive Bayes classifier, of the sort that's fantastically effective at spam filtering. If you wanted something simpler -- perhaps something you could print out as a handy pocket guide to classifying people -- you could use a decision tree. That's like a precomputed strategy for playing 20 questions, where you only ask questions whose answers pay rent. It's approximate, but it can work surprisingly well. A related method is to build several randomized decision trees and have them vote.

Of course, once you build a classifier, that's a hypothesis about some structure in reality. You need to test that hypothesis before you rush forth and start putting your trust in it. For that, you can hold some of the data in reserve and see how a classifier built from the rest of the data performs on it. If you break your data up into n groups and take turns letting each group be the testing data set, this can tell you whether your general method for generating classifiers is working for this data set.

Of course this is all terribly ad hoc, but the Bayesian ideal approach is hard to compute here, and often these hacks work surprisingly well.
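For what it's worth, here is a rough sketch of the pipeline sketerpot describes, using scikit-learn in place of Weka. The data is randomly generated stand-in data, and the number of clusters, tree depth, and fold count are arbitrary illustrative choices, not recommendations.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))              # stand-in: 500 people x 10 trait scores

# Step 1: look for empirical clusters ("personality types") in trait space.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
types = kmeans.labels_

# Step 2: classifiers that sort a person into a cluster from their traits.
nb = GaussianNB()                            # naive Bayes for continuous traits
tree = DecisionTreeClassifier(max_depth=4)   # a printable "pocket guide" of questions

# Step 3: hold data in reserve (n-fold cross-validation) to test the hypothesis.
print("naive Bayes accuracy:", cross_val_score(nb, X, types, cv=5).mean())
print("decision tree accuracy:", cross_val_score(tree, X, types, cv=5).mean())
```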
0wedrifid
And yet, this is exactly what personality tests must rely on and the sort of thing that we are doing when we follow the advice in the post. Access to even the raw data used when creating the 'big five' would be useful.
Lonnen30

I'm currently reading Thomas Schelling's The Strategy of Conflict, and it sounds like what you're looking for here. From this Google Books link to the table of contents you can sample some chapters.

Lonnen20

That agrees with my intuitions. I had a series of ideas developing around the notion that exploiting biases is sometimes necessary, and then I found:

Eliezer on Informers and Persuaders

I finally note, with regret, that in a world containing Persuaders, it may make sense for a second-order Informer to be deliberately eloquent if the issue has already been obscured by an eloquent Persuader - just exactly as elegant as the previous Persuader, no more, no less. It's a pity that this wonderful excuse exists, but in the real world, well...

It woul... (read more)

7wedrifid
I'm not sure where Eliezer got the "just exactly as elegant as the previous Persuader, no more, no less" part from. That seems completely arbitrary. As though the universe somehow decrees that optimal informing strategies must be 'fair'.
Lonnen30

Lately I've been wondering whether a rational agent can be expected to use the dark arts when dealing with irrational agents. For example: if a rational AI (not necessarily an FAI) had to convince a human to cooperate with it, would it use rhetoric to leverage the human's biases against it? Would an FAI?

4Dagon
Calling them "dark arts" is itself a tactic for framing that only affects the less-rational parts of our judgement. A purely rational agent will (the word "should" isn't necessary here) of course use rhetoric, outright lies, and other manipulations to get irrational agents to behave in ways that further it's goals. The question gets difficult when there are no rational agents involved. Humans, for instance, even those who want to be rational most of the time, are very bad at judging when they're wrong. For these irrational agents, it is good general advice not to lie or mislead anyone, at least if you have any significant uncertainty on the relative correctness of your positions on the given topic. Put another way, persistent disagreement indicates mutual contempt for each others' rationality. If the disagreement is resolvable, you don't need the dark arts. If you're considering the dark arts, it's purely out of contempt.
3cousin_it
Dark arts, huh? Some time ago I put forward the following scenario: Bob wants to kill a kitten. The FAI wants to save the kitten because it's a good thing according to our CEV. So the FAI threatens Bob with 50 years of torture unless Bob lets the kitten go. The FAI has two distinct reasons why threatening Bob is okay: a) Bob will comply and there will be no need to torture him, and b) the FAI is lying anyway. Expected utility reasoning says the FAI is doing the Right Thing. But do we want that?

(Yes, this is yet another riff on consequentialism, deontologism and lying. Should FAIs follow deontological rules? For that matter, should humans?)
2wedrifid
Yes. Yes. (When we say 'rational agent' or 'rational AI' we are usually referring to "instrumental rationality". To a rational agent, words are simply symbols to use to manipulate the environment; speaking the truth, and even believing the truth, are only loosely related concepts.) Almost certainly, but this may depend somewhat on who exactly it is 'friendly' to and what that person's preferences happen to be.
Lonnen60

Videotaping may not be the preferred way to go about it, but there is something to be said for reflection. While you are unlikely to get better without practice, merely sinking time into conversation won't necessarily help, and may even harm you. Without analyzing your attempts, even if it's only a brief list of what went well and what didn't, you may be practicing and learning bad habits. A hundred ungraded math problems won't make you better at math, and a hundred uncoached squats may injure you.

Take a few moments after conversations to assess at least what went well ... (read more)