All of SirBacon's Comments + Replies

"Society begins to appear much less unreasonable when one realizes its true function. It is there to help everyone to keep their minds off reality." Celia Green, The Human Evasion.

http://deoxy.org/evasion/4.htm

SirBacon

As for Newton's exact mental processes, they are lost to history, and we are not going to get very specific theories about them. Newton can only give us an outside view of the circumstances of discovery. His most important finds were made alone in his private home and outside of academic institutions. Eliezer left school early himself. Perhaps a common thread?

Teachers select strongly for IQ among students when they have power to choose their students. This might be a more powerful aggregator of high-IQ individuals than transmission from parents to children...

If academic lineages are due to an ability that teachers have to identify talent, this ability is extremely common and predicts achievement FAR better than IQ tests can. I am struck by the degree to which the financial world fails to identify talent with anything like similar reliability.

Also, the above theory is inconsistent with the extreme intellectual accomplishments of East Asians, and, previously, Jews, within European culture, and with the failure of those same groups to produce similar intellectual accomplishments prior to such cultural admixture.

http://www.nytimes.com/2010/02/19/opinion/19brooks.html

Link is to David Brooks, an elite columnist for an elite paper, chiding "elites". He gets paid for this stuff, and is presumably read in earnest by millions of Americans.

SirBacon

Irony is a means of simultaneously signalling and countersignalling.

By ironically obeying correct social forms, it is possible to receive status from both the conventional culture and the counterculture. The conventional culture does not want to admit that it is the butt of irony, and the counterculture likes people who score points off the conventional culture. Is anyone aware of research into irony as a signalling strategy?

[anonymous]
There are some interesting, or at least amusing, higher-order phenomena that follow from that.

1. The possibility of claiming to like something ironically, while actually liking it in earnest, in order to avoid rejection (or at least mockery) from some group or subculture that one identifies with. I don't hang out with a lot of people who are really countercultural or hipstery, but I'd probably still do that if I found myself honestly liking, for instance, a Miley Cyrus song. (Just an example. So far, I see little risk of that happening. :P)

2. The hypothetical class of people I've named "metahipsters". Ordinary hipsters pride themselves on having liked something before it became (perceived as) mainstream or after it had faded from the mainstream, or because it remains (perceived as) non-mainstream, etc. The metahipster prides him/herself on having liked things unpopular, pre-/post-popular, or unacceptable among hipsters. (I'm trying to bring top hats back. If I succeed, there'll probably be hipsters bragging about how they liked top hats before they got all mainstream, while I'll get to brag about how I liked top hats before they got all popular with hipsters. I have no idea what this signals. Feel free to psychoanalyze me.)

3. Related to both of the previous two items: I actually have some friends who follow a typical hipster aesthetic, but claim to be doing that only ironically. I still find it a bit hard to wrap my mind around that.

Saying X = "I'm dropping out of school to join a doomsday cult" in a blatantly ironic way gives you the benefits of implying 'to unschooled eyes, it would appear that X - don't be unschooled' along with 'I'm sophisticated enough to be aware that certain aspects of my decision look as if X' and 'I'm confident enough about my decision to make light of this', before finally concluding 'but of course, it's not actually true that X'.

Irony signals a lot.

Eliezer Yudkowsky
Have you ever done this? Example?

You are right to be confused. The idea that the simulators would necessarily have human-like motives can only be justified on anthropocentric grounds - whatever is out there, it must be like us.

Anything capable of running us as a simulation might exist in any arbitrarily strange physical environment that allowed enough processing power for the job. There is no basis for the assumption that simulators would have humanly comprehensible motives or a similar physical environment.

The simulation problem requires that we think about our entire perceived universe as a single point in possible-universe-space, and it is not possible to extrapolate from this one point.

GDP per capita is a better predictor of fertility than access to contraceptives.

The rejection is only as flimsy as contraceptive programs are effective on the margin where increased funding might make a difference. They may not be very effective at all while additional children are still profitable.

"Socioeconomic development is considered the main cause of a decline over time in the benefits of having children and a rise in their costs."

"http://www.jstor.org/pss/20058399"

And when one goeth through fire for his teaching--what doth that prove? Verily, it is more when one's teaching cometh out of one's own burning!

-Friedrich Nietzsche, Thus Spoke Zarathustra

I read this last year. It contained many of the important insights from ev. psych, especially in the area of mating strategies. It was far too wordy and long to justify its informational content. Robert Wright snagged most of the ideas from scientists, but he is a journalist, so he tends to mangle concepts and play up spurious "angles" of the "story." This was the most tedious thing I've read on the subject. Pinker is somewhat better.

There are many small daily problems I can't imagine addressing with math, and most people just cruise on intuition most of the time. Where we set the threshold for using math concepts seems to vary a lot with cognitive ability and our willingness to break out the graphing calculator when it might be of use.

It might be useful to lay down some psychological triggers so that we are reminded to be rational in situations where we too often operate intuitively. Conversely, a systematic account of things that are too trivial to reason through explicitly, and best left to our unconscious, would be helpful. I'm not sure either sort of rule would generalize beyond the individual mind.

SoullessAutomaton
This is only helpful if the subconscious reaction is reasonably good. Finding a way to improve the heuristics applied by the subconscious mind would be ideal for this type of thing.

If losing is a soul-crushing defeat to be avoided at all costs and winning is The Delicious Cake, not the icing, there is a much stronger incentive to win.

See the OB article on Lost Purposes: there's a distinct chance that a process optimized for fact-finding or interesting-fact gathering won't be optimized for winning. Sometimes our map needs to reflect the territory just enough for us to find the treasure.

In the real world, where games have consequences, there is specialization, insofar as it is possible, between exploration and winning. Defense R&D is a...

MrHen
Soul-crushing is a bummer but is technically optional. If you can train yourself away from soul-crushing, it may make losing better for you. The same goes for winning: if winning holds no intrinsic value other than "Haha! I won!" I would argue that the incentives are purely emotional. This is not necessarily a bad thing, but I like to get more out of my contests than feeling good about winning or feeling bad about losing. Personally, I get more emotional satisfaction from learning something new or cool.

Here is a link to Lost Purposes for those who need one.

I agree with your point. Most of what I talked about is only terribly relevant for contests such as board games where the rewards for winning are easily measured. When losing carries a significant cost, such as a generation of children not learning science, playing to learn makes no sense and it is time to play to win.

Agreed. The big gap in my article, which I left out for brevity, is examples of physical contests. If someone stabs me with a sword, I die. Playing to learn would make no sense in a sword fight, and losing is most decidedly not good.

(Edit) After thinking a little more, I would relate "Lost Purposes" to this article with the following question: "What is the purpose of winning?" Don't win just because you are supposed to win. Win because it has value.

"With that caveat, this summary and plenty of the posts contained within are damn useful!"

I resoundingly agree.

That said, Eliezer is attempting to leverage the sentiments we now call "altruistic" into efficient other-optimizing. What if all that people are really after is warm fuzzies? Mightn't they then shrink from the prospect of optimally helping others?

Hobbes gives us several possible reasons for altruism, none of which seem to be conducive to effective helping:

"When the transferring of right is not mutual, but one of the parties tr... (read more)

adamisom
"Mightn't" we shrink from optimal helping? "Might" charity be usually an imbalance of utilons? Yes, we might, it might. These are important considerations--I don't mean to denigrate clear thinking. But to lie content with hypothetical reasons why something wouldn't work, due to a common hidden laziness of most humans but which we can convince ourselves is due to more noble and reasonable reasons, is to completely miss the most crucial point of this entire Sequence: actually doing something, testing. I think it's safe to say that the natural inclination of most humans isn't initiating large projects with high but uncertain reward. It's to "just get by", a fact which I must thank you, good sir, for illustrating.... it was intentional, right?

"...then there's the idea that rationalists should be able to (a) solve group coordination problems, (b) care a lot about other people and (c) win..."

Why should rationalists necessarily care a lot about other people? If we are to avoid circular altruism and the nefarious effects of other-optimizing, the best amount of caring might be less than "a lot."

Additionally, caring about other people in the sense of seeking emotional gratification primarily in tribe-like social rituals may be truly inimical to dedicating one's life to theoretical...

ryleah
Sorry to answer a 5-year-old post, but apparently people read these things. You asked "Why should rationalists necessarily care a lot about other people," but all the post said was that they should be able to.
astray
I don't think that b is necessarily an immediate entailment of rationality, but a condition that can be met simultaneously with a and c. The post presents a situation where c is satisficed only through a and b. (It does not take much finagling to suppose that a lonesome mountain-man existence in a world ruled by barbarians is inferior in fuzziness and utilons relative to the expectation of the world where a, b, and c are held to be true.)
cabalamat
They shouldn't, particularly. End goals are not a part of rationality; rationality exists to achieve them. However, many end goals can be more easily achieved by getting help from others. If your end goals are like this, it's rational for you to solve group coordination problems and care about other people.
[anonymous]
Good point, Bacon. I've been wondering where the implicit assumption that rational agents have an altruistic agenda came from. The assumption seems to permeate a rather large number of posts. When Omega offers to save lives, why do I care? To be perfectly honest, my own utility function suggests that those extra billions are a liability to my interests.

When I realise that my altruistic notions are in conflict with my instinctive drive for status and influence, why do I "need to move in the direction of joining groups more easily, even in the face of annoyances and apparent unresponsiveness"? If anything, it seems somewhat more rational to acknowledge the drive for status and self-interest as the key component and satisfy those criteria more effectively.

This isn't to say I don't have an altruistic agenda that I pursue. It is just that I don't see that agenda itself as 'rational' at all. It is somewhere between merely arbitrary and 'slightly irrational'.

With that caveat, this summary and plenty of the posts contained within are damn useful!

It might be prudent to avoid associating rationality with particular people or social institutions.

There's always the risk that particular instances of rationality will result in disaster, or that Bad Guys will be painstakingly rational, and in the early stages we wouldn't want to suffer the fate of religions, which often take reputation hits when their followers do nasty things.

Rationality could be advertised as a morally neutral instrumental value, i.e., Better Living Through Rationality.

On the other hand, we could sell rationality as a tool for atheists, drug policy activists, and stockbrokers, and publicly associate with their successes.

I would venture that emotivism can be a way of setting up short-run incentives for the achievement of sub-goals. If we think "Bayesian insights are good," we can derive some psychological satisfaction from things which, in themselves, do not have direct personal consequences.

By attaching "goodness" to things too far outside our feedback loops, like "ending hunger," we get things like counterproductive aid spending. By attaching "goodness" too strongly to subgoals close to individual feedback loops, like "publishing papers," we get a flood of inconsequential academic articles at the expense of general knowledge.

SoullessAutomaton
This seems related to the tendency to gradually reify instrumental values as terminal values: e.g., "reading posts on Less Wrong helps me find better ways to accomplish my goals, therefore it is good" becomes "reading posts on Less Wrong is good, therefore it is a valid end goal in itself". Is that what you're getting at?