All of SforSingularity's Comments + Replies

I was on Robert Wright's side towards the end of this debate when he claimed that there was a higher optimization process that created natural selection for a purpose.

The purpose of natural selection, fine-tuning of physical constants in our universe, and of countless other detailed coincidences (1) was to create me. (Or, for the readers of this comment, to create you)

The optimization process that optimized all these things is called anthropics. Its principle of operation is absurdly simple: you can't find yourself in a part of the universe that can't cr... (read more)

2zero_call
I don't think this is much of an insight, to be honest. The "anthropic" interpretation is a statement that the universe requires self-consistency. Which is, let's say, not surprising. My feeling is that this is a statement about the English language. This is not a statement about the universe.
0timtyler
There's also the possibility of "the adapted universe" idea - as laid out by Lee Smolin in "The Life of the Cosmos" and James Gardner in "Biocosm" and "Intelligent-Universe". Those ideas may face some Occam pruning - but they seem reasonably sensible. The laws of the universe show signs of being a complex adaptive system - and anthropic selection is not the only possible kind of selection effect that could be responsible for that. There could fairly easily be more to it than anthropic selection. Then there's Simulism... I go into the various possibilities in my "Viable Intelligent Design Hypotheses" essay: http://originoflife.net/intelligent_design/ Robert Wright has produced a broadly similar analysis elsewhere.
3RobinZ
That's not what "purpose" means.

1) if something very bad is about to happen to you, what's your credence that you're in a rescue sim and have nothing to fear?

I'd give that some credence, though note that we're talking about subjective anticipation, which is a piece of humanly-compelling nonsense.

5daedalus2u
For me, essentially zero; that is, I would act (or attempt to act) as if I had zero credence that I was in a rescue sim.

However, if you approach them with a serious deal where some bias identified in the lab would lead them to accept unfavorable terms with real consequences, they won't trust their unreliable judgments, and instead they'll ask for third-party advice and see what the normal and usual way to handle such a situation is. If no such guidance is available, they'll fall back on the status quo heuristic. People hate to admit their intellectual limitations explicitly, but they're good at recognizing them instinctively before they get themselves into trouble by relying on their faulty reasoning too much.

... (read more)
7CronoDAS
"The market can stay irrational longer than you can stay solvent." - John Maynard Keynes

People hate to admit their intellectual limitations explicitly, but they're good at recognizing them instinctively before they get themselves into trouble by relying on their faulty reasoning too much.

Yeah... this is what Bryan Caplan says in The Myth of the Rational Voter

There is a point I am trying to make with this: the human race is a collective where the individual parts pretend to care about the whole, but actually don't care, and we (mostly) do this the insidious way, i.e. using lots of biased thinking. In fact most people even have themselves fooled, and this is an illusion that they're not keen on being disabused of.

The results... well, we'll see.

Look, maybe it does sound kooky, but people who really genuinely cared might at least invest more time in finding out how good its pedigree was. On the other hand, people who just wanted an excuse to ignore it would say "it's kooky, I'm going to ignore it".

But one could look at other cases, for example direct donation of money to the future (Robin has done this).

Or the relative lack of attention to more scientifically respectable existential risks, or even existential risks in general. (Human extinction risk, etc).

As you grow up, you start to see that the world is full of waste, injustice and bad incentives. You try frantically to tell people about this, and it always seems to go badly for you.

Then you grow up a bit more, get a bit wise, and realize that the mother-of-all-bad-incentives, the worst injustice, and the greatest meta-cause of waste ... is that people who point out such problems get punished, including (especially) those who point out this problem. If you are wise, you then become an initiate of the secret conspiracy of the successful.

Discuss.

5Mass_Driver
Telling people frantically about problems that are not on a very short list of "approved emergencies" like fire, angry mobs, and snakes is a good way to get people to ignore you, or, failing that, to dislike you. It is only very recently (in evolutionary time) that ordinary people are likely to find important solutions to important social problems in a context where those solutions have a realistic chance of being implemented. In the past, (a) people were relatively uneducated, (b) society was relatively simpler, and (c) arbitrary power was held and wielded relatively more openly. Thus, in the past, anyone who was talking frantically about social reform was either hopelessly naive, hopelessly insane, or hopelessly self-promoting. There's a reason we're hardwired to instinctively discount that kind of talk.
1Rain
You should present the easily implemented, obviously better solution at the same time as the problem. If the solution isn't easy to implement by the person you're talking to, then cost/benefit analysis may be in favor of the status quo or you might be talking to the wrong person. If the solution isn't obviously better, then it won't be very convincing as a solution or you might not have considered all opinions on the problem. And if there is no solution, then why complain?
0cousin_it
Is that true? 'Cause if it's true, I'd like to join.

Think about it in evolutionary terms. Roughly speaking, taking the action of attempting to kill someone is risky. An attractive female body is pretty much a guaranteed win for the genes concerned, so it's pointless taking risks. [Note: I just made this up, it might be wrong, but definitely look for an evo-psych explanation]

This explanation also accounts for the lower violent crime rate amongst women, since a female body is, from a gene's point of view, a low-risk strategy, whereas violence is a risky business: you might win, but then again, you might die.

It would also predict, other things equal, lower crime rates amongst physically attractive men.

I had heard about the case casually on the news a few months ago. It was obvious to me that Amanda Knox was innocent. My probability estimate of guilt was around 1%. This makes me one of the few people in reasonably good agreement with Eli's conclusion.

I know almost nothing of the facts of the case.

I only saw a photo of Amanda Knox's face. Girls with cute smiles like that don't brutally murder people. I was horrified to see that among 300 posts on Less Wrong, only two mentioned this, and it was to urge people to ignore the photos. Are they all too PC or so... (read more)

4Jack
One of the comments about the photos was mine, I believe. I tried to avoid the photos of both Knox and Kercher (though I failed spectacularly). The fact that Knox is pretty and has a cute smile is worth updating on, perhaps. But for me it would be better to be told those facts rather than figure them out by staring at pictures. Millions of years of evolution have made attractive girls my age more bias inducing than just about anything else in my life. For the lonely I imagine the effect is considerably more dramatic. Surely we don't think the men who wrote Knox letters telling her how beautiful they thought she was are seeing things clearly and objectively. And everyone is programmed to have their protection instincts kick in on the sight of a young, baby-like face (this is why the facial expression of fear resembles the face of a baby).
2komponisto
As far as I am aware, all we know about EY's number is that it is bounded from above by 15%. Since the average estimate was 35% (and that was before this post, after reading which some people said they updated downward, and no one said they updated upward), it's fair to say a lot of people were in reasonably good agreement with EY's conclusion. I don't know whether SfS's comment is to be taken as attempted satire or not, but I did wonder if a sort of "Spock bias" might result in reluctance to update on the sort of evidence presented here or here. As it turned out, that didn't seem to be so much of an issue here on LW (for all that character assassination of Amanda played a role in the larger public's perception). By far the biggest obstacle to arriving at probability estimates close to mine was that old chestnut: trusting in the fundamental sanity of one's fellow humans. (The jury must have known something we didn't, and surely Judge Micheli knew what he was doing...)
8Eliezer Yudkowsky
[citation needed]
6Alicorn
Via what mechanism do wholesome appearance and apple-cheekedness correlate with a disinclination to commit murder? For example, does a murderous disposition drain the blood from one's face? Or does having a cute smile prevent people from treating the person in such a way as to engender a murderous disposition from without? I wouldn't be exactly astonished to find a real, strong correlation between looking creepy and being dangerous. But I'd like to know how it works.

Yes, but you can manipulate whether the world getting saved had anything to do with you, and you can influence what kind of world you survive into.

If you make a low-probability, high-reward bet and really commit to donating the money to an X-risks organization, you may find yourself winning that bet more often than you would probabilistically expect.

In general, QI means that you care about the nature of your survival, but not whether you survive.

the singularity institute's budget grows much faster than linearly with cash. ... sunk all its income into triple-rollover lottery tickets

I had the same idea of buying very risky investments. Intuitively, it seems that world-saving probability is superlinear in cash. But I think that the intuition is probably incorrect, though I'll have to rethink now that someone else has had it.

Another advantage of buying triple rollover tickets is that if you adhere to quantum immortality plus the belief that uFAI reliably kills the world, then you'll win the lottery in all the worlds that you care about.

2wedrifid
If you had such an attitude then the lottery would be irrelevant. You don't care what the 'world-saving probability' is, so you don't need to manipulate it.

I think that this is a great idea. I often find myself ending a debate with someone important and rational without the sense that our disagreement has been made explicit, and without a good reason for why we still disagree.

I suspect that if we imposed a norm on LW that said: every time two people disagree, they have to write down, at the end, why they disagree, we would do better.

6NancyLebovitz
Imposing a norm would add a lot to the effort involved in conversation. Every time you thought about engaging, you'd know you'd risk having to figure out a conclusion. This might or might not be a net win for signal to noise. Sometimes it takes quite a while to figure out what the actual issues are when new ideas are being explored. Instead of a norm requiring explicit conclusions, I recommend giving significant credit when they're achieved.
wedrifid100

I suspect that if we imposed a norm on LW that said: every time two people disagree, they have to write down, at the end, why they disagree, we would do better.

Unfortunately that is usually 'I said it all already and they just don't get it. They think all this crazy stuff instead.'

Just letting things go allows both to save face. This can increase the quality of discussion because it reduces the need to advocate strongly so you are the clear winner once both sides make their closing statements.

we disagree about what reply we would hear if we asked a friendly AI how to talk and think about morality in order to maximize human welfare as construed in most traditional utilitarian senses.

Surely you should both have large error bars around the answer to that question in the form of fairly wide probability distributions over the set of possible answers. If you're both well-calibrated rationalists those distributions should overlap a lot. Perhaps you should go talk to Greene? I vote for a bloggingheads.

-1wedrifid
Wouldn't that be 'advocate', 'propose' or 'suggest'?
1Eliezer Yudkowsky
Asked Greene, he was busy. Yes, it's possible that Greene is correct about what humanity ought to do at this point, but I think I know a bit more about his arguments than he does about mine...

people should do different things.

Whose version of "should" are you using in that sentence? If you're using the EY version of "should" then it is not possible for you and Greene to think people should do different things unless you and Greene anticipate different experimental results...

... since the EY version of "should" is (correct me if I am wrong) a long list of specific constraints and valuators that together define one specific utility function U_humanmoralityaccordingtoEY. You can't disagree with Greene over what the concrete result of maximizing U_humanmoralityaccordingtoEY is unless one of you is factually wrong.

3Eliezer Yudkowsky
Oh well in that case, we disagree about what reply we would hear if we asked a friendly AI how to talk and think about morality in order to maximize human welfare as construed in most traditional utilitarian senses. This is phrased as a different observable, but it represents more of a disagreement about impossible possible worlds than possible worlds - we disagree about statements with truth conditions of the type of mathematical truth, i.e. which conclusions are implied by which premises. Though we may also have some degree of empirical disagreement about what sort of talk and thought leads to which personal-hedonic results and which interpersonal-political results. (It's a good and clever question, though!)

Correct. I'm a moral cognitivist;

I think you're just using different words to say the same thing that Greene is saying; you in particular use "should" and "morally right" in a nonstandard way - but I don't really care about the particular way you formulate the correct position, just as I wouldn't care if you used the variable "x" where Greene used "y" in an integral.

You do agree that you and Greene are actually saying the same thing, yes?

3Eliezer Yudkowsky
I don't think we anticipate different experimental results. We do, however, seem to think that people should do different things.

Alicorn, I hereby award you 10 points. These are redeemable after the singularity for kudos, catgirls and other cool stuff.

3Jack
The word "catgirl" leaves me vaguely nauseous. I think this has something to do with the uncanny valley. Apologies for the tangent.

For example, having a goal of not going outside its box.

It would be nice if you could tell an AI not to affect anything outside its box.

10 points will be awarded to the first person who spots why "don't affect anything outside your box" is problematic.

0AngryParsley
There's a difference between "don't affect anything outside your box" and "don't go outside your box." My point is that we don't necessarily have to make FAI before anyone makes a self-improving AI. There are goal systems that, while not reflecting human values and goals, would still prevent an AI from destroying humanity.
4Alicorn
Such an AI wouldn't be able to interact with us, even verbally.

Great meetup; conversation was had about the probability of AI risk. Initially I thought that the probability of AI disaster was close to 5%, but speaking to Anna Salamon convinced me that it was more like 60%.

Also some discussion about what strategies to follow for AI friendliness.

1AngryParsley
I was also interested in the discussion on AI risk reduction strategies. Although SIAI espouses friendly AI, there hasn't been much thought about risk mitigation for possible unfriendly AIs. One example is the AI box. Although it is certainly not 100% effective, it's better than nothing (assuming it doesn't encourage people to run more UFAIs). Another would be to program an unfriendly AI with goals that would cause it to behave in a manner such that it does not destroy the world. For example, having a goal of not going outside its box. While the problem of friendly AI is hard enough to make people give up, I also think the problem of controlling unfriendly AI is hard enough to make some of the pro-FAI people do the same.

I'm traveling to the west coast especially for this. Hoping to see you all there.

ungrounded beliefs can be adopted voluntarily to an extent.

I cannot do this, and I don't understand anyone who can. If you consciously say "OK, it would be really nice to believe X, now I am going to try really hard to start believing it despite the evidence against it", then you already disbelieve X.

1DanArmak
I already disbelieve X, true, but I can change that. Of course it doesn't happen in a moment :-) Yes, you can't create that feeling of rational knowledge about X from nothing. But if you can retreat from rationality - to where most people live their lives - and if you repeat X often enough, and you have no strongly emotional reason not to believe X, and your family and peers and role models all profess X, and X behaves like a good in-group distinguishing mark - then I think you have a good chance of coming to believe X. The kind of belief associated with faith and sports team fandom. It's a little like the recent thread where someone, I forget who, described an (edit: hypothetical) religious guy who when drunk confessed that he didn't really believe in god and was only acting religious for the social benefits. Then people argued that no "really" religious person would honestly say that, and other people argued that even if he said that what does it mean if he honestly denies it whenever he's sober? In the end I subscribe to the "PR consciousness" theory that says consciousness functions to create and project a self-image that we want others to believe in. We consciously believe many things about ourselves that are completely at odds with how we actually behave and the goals we actually seek. So it would be surprising if we couldn't invoke these mechanisms in at least some circumstances.

Since we can imagine a continuous sequence of ever-better-Roombas, the notion of "has beliefs and values" seems to be a continuous one, rather than a discrete yes/no issue.

Does that have implications for self-awareness and consciousness?

Yes, I think so. One prominent hypothesis is that the reason we evolved consciousness is that there has to be some way for us to take an overview of the process - ourselves, our goals, and the environment - and of the way in which we think our effort is producing achievement of those goals. We need this so that we can run the whole "am I failing to achieve my goals?" check. Why this results in "experience" is not something I am going to attempt to explain in this post.

As I said,

With the superRoomba, the pressure that the superRoomba applies to the environment doesn't vary as much with the kind of trick you play on it; it will eventually work out what changes you have made, and adapt its strategy so that you end up with a clean floor.

This criterion seems to separate an "inanimate" object like a hydrogen atom or a pebble bouncing around the world from a superRoomba.

0SilasBarta
Okay, so the criterion is the extent to which the mechanism screens off environment disturbances from the final result. You used this criterion interchangeably with the issue of whether: Does that have implications for self-awareness and consciousness?

See heavily edited comment above, good point.

Clearly these are two different things; the real question you are asking is in what relevant way are they different, right?

First of all, the Roomba does not "recognize" a wall as a reason to stop going forward. It gets some input from its front sensor, and then it turns to the right.

So what is the relevant difference between the Roomba that "gets some input from its front sensor, and then it turns to the right", and the superRoomba that gets evidence from its wheels that it is cleaning the room, but entertains the hypothesis that maybe someone ha... (read more)

1SilasBarta
Uh oh, are we going to have to go over the debate about what a model is again?
0DanArmak
In your description there's indeed a big difference. But I'm pretty sure Alicorn hadn't intended such a superRoomba. As I understood her comment, she imagined a betterRoomba with, say, an extra sensor measuring force applied to its wheels. When it's in the air, it gets input from the sensor saying 'no force', and the betterRoomba stops trying to move. This doesn't imply beliefs & desires.

If, however, you programmed the Roomba not to interpret the input it gets from being in midair as an example of being in a room it should clean

then you would be building a beliefs/desires distinction into it.

0DanArmak
Why? How is this different from the Roomba recognizing a wall as a reason to stop going forward?

The difference between the Roomba spinning and you working for nothing is that if you told the Roomba that it was just spinning its wheels, it wouldn't react. It has no concept of "I am failing to achieve my goals". You, on the other hand, would investigate; prod your environment to check if it was actually as you thought, and eventually you would update your beliefs and change your behaviors.
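To make the contrast concrete, here is a minimal sketch (the sensor name, the expected-dirt threshold, and both function names are illustrative assumptions of mine, not anything from the thread) of a fixed-policy Roomba next to a superRoomba that runs an "am I failing to achieve my goals?" check:

```python
# Illustrative sketch: a fixed-policy Roomba vs. a "superRoomba" that
# checks whether its actions are actually achieving its goal.
# All names and numbers here are assumptions made for the example.

def roomba_step(dirt_cleared_this_step):
    # Fixed policy: keep "cleaning" regardless of the outcome.
    return "keep_cleaning"

def super_roomba_step(dirt_cleared_this_step, expected_dirt_per_step=1):
    # Goal-failure check: if observed progress falls short of what the
    # current model predicts, suspect the model is wrong and investigate
    # instead of blindly continuing.
    if dirt_cleared_this_step < expected_dirt_per_step:
        return "investigate_environment"  # e.g. "has someone lifted me up?"
    return "keep_cleaning"

# Wheels spinning in midair: no dirt is actually being cleared.
print(roomba_step(0))        # -> keep_cleaning
print(super_roomba_step(0))  # -> investigate_environment
```

The fixed-policy version keeps "cleaning" even when nothing is being cleaned, while the goal-checking version treats the mismatch between predicted and observed progress as a cue to revise its picture of the situation.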

0SilasBarta
By the way, it seems like this exchange is re-treading my criticism of the concept of could/should/would agent: Since everything, even pebbles, has a workable decomposition into coulds and shoulds, when are they "really" separable? What isn't a CSA?
0DanArmak
(Edited & corrected) Here's a third example. Imagine an AI whose only supergoal is to gather information about something. It explicitly encodes this information, and everything else it knows, as a Bayesian network of beliefs. Its utility ultimately derives entirely from creating new (correct) beliefs. This AI's values and beliefs don't seem very separate to me. Every belief can be mapped to the value of having that belief. Values can be mapped to the belief(s) from whose creation or updating they derive. Every change in belief corresponds to a change in the AI's current utility, and vice versa. Given a subroutine fully implementing the AI's belief subsystem, the value system would be relatively simple, and vice versa. However, this doesn't imply the AI is in any sense simple or incapable of adaptation. Nor should it imply (though I'm no AI expert) that the AI is not a 'mind' or is not conscious. Similarly, while it's true that the Roomba doesn't have a belief/value separation, that's not related to the fact that it's a simple and stupid 'mind'.
2Alicorn
Roombas do not speak English. If, however, you programmed the Roomba not to interpret the input it gets from being in midair as an example of being in a room it should clean, then its behavior would change.

Would you claim the dog has no belief/value distinction?

Actually, I think I would. I think that pretty much all nonhuman animals also don't really have the belief/value distinction.

I think that having a belief/values distinction requires being at least as sophisticated as a human. There are cases where a human sets a particular goal and then does things that are unpleasant in the short term (like working hard and not wasting all day commenting on blogs) in order to obtain a long-term valuable thing.

0DanArmak
In that case, why exactly do you think humans do have such a distinction? It's not enough to feel introspectively that the two are separate - we have lots of intuitive, introspective, objectively wrong feelings and perceptions. (Isn't there another bunch of comments dealing with this? I'll go look...) How do you define the relevant 'sophistication'? The ways in which one mind is "better" or smarter than another don't have a common ordering. There are ways in which human minds are less "sophisticated" than other minds - for instance, software programs are much better than me at memory, data organization and calculations.
4timtyler
Dogs value food, warmth and sex. They believe it is night outside. Much the same as humans, IOW.

An agent using UDT doesn't necessarily have a beliefs/values separation,

I am behind on your recent work on UDT; this fact comes as a shock to me. Can you provide a link to a post of yours/provide an example here making clear that UDT doesn't necessarily have a beliefs/values separation? Thanks.

3Wei Dai
Suppose I offer you three boxes and ask you to choose one. The first two are transparent, free, and contain an apple and an orange, respectively. The third is opaque, costs a penny, and contains either an apple or an orange, depending on a coin flip I made. Under expected utility maximization, there is no reason for you to choose the third box, regardless of your probability function and utility function. Under UDT1, you can choose the third box, by preferring <apple, orange> to <apple, apple> and <orange, orange> as the outcomes of world programs P1 and P2. In that case, you can't be said to have a belief about whether the real world is P1 or P2.
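A quick numerical sketch of the expected-utility half of this example (only the boxes and the penny cost come from the comment; the random search over utility and probability assignments is my own illustration): the opaque box's expected utility is a convex combination of the other two minus a positive cost, so no probability/utility assignment ever makes it the best choice.

```python
import random

# Wei Dai's three-box setup, expected-utility side only. The utility
# ranges, cost range, and random search are illustrative assumptions.

def best_box(u_apple, u_orange, p_apple_in_box3, cost):
    """Which box does a straightforward expected-utility maximizer pick?"""
    eu = {
        "apple_box": u_apple,
        "orange_box": u_orange,
        "opaque_box": p_apple_in_box3 * u_apple
                      + (1 - p_apple_in_box3) * u_orange
                      - cost,
    }
    return max(eu, key=eu.get)

random.seed(0)
for _ in range(100_000):
    u_a, u_o = random.uniform(-10, 10), random.uniform(-10, 10)
    p, cost = random.random(), random.uniform(0.01, 1.0)
    assert best_box(u_a, u_o, p, cost) != "opaque_box"
print("No probability/utility assignment makes the opaque box optimal.")
```

Preferring the opaque box therefore has to be expressed, as in the comment, as a preference over the pair of outcomes across world programs P1 and P2, not as a probability-weighted average within a single "real" world.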

One possible response here: We could consider simple optimizers like amoeba or Roomba vacuum cleaners as falling into the category: "mind without a clear belief/values distinction"; they definitely do a lot of signal processing and feature extraction and control theory, but they don't really have values. The Roomba would happily sit with wheels lifted off the ground thinking that it was cleaning a nonexistent room.

1timtyler
The purpose of a Roomba is to clean rooms. Clean rooms are what it behaves as though it "values" - whereas its "beliefs" would refer to things like whether it has just banged into a wall. There seems to be little problem in modelling the Roomba as an expected utility maximiser - though it is a rather trivial one.
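For what it's worth, the "rather trivial" expected-utility reading can be written out in a few lines (the action set, the sensor, and the utility numbers are my own illustrative assumptions): the whole Roomba becomes a one-step maximizer over a hand-coded expected-cleanliness table.

```python
# Modelling a Roomba as a (trivial) one-step expected-utility maximizer.
# The action set, sensor, and utility numbers are illustrative assumptions.

ACTIONS = ["forward", "turn_right"]

def expected_cleanliness(action, bumper_pressed):
    """'Belief' = the bumper reading; 'value' = expected clean floor."""
    if action == "forward":
        return -1.0 if bumper_pressed else 1.0  # driving into a wall cleans nothing
    return 0.5 if bumper_pressed else 0.1       # turning clears the obstacle

def policy(bumper_pressed):
    return max(ACTIONS, key=lambda a: expected_cleanliness(a, bumper_pressed))

print(policy(bumper_pressed=False))  # -> forward
print(policy(bumper_pressed=True))   # -> turn_right
```

Because the "utility" here is only a relabelling of a fixed sensor-to-action table, the model is exactly as trivial as suggested - and nothing in it distinguishes real cleaning from spinning in midair.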
1DanArmak
This happens because the Roomba can only handle a limited range of circumstances correctly - and this is true for any mind. It doesn't indicate anything about the Roomba's beliefs or belief/value separation. For instance, animals are great reproduction maximizers. A sterilized dog will keep trying to mate. Presumably the dog is thinking it's reproducing (Edit: not consciously thinking, but that's the intended goal of the adaptation it's executing), but really it's just spinning its metaphorical wheels uselessly. How is the dog different from the Roomba? Would you claim the dog has no belief/value distinction?
2Matt_Simpson
Isn't this just a case of the values the Roomba was designed to maximize being different from the values it actually maximizes? Consider the following: i.e. Roombas are program executers, not cleanliness maximizers. I suppose the counter is that humans don't have a clear belief/values distinction.

Is it possible that the dichotomy between beliefs and values is just an accidental byproduct of our evolution, perhaps a consequence of the specific environment that we’re adapted to, instead of a common feature of all rational minds?

In the normal usage, "mind" implies the existence of a distinction between beliefs and values. In the LW/OB usage, it implies that the mind is connected to some actuators and sensors which connect to an environment and is actually doing some optimization toward those values. Certainly "rational mind" ent... (read more)

1Wei Dai
An agent using UDT doesn't necessarily have a beliefs/values separation, but still has the properties of preferences and decision making. Or at least, it only has beliefs about mathematical facts, not about empirical facts. Maybe I should have made it clear that I was mainly talking about empirical beliefs in the post.

Thought they nearly discovered my true identity....

The meetup has been good fun. Much conversing, coffee, and a restaurant meal.


It would be an evolutionary win to be interested in things that the other gender is interested in.

Why? I think that perhaps your reasoning is that you date someone based upon whether they have the same interests as you. But I suspect that this may be false - i.e. we confabulate shared interests as an explanation, where the real explanation is status or looks.

Upvoted. I came to exactly the same conclusion. Men are extremophiles, and in (7), Eliezer explained why.

As to Anna's point below, we should ask how much good can be expected to accumulate from trying to go against nature here, versus how difficult it will be. I.e. spending effort X on attracting more women to LW must be balanced against spending that same effort on something else.

If high intellectual curiosity is a rare trait in males and a very rare one in females, then given that you are here this doesn't surprise me. You are more intellectually curious than most of the men I have met, and the men I have met are themselves a high-intellectual-curiosity sample.

his group feels "cliquey". There are a lot of in-phrases and technical jargon

every incorrect comment is completely and utterly destroyed by multiple people.

These apply to both genders...

The obvious evolutionary psychology hypothesis behind the imbalanced gender ratio in the iconoclastic community is the idea that males are inherently more attracted to gambles that seem high-risk and high-reward; they are more driven to try out strange ideas that come with big promises, because the genetic payoff for an unusually successful male has a much higher upper bound than the genetic payoff for an unusually successful female. ... a difference as basic as "more male teenagers have a high cognitive temperature" could prove very hard to ad

... (read more)

psychology, yes, definitely. Bio, I do not know, but I would like to see what it looks like for evo psych.

Upvoted for a sensible analysis of the problem. Want girls? Go get them. My experience is that a common mistake amongst academically inclined people is to expect reality to reward them for doing the right thing - for example men on LW may (implicitly, without realizing that they are doing it) expect attractive, eligible women to be abundant in the risk-mitigation movement, because mitigating existential risks is the right* thing to do, and the universe is a just place which rewards good behavior.

The reality of the situation is that a male who spends time ... (read more)

wedrifid110

men on LW may (implicitly, without realizing that they are doing it) expect attractive, eligible women to be abundant in the risk-mitigation movement, because mitigating existential risks is the right* thing to do, and the universe is a just place which rewards good behavior.

Really? I find it hard to imagine that kind of naivety.

7Vladimir_Nesov
Well, there are these well-known concepts of unsupervised universe and mind projection fallacy...

(Also, if anybody knows or can estimate, are the gender ratios similar in the relevant areas of academia?)

All male-biased as far as I know. (Math, philosophy, AI/CS)

3Jack
Aren't biology and psychology solidly balanced/skewed female?

I assign a 99.9% probability to there being more male readers than female readers of LW. The most recent LW meetup that I attended had a gender ratio of roughly 20:1 male:female.

Males who feel that they are competing for a small pool of females will attempt to gain status over each other, diminishing the amount of honest, rational dialogue, and replacing it with oneupmanship.

Hence the idea of mixing LW - in its current state - with dating may not be good.

However, there is the possibility of re-framing LW so that it appeals more to women. Perhaps we... (read more)

3Jack
If you haven't read Of Gender and Rationality and the accompanying comments lately it is worth a reread. There are so many hypotheses listed that we'd need another go-around with the specific goal of assigning probabilities to the most likely ones. It also looks like there were a number of popular proposals that were never acted upon. One or more of us needs to go through that thread and write a summary.
5ata
Is there a way to re-frame LW as being about "charitable sacrifice" without significantly straying from the general goal of "refining the art of human rationality" (which may or may not be charitable/sacrificial)? What do you see as the essence of its current framing, and what is the evidence that women would respond better to the charitable-sacrifice frame? (Normally I'd respond to the quoted comment with "That's sexist nonsense" and leave it at that, but I am trying to be Socratic about it.) (Also, if anybody knows or can estimate, are the gender ratios similar in the relevant areas of academia?)
Tiiba150

"I assign a 99.9% probability to there being more male readers than male readers of LW"

I expect that you have a VERY GOOD reason. As it is, I cannot help but disagree.

2MBlume
not good =/

a truly remarkable observation: quantum measure seems to behave in a way that would avoid this trilemma completely

Which is why Roger Penrose is so keen to show that consciousness is a quantum phenomenon.

"Singleton

A world government or superpower imposes a population control policy over the whole world."

  • it has to be stable essentially forever. It seems to me that no human government could achieve this, because of the randomness of human nature. Therefore, only an AI would suffice.
0Wei Dai
Not necessarily. A singleton could come to power, genetically engineer humanity with a fertility cap, then fall. In the long run, evolution will probably find a way around such a cap, but in the mean time another singleton could arise and reset the clock.

I've observed far more clannishness among children than political perspicuity

but what about the relative amounts in children vs adults?

0DanArmak
Of course children are more clannish than adults. But the "clan" of a child is that of its parents, not of its friends and peers. Adults can move to a new clan, band together to start a clan or sub-clan, replace or influence a clan's leadership. Children are pretty much powerless and are tied to their parents' clan. If anything ever really threatens that bond, I expect "clannishness" to completely override other priorities.

A priori we should expect children to be genuine knowledge seekers, because in our EEA there would have been facts of life (such as which plants were poisonous) that were important to know early on. Our EEA was probably sufficiently simple and unchanging that once you were an adult there were few new abstract facts to know.

This "story" explains why children ask adults awkward questions about politics, often displaying a wisdom apparently beyond their age. In reality, they just haven't traded in their curiosity for signalling yet.

At least, that is one possible hypothesis.

2Tyrrell_McAllister
But the greater vulnerability of children means that we should also expect them to be more clannish. They should be all the more eager to demonstrate their loyalty to a group, because they rely more on support from others to remain alive. I've observed far more clannishness among children than political perspicuity. I don't see that there's much displaying of "wisdom apparently beyond their age" in need of explanation.
5DanArmak
I do expect children to be knowledge seekers in a sense. When they see their parents avoid a plant, they learn to avoid it also. When they hear them say that binge drinkers should go to church more, they learn to say this also. In both cases it is the same behavior. The difference between our descriptions is that calling them "knowledge seekers" implies some kind of deliberate rationality, whereas they are really just executing the adaptation of copying their parents. Most children who repeat their parents' political views won't try to understand what the words actually mean, or check different sayings for consistency. Of course this is a generally good adaptation to have. Even if children had better innate rational skills and even if they could fact-check their parents' words, there's little benefit to a dependent child from ever disagreeing with its parent on politicized issues.

I do sometimes wonder what proportion of people who think about political matters are asking questions with genuine curiosity, versus engaging in praise for the idea that they and their group have gone into a happy death spiral about.

I suspect that those who ask with genuine curiosity are overwhelmingly children.

EDIT: Others disagree that children are more genuinely curious. Perhaps it's just the nerds who ask genuine questions then?

3wedrifid
And what proportion of that genuine curiosity is an adaptation for gaining information and what proportion is an adaptation that encourages signalling a willingness to absorb the happy death.
1DanArmak
What makes you believe that? It's as good a theory that they're just trying to find out what Big Idea group they belong to so they can give the right answers / political suggestions when they grow up.

Great! Now that we've both signalled our allegiance to the h+ ideology, would you like to mate with me!?

For an explanation of why I call this "Hansonian", see, for example, this. Hanson has lots of posts on how charity, ideology, etc. is all about affiliating with a tribe and finding mates.

1eirenicon
Hansionain, twice? Really? As an aside, I love what you get when you google Hansonian. Most of the top results are in reference to Robin Hanson, and among my favorites are "Hansonian Normality", the "Hansonian world", and "Hansonian robot growth". (Un?)Fortunately, "Hansonian abduction" is attributed to a different Hanson. I wish my name was an adjective.