[POLL] Slutwalk
I recently heard about the upcoming event (or set of events) Slutwalk. I realize that this is somewhat political and may have some mind-killing effects, but my main interest is in the Less Wrong reaction to the idea. From the wikipedia page[1]:
The "Toronto Slut Walk" refers to a protest held on April 3, 2011 in Toronto. Protesters walked from Queen's Park (Toronto) to the Toronto Police Headquarters located on Central Street [1]. These protesters were dressed in revealing clothing and holding signs in order to reject the belief that female rape victims are "asking for it"[2]. They marched in response to remarks made by a Toronto police officer and judge. Women are also organizing other "slut walks" around Canada and the United States[3][4], including one scheduled for August 20th, 2011 in New York City[5].
Before continuing to read, please answer the poll below as to how you feel about the idea of the "Slutwalk."
I have many friends who are involved with the Slutwalk, and my first impression is that it is a good idea: framing and terminology, even if they are not a strong part of policy decisions, can have large effects on personal wellbeing. Also, while dressing more modestly may have some effect on sexual assault, having an authority place any onus for a crime on the victim sharply reduces the disincentive for perpetrators.
On the other hand, I have been known to be clueless in matters of activism before, and I recall that Robin Hanson has made cutting remarks about protest being about attracting mates and making a show of identifying with groups; this certainly seems like it could fit that description to a T. So I am curious what others' reactions are.
This is a political issue, and we all know politics is the mind-killer, so I would mostly like to see what people think of this idea: specifically, whether it is controversial, heavily supported, or heavily disapproved of.
I will attempt to reformat if I can figure out how to work the formatting.
EDIT: Rephrased poll options and removed references to clusters, at popular request.
Genes are overrated
This is hardly news, but this Guardian article reminded me of it: genes are really overrated, both among the unwashed masses and here on Less Wrong.
I don't want to repeat things which have been said by so many before me, so I'll just link a lot.
Summary of evidence against genes being important:
- Almost no genes correlating with anything interesting have been found. This is totally crushing evidence: if genes were important, the Bayesian surprise of this lack of results would put it in the land of the impossible.
- Massive, very fast changes over time in supposedly highly heritable characteristics within the same populations. To name a few: the Flynn effect, changes in people's height, and the obesity epidemic.
- Plenty of evidence of very large, very reliable associations between various environmental factors and important outcomes. For example, unlike with genes and cancer, where we get just noise, we know very well how much smoking increases the chance of lung cancer.
Summary of evidence for genes being important:
- Some twin and adoption studies, which rely on very tiny, highly atypical samples and a lot of statistical manipulation to get the results they want. To make matters worse, the results they got were wildly inconsistent.
And there's nothing more. Decades ago, before we had direct evidence of the lack of correlation between genes and outcomes, it was excusable to believe genes matter a lot, even if that was never the best interpretation of the data. Now it's just going against the bulk of the evidence.
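For context on where the heritability percentages quoted below come from: classical twin studies typically use Falconer's formula, which estimates heritability from the gap between identical-twin and fraternal-twin correlations. A minimal sketch, with purely hypothetical correlations (not from any real study):

```python
def falconer_h2(r_mz, r_dz):
    """Falconer's formula: broad heritability h^2 = 2 * (r_MZ - r_DZ),
    where r_MZ and r_DZ are the phenotypic correlations of identical
    and fraternal twin pairs, respectively."""
    return 2 * (r_mz - r_dz)

# Illustrative only: with an identical-twin correlation of 0.7 and a
# fraternal-twin correlation of 0.4, the formula attributes 60% of
# phenotypic variance to genes.
print(round(falconer_h2(0.7, 0.4), 2))
```

Note how sensitive the estimate is to small differences between two sample correlations, which is one reason small, atypical twin samples can produce the wildly varying numbers discussed below.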
And in case you're wondering how twin studies could show high heritability when everything else says otherwise, I have two examples for you.
This one from a critique of twin studies by Kamin and Goldberger:
"A case in point is provided by the recent study of regular tobacco use among SATSA's twins (24). Heritability was estimated as 60% for men, only 20% for women. Separate analyses were then performed for three distinct age cohorts. For men, the heritability estimates were nearly identical for each cohort. But for women, heritability increased from zero for those born between 1910 and 1924, to 21% for those in the 1925-39 birth cohort, to 64% for the 1940-58 cohort. The authors suggested that the most plausible explanation for this finding was that "a reduction in the social restrictions on smoking in women in Sweden as the 20th century progressed permitted genetic factors increasing the risk for regular tobacco use to express themselves." If purportedly genetic factors can be so readily suppressed by social restrictions, one must ask the question, "For what conceivable purpose is the phenotypic variance being allocated?" This question is not addressed seriously by MISTRA or SATSA. The numbers, and the associated modeling, appear to be ends in themselves."
As the final nail in the coffin of heredity studies:
The Body-Mass Index of Twins Who Have Been Reared Apart
We conclude that genetic influences on body-mass index are substantial, whereas the childhood environment has little or no influence. These findings corroborate and extend the results of earlier studies of twins and adoptees. (N Engl J Med 1990; 322:1483–7.)
Or as paraphrased by a certain commenter on Marginal Revolution:
IOWs, the reason why white kids of today are much fatter than white kids of the 50s and 60s is due to genetic influences and environment has little or no influence
To summarize: heredity studies are pretty much totally worthless data manipulation. Once we accept that, all the other evidence points to environment being extremely important and genes mattering very little. We should accept that already.
wireless-heading, value drift and so on
A typical image of the wirehead is a guy with his brain connected by a wire to a computer, living in a continuous state of pleasure, sort of like being drugged up for life.
What I mean by wireless-heading (not such an elegant term, but anyway) is the idea of little to no value drift. Clippy is usually brought up as the most dangerous AI, one we should avoid creating at all costs. Yet what's the point of creating copies of us and tiling the universe with them? How is that different from what Clippy does?
By 'us' I mean beings who share our intuitive understanding, or who can agree with us on things like morality, joy, not being bored, and so on.
Shouldn't we focus on engineered/controlled value drift rather than preventing it entirely? Is that possible to program into an AI? Somehow I don't think so. It seems to me that the whole premise of a single benevolent AI depends to a large extent on the similarity of basic human drives; supposedly we're so close to each other that preventing value drift is not a big deal.
But once we get really close to the Singularity, all sorts of technologies will cause humanity to 'fracture' into so many different groups that inevitably some of them will have what we might call 'alien minds': minds so different from most baseline humans as they are now that there wouldn't be much hope of convincing them to 'rejoin the fold' and not create an AI of their own. For all we know, they might even have an easier time creating an AI that's friendly to them than baseline humans would. Considering this a black swan event, or one whose timing is impossible to predict, what should we do?
Discuss.
Do people think in a Bayesian or Popperian way?
Scope Insensitivity - The human brain can't represent large quantities: an environmental measure that will save 200,000 birds doesn't conjure anywhere near a hundred times the emotional impact and willingness-to-pay of a measure that would save 2,000 birds.
Correspondence Bias, also known as the fundamental attribution error, refers to the tendency to attribute the behavior of others to intrinsic dispositions, while excusing one's own behavior as the result of circumstance.
Confirmation Bias, or Positive Bias, is the tendency to look for evidence that confirms a hypothesis, rather than for disconfirming evidence.
Planning Fallacy - We tend to plan envisioning that everything will go as expected. Even assuming that such an estimate is accurate conditional on everything going as expected, things will not go as expected. As a result, we routinely see outcomes worse than the ex ante worst case scenario.
Do We Believe Everything We're Told? - Some experiments on priming suggest that mere exposure to a view is enough to get one to passively accept it, at least until it is specifically rejected.
Illusion of Transparency - Everyone knows what their own words mean, but experiments have confirmed that we systematically overestimate how much sense we are making to others.
Evaluability - It's difficult for humans to evaluate an option except in comparison to other options. Poor decisions result when a poor category for comparison is used. Includes an application for cheap gift-shopping.
The Allais Paradox (and subsequent followups) - Offered choices between gambles, people make decision-theoretically inconsistent decisions.
Profile of Eric Schadt
This article leaves me with a very mixed impression: it's excessively gosh-wow, but it matches my belief that things are generally more complex than they look, and this is especially true of biology.
Schadt doesn't seem to have much web presence, which makes it harder to judge anything about what he's doing.
His background was very intellectually deprived; he ended up doing serious biology (or at least holding high-status jobs in the field) through a combination of moderately good luck and extremely high drive. It leaves me wondering how much talent just gets lost.
ETA: I forgot to mention the reason I posted: if Schadt is right that biological systems are extremely complex, that it isn't feasible to develop drugs based on counteracting the effects of single genes, but that this complexity can be met by people doing networked science, then it's very important.
The null model of science
Jonah Lehrer wrote about the (surprising?) power of publication bias.
http://m.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer?currentPage=all
Cosma Shalizi (I think) said something, or pointed to something, about the null model of science - what science would look like if there were no actual effects, just statistical anomalies that look good at first. I can't find the reference, though.
Sleeping Beauty
Someone comes up to you and tells you he flipped ten coins for ten people. They were fair coins, but only three came up heads. What is the probability yours was heads?
Three of the ten people got heads, so there is a 30% chance that you're one of those three, right?
Now take the sleeping beauty paradox. A coin is flipped. If it lands on heads, the subject is woken twice. If it lands on tails, the subject is woken once. For simplicity, assume it happens exactly once, and there are one trillion person-days. You wake up groggy in the morning, and take a second to remember who you are.
If the coin landed on tails, there is a one-in-a-trillion chance that you will remember that you're the subject. If it was heads, it is two in a trillion. As such, if you do remember being the subject, the probability that the coin was heads is P(H|U) = P(U|H)·P(H) / [P(U|H)·P(H) + P(U|T)·P(T)] = (2/trillion)·(1/2) / [(2/trillion)·(1/2) + (1/trillion)·(1/2)] = 2/3, where H is "the coin lands heads," T is "the coin lands tails," and U is "you are the subject."
Technically, it would be slightly less than 2/3, since there will be one more person-day if the coin lands on heads.
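The Bayesian update above can be checked directly with exact rational arithmetic. This is a minimal sketch of the calculation in the post, ignoring the extra-person-day correction for simplicity:

```python
from fractions import Fraction

N = 10**12  # total person-days, as in the post's setup

# Likelihood that a given person-day finds you waking as the subject
p_u_given_h = Fraction(2, N)   # heads: the subject is woken twice
p_u_given_t = Fraction(1, N)   # tails: the subject is woken once
p_h = p_t = Fraction(1, 2)     # fair-coin prior

# Bayes' theorem, with priors included in the denominator
posterior = (p_u_given_h * p_h) / (p_u_given_h * p_h + p_u_given_t * p_t)
print(posterior)  # 2/3
```

Note that the trillion cancels out of the ratio, so the thirder answer does not depend on the number of person-days assumed.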
A Possible Solution to Parfit's Hitchhiker
I had what appeared to me to be a bit of insight regarding trade between selfish agents. I disclose that I have not read TDT or any books on decision theory, so what I say may be blatantly incorrect. However, I judged that posting this here was of higher utility than waiting until I had read up on decision theory; I have no intention of doing so any time soon because I have more important (to me) things to do. This is not meant to deter criticism of the post itself: please tell me why I'm wrong if I am. The following paragraph is primarily an introduction.
When a rational agent predicts that he is interacting with another rational agent who has a motive for deceiving him (and both have a large amount of computing power), he will not use any emotional basis for 'trust.' Instead, he will see the other agent's commitments as truth claims, true or false depending on what action will optimize the other agent's utility function at the time the commitment is to be fulfilled. Agents which know something of each other's utility functions may bargain directly on such terms, even when each of their utility functions is largely (or completely) dominated by selfishness.
This leads to a solution to Parfit's hitchhiker, allowing selfish agents to precommit to future trade. Give Ekman all of your clothes and state that you will buy them back when you arrive, for an amount higher than the clothes' worth to him but lower than their worth to yourself. Furthermore, tell him that because you don't have anything more on you, he can't get any more money out of you than an amount infinitesimally smaller than what your clothes are worth to you, and accurately tell him how much that is (you must tell the truth here, because of his microexpression-reading capability). He should judge your words as truth, given that you have told the truth. Of course, you lose regardless if the value of your clothes to yourself is less than the utility he loses by taking you to town.
Assumptions made regarding Parfit's hitchhiker: 1. Physical assault is judged to be of very low utility by both agents and so isn't a factor in the problem. 2. Trades in the present time may be executed without prompting an infinite cycle of "No, you give me X first."
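The incentive structure of the clothes-escrow scheme can be sketched with a few numbers. All dollar values here are hypothetical, chosen only to satisfy the scheme's one constraint: the buy-back price must lie strictly between the clothes' worth to the driver and their worth to you.

```python
# Hypothetical valuations for the escrow scheme (illustrative only)
clothes_worth_to_you = 100     # what the clothes are worth to you
clothes_worth_to_driver = 40   # what they are worth to the driver
buyback_price = 80             # chosen strictly between 40 and 100
ride_cost_to_driver = 30       # driver's utility loss from the detour

# Driver compares keeping the clothes vs. driving you and selling them back
keep_clothes = clothes_worth_to_driver                # payoff if he refuses
drive_and_sell = buyback_price - ride_cost_to_driver  # payoff if he drives
driver_gain = drive_and_sell - keep_clothes           # must be positive

# Your incentive at buy-back time: paying 80 to regain clothes worth 100
# beats walking away, so the promise is credible even from a selfish agent.
your_gain_from_paying = clothes_worth_to_you - buyback_price

assert driver_gain > 0 and your_gain_from_paying > 0
```

With these numbers both parties strictly prefer the trade at every decision point, which is what lets the commitment survive the driver's lie detection; if `ride_cost_to_driver` exceeded the feasible price window, the deal would fail, matching the post's final caveat.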