[POLL] Slutwalk

-9 magfrump 08 May 2011 07:00AM

I recently heard about the upcoming event (or set of events) Slutwalk.  I realize that this is somewhat political and may have some mind-killing effects, but my main interest is in the Less Wrong reaction to the idea.  From the Wikipedia page[1]:

The "Toronto Slut Walk" refers to a protest held on April 3, 2011 in Toronto. Protesters walked from Queen's Park (Toronto) to the Toronto Police Headquarters located on Central Street [1]. These protesters were dressed in revealing clothing and holding signs in order to reject the belief that female rape victims are "asking for it"[2]. They marched in response to remarks made by a Toronto police officer and judge. Women are also organizing other "slut walks" around Canada and the United States[3][4], including one scheduled for August 20th, 2011 in New York City[5].

Before continuing to read, please answer the poll below as to how you feel about the idea of the "Slutwalk."

 

I have many friends who are involved with the Slutwalk, and my first impression is that it is a good idea: framing and terminology, even if they are not a strong part of policy decisions, can have large effects on personal wellbeing.  Also, while dressing more modestly may have some effect on sexual assault, having an authority put any of the onus for a crime on the victim sharply reduces the disincentive for perpetrators.

On the other hand, I have been known to be clueless before in matters of activism, and I recall that Robin Hanson has made cutting remarks about protest being about attracting mates and making a show of identifying with groups, and this certainly seems like it could fit that description to a T.  So I am curious what others' reactions are.

This is a political issue, and we all know politics is the mind-killer, so I would mostly like to see what people think of this idea; specifically whether it is controversial, heavily supported, or heavily disapproved of.

I will attempt to reformat if I can figure out how to work the formatting.

EDIT: Rephrased poll options and removed references to clusters, at popular request.

References:

[1]: http://en.wikipedia.org/wiki/Toronto_Slutwalk

Xtranormal: Nano and AI Danger

-2 lukeprog 26 April 2011 07:01PM

Here. One of those videos generated automatically from a text. Also see.

Genes are overrated

-11 taw 20 April 2011 12:03AM

This is hardly news, but this Guardian article reminded me of it: genes are really overrated, both among the unwashed masses and here on Less Wrong.

I don't want to repeat things which have been said by so many before me, so I'll just link a lot.

Summary of evidence against genes being important:

  • Almost no genes correlating with anything interesting have been found. This is totally crushing evidence. If genes were important, the Bayesian surprise of this lack of results would be in the land of the impossible.
  • Massive, very fast changes over time, within the same populations, in characteristics that are supposedly highly heritable. To name a few: the Flynn effect, changes in people's height, the obesity epidemic.
  • Plenty of evidence of very large, very reliable associations between various environmental factors and important outcomes. For example, unlike with genes and cancer, where we get just noise, we know very well how much smoking increases the chance of lung cancer.

Summary of evidence for genes being important:

  • Some twin and adoption studies, which rely on very tiny, highly atypical samples and a lot of statistical manipulation to get the results they want. To make matters worse, the results they got were wildly inconsistent.

And there's nothing more. Decades ago, before we had direct evidence of the lack of correlation between genes and outcomes, it was excusable to believe genes matter a lot, even if it was never the best interpretation of the data. Now it's just going against the bulk of the evidence.

And in case you're wondering how twin studies could show high heritability when everything else says otherwise, I have two examples for you.

This one from a critique of twin studies by Kamin and Goldberger:

"A case in point is provided by the recent study of regular tobacco use among SATSA's twins (24). Heritability was estimated as 60% for men, only 20% for women. Separate analyses were then performed for three distinct age cohorts. For men, the heritability estimates were nearly identical for each cohort. But for women, heritability increased from zero for those born between 1910 and 1924, to 21% for those in the 1925-39 birth cohort, to 64% for the 1940-58 cohort. The authors suggested that the most plausible explanation for this finding was that "a reduction in the social restrictions on smoking in women in Sweden as the 20th century progressed permitted genetic factors increasing the risk for regular tobacco use to express themselves." If purportedly genetic factors can be so readily suppressed by social restrictions, one must ask the question, "For what conceivable purpose is the phenotypic variance being allocated?" This question is not addressed seriously by MISTRA or SATSA. The numbers, and the associated modeling, appear to be ends in themselves."

As the final nail in the coffin of heredity studies:

The Body-Mass Index of Twins Who Have Been Reared Apart

We conclude that genetic influences on body-mass index are substantial, whereas the childhood environment has little or no influence. These findings corroborate and extend the results of earlier studies of twins and adoptees. (N Engl J Med 1990; 322:1483–7.)

Or as paraphrased by a certain commenter on Marginal Revolution:

IOWs, the reason why white kids of today are much fatter than white kids of the 50s and 60s is due to genetic influences and environment has little or no influence

To summarize: heredity studies are pretty much totally worthless data manipulation. Once we accept that, all the other evidence points to environment being extremely important and genes mattering very little. We should accept that already.

wireless-heading, value drift and so on

-3 h-H 16 April 2011 06:45AM

A typical image of the wire-head is that of a guy with his brain connected via a wire thingy to a computer, living in a continuous state of pleasure, sort of like being drugged up for life.

What I mean by wireless-heading (which is not such an elegant term, but anyway) is the idea of little to no value drift. Clippy is usually brought up as a most dangerous AI that we should avoid creating at all costs, yet what's the point of creating copies of us and tiling the universe with them? How is that different from what Clippy does?

By 'us' I mean beings who share our intuitive understanding, or who can agree with us on things like morality, joy, not being bored, etc.

Shouldn't we focus on engineered/controlled value drift rather than preventing it entirely? Is that possible to program into an AI? Somehow I don't think so. It seems to me that the whole premise of a single benevolent AI depends to a large extent on the similarity of basic human drives: supposedly we're so close to each other that preventing value drift is not a big deal.

But once we get really close to the singularity, all sorts of technologies will cause humanity to 'fracture' into so many different groups that inevitably some will have what we might call 'alien minds': minds so different from most baseline humans as they are now that there wouldn't be much hope of convincing them to 'rejoin the fold' and not create an AI of their own. For all we know, they might even have an easier time creating an AI that's friendly to them than baseline humans would. Considering this a black swan event, or one whose timing is impossible to predict, what should we do?

discuss.

Do people think in a Bayesian or Popperian way?

-22 curi 10 April 2011 10:18AM
People think A&B is more likely than A alone, if you ask the right question. That's not very Bayesian; as far as you Bayesians can tell it's really quite stupid.
Is that maybe evidence that Bayesianism is failing to model how people actually think?
Popperian philosophy can make sense of this (without hating on everyone! it's not good to hate on people when there's better options available). It explains it like this: people like explanations. When you say "A happened because B happened" it sounds to them like a pretty good explanatory theory which makes sense. When you say "A alone" they don't see any explanation and they read it as "A happened for no apparent reason" which is a bad explanation, so they score it worse.
To concretize this, you could use A = economic collapse and B = nuclear war.
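The probability rule the post says people violate is simple arithmetic. A minimal sketch with made-up numbers for the economic-collapse/nuclear-war example (the 0.10 and 0.30 figures are purely illustrative):

```python
# Hypothetical probabilities for the post's example:
# A = economic collapse, B = nuclear war.
p_A = 0.10           # P(A): economic collapse
p_B_given_A = 0.30   # P(B | A): nuclear war, given the collapse

p_A_and_B = p_A * p_B_given_A  # P(A and B) by the product rule
# The conjunction can never be more probable than either conjunct alone:
assert p_A_and_B <= p_A
```

Judging "A because B" as more likely than "A" alone violates this inequality no matter which numbers are plugged in, which is exactly the conjunction effect the post is discussing.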
People are looking for good explanations. They are thinking in a Popperian fashion.
Isn't it weird how you guys talk about all these biases, which basically consist of people not thinking in the way you think they should, but when someone says "hey, actually they think in this way Popper worked out" you think that's crazy because the Bayesian model must be correct? Why did you find all these counterexamples to your own theory and never notice that they mean your theory is wrong? In the cases where people don't think in a Popperian way, Popper explains why (mostly because of the justificationist tradition informing many mistakes since Aristotle).
Scope Insensitivity - The human brain can't represent large quantities: an environmental measure that will save 200,000 birds doesn't conjure anywhere near a hundred times the emotional impact and willingness-to-pay of a measure that would save 2,000 birds.
Changing the number does not change most of the explanations involved, such as why helping birds is good, what the person can afford to spare, how much charity it takes the person to feel altruistic enough (or moral enough, involved enough, helpful enough, whatever), etc... Since the major explanatory factors they were considering don't change in proportion to the number of birds, their answer doesn't change proportionally either.
Correspondence Bias, also known as the fundamental attribution error, refers to the tendency to attribute the behavior of others to intrinsic dispositions, while excusing one's own behavior as the result of circumstance.
This happens because people usually know the explanations/excuses for why they did stuff, but they don't know them for others. And they have more reason to think of them for themselves.
Confirmation bias, or Positive Bias is the tendency to look for evidence that confirms a hypothesis, rather than disconfirming evidence.
People do this because of the justificationist tradition, dating back to Aristotle, which Bayesian epistemology is part of, and which Popper rejected. This is a way people really don't think in the Popperian way -- but they could and should.
Planning Fallacy - We tend to plan envisioning that everything will go as expected. Even assuming that such an estimate is accurate conditional on everything going as expected, things will not go as expected. As a result, we routinely see outcomes worse than the ex ante worst case scenario.
This is also caused by the justificationist tradition, which Bayesian epistemology is part of. It's not fallibilist enough. This is a way people really don't think in the Popperian way -- but they could and should.
Well, that's part of the issue. The other part is they come up with a good explanation of what will happen, and they go with that. That part of their thinking fits what Popper said people do. The problem is not enough criticism, which is from the popularity of justificationism.
Do We Believe Everything We're Told? - Some experiments on priming suggest that mere exposure to a view is enough to get one to passively accept it, at least until it is specifically rejected.
That's very Popperian. The Popperian way is that you can make conjectures however you want, and you only reject them if there's a criticism. No criticism, no rejection. This contrasts with the justificationist approach in which ideas are required to (impossibly) have positive support, and the focus is on positive support rather than criticism (thus causing, e.g., Confirmation Bias).
Illusion of Transparency - Everyone knows what their own words mean, but experiments have confirmed that we systematically overestimate how much sense we are making to others.
This one is off topic but there's several things I wanted to say. First, people don't always know what their own words mean. People talking about tricky concepts like God, qualia, or consciousness often can't explain what they mean by the words if asked. Sometimes people even use words without knowing the definition, they just heard it in a similar circumstance another time or something.
The reason others often don't understand us is the nature of communication. To communicate, what has to happen is that the other person creates knowledge of what idea(s) you are trying to express to him. That means he has to make guesses about what you are saying and use criticisms to improve those guesses (e.g. by ruling out stuff incompatible with the words he heard you use). In this way Popperian epistemology lets us understand communication, and why it's so hard.
Evaluability - It's difficult for humans to evaluate an option except in comparison to other options. Poor decisions result when a poor category for comparison is used. Includes an application for cheap gift-shopping.
It's because they are trying to come up with a good explanation of what to buy. And "this one is better than this other one" is a pretty simple and easily available kind of explanation to create.
The Allais Paradox (and subsequent followups) - Offered choices between gambles, people make decision-theoretically inconsistent decisions.
How do you know that kind of thing and still think people reason in a Bayesian way? They don't. They just guess at what to gamble, and the quality of the guesses is limited by what criticisms they use. If they don't know much math then they don't subject their guesses to much mathematical criticism. Hence this mistake.

Profile of Eric Schadt

1 NancyLebovitz 08 April 2011 06:09PM

This article leaves me with a very mixed impression-- it's excessively gosh-wow, but it matches my beliefs that things are generally more complex than they look, and this is especially true about biology.

Schadt doesn't seem to have much web presence, which makes it harder to judge anything about what he's doing.

His background was very intellectually deprived-- he's ended up doing serious biology (or at least in high-status jobs in the field) through a combination of moderately good luck and extremely high drive. It leaves me wondering how much talent just gets lost.

ETA: I forgot to mention that the reason I posted is that if Schadt is right that biological systems are extremely complex, that it isn't feasible to develop drugs based on counteracting the effects of single genes, but that this complexity can be met by people doing networked science, then it's very important.

The null model of science

19 Johnicholas 26 March 2011 01:53PM

Jonah Lehrer wrote about the (surprising?) power of publication bias.

http://m.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer?currentPage=all

Cosma Shalizi (I think) said something, or pointed to something, about the null model of science - what science would look like if there were no actual effects, just statistical anomalies that look good at first. I can't find the reference, though.

 


[Link] The New Humanism

2 curiousepic 09 March 2011 02:19PM

Sleeping Beauty

-3 DanielLC 01 February 2011 10:13PM

Someone comes up to you and tells you he flipped ten coins for ten people. They were fair coins, but only three came up heads. What is the probability yours was heads?

There are three people of ten who got heads. There is a 30% chance that you're one of those three, right?
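The 30% figure can be checked by simulation: flip ten fair coins, keep only the runs where exactly three came up heads, and ask how often a uniformly chosen person's coin is heads. A minimal sketch (the sample size and seed are arbitrary):

```python
import random

random.seed(0)
trials = hits = 0
while trials < 100_000:
    flips = [random.random() < 0.5 for _ in range(10)]
    if sum(flips) != 3:
        continue  # condition on exactly three of ten coins landing heads
    trials += 1
    hits += flips[random.randrange(10)]  # "your" coin, chosen uniformly

print(hits / trials)  # close to 0.3
```

By symmetry, each of the ten positions is equally likely to hold one of the three heads, so the estimate converges to 3/10.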

Now take the sleeping beauty paradox. A coin is flipped. If it lands on heads, the subject is woken twice. If it lands on tails, the subject is woken once. For simplicity, assume it happens exactly once, and there are one trillion person-days. You wake up groggy in the morning, and take a second to remember who you are.

If the coin landed on tails, that would mean that there is a one in a trillion chance that you will remember that you're the subject. If it was heads, it would be two in a trillion. As such, if you do remember being the subject, the probability that it's heads is P(H|U) = P(U|H)*P(H)/[P(U|H)*P(H)+P(U|T)*P(T)] = (2/trillion)*(1/2)/[(2/trillion)*(1/2)+(1/trillion)*(1/2)] = 2/3, where H is the coin landing heads, T is the coin landing tails, and U is you being the subject.

Technically, it would be slightly less than 2/3, since there will be one more person-day if the coin lands on heads.
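The posterior above can be computed exactly with rational arithmetic (N stands in for the post's trillion person-days; its value cancels out of the final answer):

```python
from fractions import Fraction

N = 10**12  # total person-days; any large value gives the same posterior
p_H = p_T = Fraction(1, 2)          # fair coin prior
p_U_given_H = Fraction(2, N)        # two awakenings if heads
p_U_given_T = Fraction(1, N)        # one awakening if tails

# Bayes' theorem: P(H|U) = P(U|H)P(H) / [P(U|H)P(H) + P(U|T)P(T)]
p_H_given_U = (p_U_given_H * p_H) / (p_U_given_H * p_H + p_U_given_T * p_T)
print(p_H_given_U)  # 2/3
```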

A Possible Solution to Parfit's Hitchhiker

-5 Dorikka 28 January 2011 07:21PM

I had what appeared to me to be a bit of insight regarding trade between selfish agents. I disclose that I have not read TDT or any books on decision theory, so what I say may be blatantly incorrect. However, I judged that posting this here was of higher utility rather than waiting until I had read up on decision theory -- I have no intention of reading up on decision theory any time soon because I have more important (to me) things to do. This is not meant to deter criticism of the post itself -- please tell me why I'm wrong if I am. The following paragraph is primarily an introduction.

When a rational agent predicts that he is interacting with another rational agent and that the other agent has a motive for deceiving him (and both have a large amount of computing power), he will not use any emotional basis for ‘trust.’ Instead, he will see the other agent’s commitments as truth claims which may be true or false depending on what action will optimize the other agent’s utility function at the time the commitment is to be fulfilled. Agents which know something of each other’s utility functions may bargain directly on such terms, even when each of their utility functions is largely (or completely) dominated by selfishness.

This leads to a solution to Parfit’s hitchhiker, allowing selfish agents to precommit to future trade. Give Ekman all of your clothes and state that you will buy them back when you arrive, at a price higher than the worth of your clothes to him but lower than their worth to yourself. Furthermore, tell him that because you don’t have anything more on you, he can’t get any more money out of you than an amount infinitesimally smaller than what your clothes are worth to you, and accurately tell him how much your clothes are worth to yourself (you must tell the truth here due to his microexpression-reading capability). He should judge your words as truth, considering that you have told the truth. Of course, you lose regardless if the value of your clothes to yourself is less than the utility he loses by taking you to town.

Assumptions made regarding Parfit's hitchhiker: 1. Physical assault is judged to be of very low utility by both agents and so isn't a factor in the problem. 2. Trades in the present time may be executed without prompting an infinite cycle of "No, you give me X first."
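The conditions in the scheme above can be written out explicitly. A minimal sketch with hypothetical utility numbers; the helper name and all values are made up for illustration:

```python
def trade_goes_through(v_you, v_driver, ride_cost, price):
    """Check whether the clothes-as-collateral deal is credible for both sides.

    v_you     -- what the clothes are worth to the hitchhiker
    v_driver  -- what the clothes are worth to the driver
    ride_cost -- the driver's utility cost of the trip to town
    price     -- the agreed buy-back price
    (all values in arbitrary utility units)
    """
    you_will_pay = price < v_you               # paying beats losing the clothes
    driver_prefers_selling = price > v_driver  # selling back beats keeping them
    driver_accepts = price > ride_cost         # the trip is worth it to the driver
    return you_will_pay and driver_prefers_selling and driver_accepts

print(trade_goes_through(v_you=100, v_driver=20, ride_cost=50, price=60))  # True
print(trade_goes_through(v_you=40, v_driver=20, ride_cost=50, price=30))   # False
```

The second call illustrates the failure case noted in the post: when the clothes are worth less to you than the ride costs the driver, no price satisfies all three conditions, so you lose regardless.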
