All of marc's Comments + Replies

marc110

I attended the minicamp last summer, at more personal expense than most participants, since I flew in from Europe (I did have other things to do in California, so the cost wasn't entirely for minicamp).

If you want an analogy with minicamp, think of an academic summer school. At the most important level, I think the only thing that really separates minicamp (or an academic summer school) from Christian camps is that the things they teach at minicamp (and summer schools) are mostly correct.

I go to summer schools to learn from people who have thought about t... (read more)

marc10

This isn't hugely relevant to the post, but LessWrong doesn't really provide a means for a time-sensitive link dump, and it seems a shame to miss the opportunity to promote an excellent site because of a slight lack of functionality.

For any cricket fans who have been enjoying the Ashes, here is a very readable description of Bayesian statistics applied to cricket batting averages.

0NancyLebovitz
That seems like a good thing to post in Discussion. Also, to the extent that it's about the math more than the particular matches, it isn't all that time sensitive.
marc80

Although I didn't actually comment, I based my choice on the fact that most people only seem to be able to cope with two or three recursions before they get bored and pick an option. The evidence for this comes from the game where you have to pick a number between 0 and 100 that is 2/3 of the average guess. I seem to recall that the average guess is about 30, way off the true limit of 0.

3JenniferRM
The true limit would be 0 if everyone were rational and aware of the rationality of everyone else, but rational people in the real world should be taking into account... So what you should do, based on that, is try to figure out how many iterations "most people" will do, and then estimate the smaller percentage of "rank one pragmatic rationalists" who will realize this and aim for 2/3 of "most people", and so on until you have accounted for 100% of the population. The trick is that knowing some people aren't logical means a logical strategy (one that will actually win) requires population modeling rather than game theory.

Hearing your average of 30 makes me hypothesize that the distribution of people looks something like this:

* 0.1% guess 100: Blind number repeaters
* 4.9% guess 66: Blind multipliers
* 17% guess 44: Beating the blind!
* 28% guess 30: "most people"
* 20% guess 20: Read a study or use a "recurse three times" heuristic
* 15% guess 13: Contrarian rationalists
* 10% guess 9: Meta-contrarian rationalists
* 4.9% guess 6: Economists (I kid!)
* 0.1% guess 0: Obstinate game theorists

Suppose you run the experiment again (say, on a mixture of people from LW who have read this, combined with others), where you expect that a lot of people might have actually seen this hypothesis but a lot of people will naively play normally. I think the trick might be to figure out what percentage of LWers you're dealing with and then figure out what to do based on that.

I'm tempted (because it would be amusing) to estimate the percentage of LWers relative to naive players, and then model the LWers as people who execute some variant of timeless decision theory in the presence of the baseline people. If I understand timeless decision theory correctly, the trick would be for everyone to independently derive the number all the LWers would have to pick in order for us all to tie at winning, given the presence of baselines. It seems kind of far fetched, but it would be ticklesome if it ever happened :-D

OK, now I'm curious! I think I might try thi
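
A quick check of the arithmetic implied by the hypothesized distribution above (a minimal Python sketch; the fractions and guesses are JenniferRM's hypothetical numbers, not measured data):

```python
# Hypothesized (fraction_of_population, guess) pairs from the comment above.
population = [
    (0.001, 100), (0.049, 66), (0.17, 44),
    (0.28, 30), (0.20, 20), (0.15, 13),
    (0.10, 9), (0.049, 6), (0.001, 0),
]

mean_guess = sum(frac * guess for frac, guess in population)
winning_guess = (2 / 3) * mean_guess

print(f"mean guess:    {mean_guess:.1f}")     # ~26.4, close to the reported ~30
print(f"winning guess: {winning_guess:.1f}")  # ~17.6
```

Under these hypothetical numbers the winning entry would sit near 18, between the "recurse three times" tier and the contrarians.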
marc00

I'd be interested. So far my schedule has prevented me from attending most of the London meetups, even though I live there, so I can't guarantee anything.

marc00

I think you're probably correct in your presumptions. I find it an interesting idea and would certainly follow any further discussion.

marc00

I don't think you'd have much success mastering non-verbal communication through Skype.

marc50

I think it may have something to do with limiting violence.

I'm trying to remember the reference (it might be Hanson, or possibly the book The Red Queen - if I remember I'll post it), but the vast majority of violence, at least in primitive societies, is over access to women. Obviously monogamy means that the largest number of males get access to a female, thereby reducing losses from violent competition over females. I think this would certainly explain why rich societies tend to be monogamous - less destructive waste.

Additionally I can imagine societie... (read more)

marc40

This might be of interest to people here; it's an example of a genuine confusion over probability that came up in a friend's medical research today. It's not particularly complicated, but I guess it's nice to link these things to reality.

My friend is a medical doctor and, as part of a PhD, he is testing people's sense of smell. He asked if I would take part in a preliminary experiment to help him get to grips with the experimental details.

At the start of the experiment, he places 20 compounds in front of you, 10 of which are type A and 10 of which are type B... (read more)

0Morendil
This sounds like a great applied exercise for Chapter 3 on elementary sampling theory. ;)
5simplicio
Probability that both compounds are A = P(1st is A) × P(2nd is A) = (1/2)(9/19) ≈ 0.24
Probability that both are B ≈ 0.24
Probability that both are the same ≈ 0.47
Probability that they are different ≈ 0.53
Conclusion: Always predict they are different.
7Alicorn
Guess they're different every time. There are more pairs of different compounds than pairs of the same compound in the selection group. (For any given compound, there are 9 matches and 10 non-matches.)
2JoshuaZ
More detail on the protocol would be helpful. For example, do you get to reuse the same bottle within the set? If so, I can do the following for a 20-trial set: pick bottle 1, then run through the other 19 bottles and guess for each that it is different from bottle 1. I'll be correct in 10 out of 19 trials. This method generalizes in a fairly obvious fashion, although if one is going to do n trials it isn't clear to me whether this is actually the optimal procedure for maximizing how often you are correct. I suspect that one can do better, but it isn't clear to me how.
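
A minimal Monte Carlo check of simplicio's and Alicorn's figures (a sketch assuming each trial presents a fresh pair drawn from the 20 bottles without replacement):

```python
import random

TRIALS = 100_000
bottles = ["A"] * 10 + ["B"] * 10  # 10 compounds of each type

same = sum(
    1 for _ in range(TRIALS)
    if len(set(random.sample(bottles, 2))) == 1  # pair drawn without replacement
)

print(f"P(same pair)      ~ {same / TRIALS:.3f}")      # 9/19  ~ 0.474
print(f"P(different pair) ~ {1 - same / TRIALS:.3f}")  # 10/19 ~ 0.526
```

So "always guess different" wins about 52.6% of the time, matching the closed-form numbers above.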
marc00

I'm interested, definitely online, possibly IRL. I'm in London.

marc00

I'm going to the H+ event but I'm also going to the dinner, so I'm not sure how that will fit in with the pub. If I can make it, I will.

I'll also come to the 6/6 meetup.

0dw2
We plan to start taking food orders in the Little Italy restaurant at 6pm.
marc10

Nope. You've misunderstood counter-signalling. Alicorn wrote a great post about it.

1pwno
I've read it... and I disagree with it.
marc30

I agree. But that doesn't stop people confusing high-status behaviours with counter-signalling (as with standing up straight), and that makes making these lists difficult.

-1pwno
Normally when someone does something high status and people's reaction is "who does this person think he is?", the person has signaled lower status somehow via other factors or past behaviors. So this "counter-signaling" is really people acting at the status level others consider appropriate. For example, blowing your nose in a job interview is a high status move, but displays an inappropriate status level. The fact you're interviewing for a job is evidence your status is lower than the interviewer's - stronger evidence than your high status move.
marc10

This wasn't really meant as the thrust of the comment. I was trying to raise awareness of the difficulty of creating an absolute list of high-status behaviours when people can counter-signal. It means that there are always exceptions.

But since you replied to this aspect:

I think I now understand. Are you using "standing up straight" in an extremely literal way? If you mean that standing to attention - in an uncomfortable military style - is low status, then I would agree. I don't think those models prove anything except that, within the bounds of what normal people would call standing up straight, they pretty much do.

3pwno
You can only "counter-signal" when you already have high status established, regardless by which means. If you're starting off with no pre-established status, then there exists a list of absolute high status behaviors, i.e. behaviors that are evidence of your high status.
marc50

The problem with trying to define a list of high status actions is that they are context dependent.

Counter-signalling means that, in a particular context, it could be higher status to perform in a manner that, in any other context, would appear low status.

Under most general circumstances, though, good posture is high status (because the assumption is that such people just stand like that - not that they are standing like that to make an impression). In general, people don't think as carefully as you about motivations. You are over-iterating your thinking beyond what an average person would ever consider. Go out and look at people on the street and see how the high and low status people stand.

0pwno
Just look at male models: they never stand straight. People here incorrectly assume the alternative to standing up straight is slouching.
marc20

I'm sure I can sort out a room at UCL. I'll find out whether it would be free.

UCL is particularly convenient for transport links, since Euston and King's Cross are a <10 min walk away and Paddington is a short tube ride away.

There are some nice little restaurants and pubs around for food/drink.

marc30

I think it's important not to conflate two separate issues.

The term 'science' is used to denote both the scientific method and the social structure that performs science. It's critical to separate these in one's mind.

What you call "idealistic science" is the scientific method; what you call "social network" science is essentially a human construct aimed at getting science done. I think this is basically what you said.

The key point, and where I seem to disagree with you, is that these views are not mutually exclusive. I see 'social n... (read more)

0billswift
I have realized I worded this rather poorly; that was one of the reasons for getting it out for feedback. All science is social - the idealistic and the signaling - the difference is whether the search for knowledge (the idealistic view) is primary or whether the signaling or social issues are primary. It is far too easy to fool yourself; the feedback from other researchers is really necessary for science to advance. The problem is that too many now seem to feel excessive social pressures to conformity, at least in part due to the institutional/academic/bureaucratic control over science, especially its funding.
marc50

I'm about to start writing up my doctoral thesis in experimental quantum computing.

If people are interested I might be able to write a few posts introducing quantum computing/quantum algorithms and many worlds over the next couple of months. I'm by no means an expert in the theory side, but I'll try to chat about it with people who are.

From a personal perspective it might help me to start the words flowing.

1Yorick_Newsome
At first I wanted to say, "Please do, that would be awesome!", but then I realized it may not be within the domain of 'refining the art of rationality'. Anyone have any rationalizations so that we could talk about quantum computing at Less Wrong? There have been posts on the singularity, after all.
marc30

I hadn't realised that you were taking the karma ratings as indicative of agreement. I didn't vote it down before because I have tended only to use my downvote on stupid or thoughtless comments - not valid comments that disagree with what I think.

Once it became clear that you thought the votes weren't just appreciating effort but were signalling agreement, it would have been dishonest not to vote it down.

0SilasBarta
I don't think voting down indicates disagreement, nor do I believe people should use mere disagreement as a reason to vote down. My point was that you can artificially increase the merit of your point by voting down my summary so as to make it look less appreciated.
marc30

I don't agree with your summary.

By your own admission you haven't watched the entire talk. That might make it difficult to provide a full review.

By reducing what Deutsch said to the conjunction fallacy, you missed the different emphasis that both Vladimir and I found interesting. If the people who voted up your comment didn't watch the talk (which seems plausible given the negative nature of the review), then they wouldn't appreciate the difference between what Deutsch says and what you say. Therefore they aren't agreeing with your summary; they're simply appreciating your effort.

-1SilasBarta
I summarized what was important to LW readers. I skipped through the parts of the video that most LWers would have found uninteresting (people used to posit theories with unnecessary details called "myths"? who knew?) so I could get to Deutsch's new explanation of explanation which amounts to "unnecessary details are bad" (which are equivalent to "easy-to-vary" aspects). Yes, you may have found it interesting. It still would have been nice to know the basic form of Deutsch's point before blowing ~15 minutes listening to boring stuff just to get to something that can be restated in a few sentences. (Modding my appreciated summary down sure helps your argument though.) I welcome anyone else to blow 20 minutes of their life to confirm my summary.
marc00

I think that 'whilst preserving the predictions' was assumed. Otherwise what's the constraint that's making things hard?

Perhaps it's clearer when written more explicitly though.

2SilasBarta
It is assumed; it's just not clear to someone who's told that that's Deutsch's idea. And I certainly wasn't alone in not realizing what "hard to vary" means here; Vladimir_Nesov already had a +5 comment with the term that attempted to summarize the lecture, but my comment with the fuller explanation still got modded up to 4 and some thanks. This probably wouldn't have happened if Nesov's summary, using just "hard to vary", were already clear enough.
marc50

I agree that there's nothing new to people who have been on Overcoming Bias and Less Wrong for a few years (hence the cautionary statement at the start of the post) but I do think it's important that we don't forget that there are new people arriving all the time.

Not everyone would consider "the conjunction fallacy and how each detail makes your explanation less plausible" a standard point. We shouldn't make this site inaccessible to those people. Credit where it's due - Deutsch does a nice job of presenting this in a way that most people can understand.

5SilasBarta
I don't think his way of explaining it is any easier for a newcomer. It doesn't make sense unless and until you already have a firm grasp of the basis for Occam's razor. And if you know how to justify Occam's razor, you already understand why adding details penalizes the explanation's probability. Furthermore, his idea can't be summarized as "good explanations are hard to vary". It's more like, "good explanations are hard to vary while preserving their predictions". I do appreciate that you added a summary.
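
For newcomers, the penalty being discussed is just the product rule:

$$P(A \wedge B) = P(A)\,P(B \mid A) \le P(A)$$

For example, if P(A) = 0.3 and P(B|A) = 0.5, then P(A ∧ B) = 0.15: adding the detail B can never make the explanation more probable, and usually makes it less so.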
marc10

That's a fair point, but I've never actually seen it mentioned explicitly. Maybe there should be a 'tips on writing posts' post.

3SilasBarta
Actually, you shouldn't make any top-level post whose sole contribution is a link unless you summarize it (in the case of an argument) or explain its significance with examples (in the case of an information source like an FAQ). It's just that this applies especially for audio and video, which have additional time-commitment and searchability constraints. I'll make a note in anything I can edit about this guideline. I really think it's common sense though: do your homework, and respect others' patience.
marc20

I guess that quantum computers could halve the doubling time, as compared to a classical computer, because doubling the number of qubits squares the size of the available state space. This could give the factor of two in the exponent of Moore's law.

Quantum computing performance currently isn't doubling, but it isn't jammed either. Decoherence is no longer considered a fundamental limit; it's more a practical inconvenience. The change that brought this about was the invention of quantum error-correcting codes.

However experimental physicists are still searching for the ideal practi... (read more)
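
A toy illustration of the state-space bookkeeping behind the scaling guess above (a sketch; it says nothing about achievable performance):

```python
# An n-qubit register is described by 2**n complex amplitudes,
# so doubling the qubit count squares the state-space dimension.
for n in (1, 2, 4, 8, 16):
    print(f"{n:>2} qubits -> state-space dimension 2**{n} = {2 ** n}")
```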

0timtyler
I looked at http://en.wikipedia.org/wiki/Quantum_error_correction - the bit about the threshold theorem looks interesting. However, I would be more impressed by a working implementation ;-)
marc110

Were this true it would also seem to fit with Robin's theories on art as signalling. If you pick something bad to defend then the signal is stronger.

If you want to signal loyalty, for example, it's not much good picking Shakespeare; obviously everyone likes Shakespeare. If you pick an obscure anime cartoon, then you can really signal your unreasonable devotion in the face of public pressure.

In a complete about-turn, though, a situation with empirical data might be sports fans. I'm fairly certain that as performances get worse, generally speaking, the number of fans (at least those who attend games) drops. This would seem to imply the opposite.

2NancyLebovitz
I don't think really liking Shakespeare is considered normal.
0[anonymous]
Yes, sports is the exception that explains the rule. The rule is that fandom requires some type of exclusivity to inspire your devotion. It's about identity. Star Trek fans really like Star Trek, but I suspect, even more, they like the fact -- when they're convening -- that they have something special in common that they all recognize. In some way, I'm too normal to go to a Star Trek convention -- don't worry, you won't see me there. But at an Indiana Jones convention, if you could muster the enthusiasm to go, you might see anyone. Sports is only a half exception. You have devoted fans and "fair-weather" fans, depending on whether they identify with "their team" no matter what or only if it's doing well. Perhaps fandom is a function of having appealing qualities/a message and being able to create identification. I think there are certain identification "holes" here on Less Wrong, so if someone with authority (like Robin) started filling those holes there would be sub-fandoms.
marc00

I agree that the quality of the argument is an important first screening process in accepting something into the rationality canon. In addition, truly understanding the argument allows us to generalise it or apply it to novel situations. This is how we progress our knowledge.

But the most convincing argument means nothing if we apply it to reality and it doesn't map the territory. So I don't understand why I'd be crazy to think well of Argument screens off authority if reading it makes me demonstrably more rational. Could you point me towards the earlier comments you allude to?

marc00

Can you clarify?

Exactly which material are you referring to? What basis would you suggest that you're assessing it on?

2Paul Crowley
I mean the bulk of Eliezer's 300-odd OB/LW posts. To use an example I've used before, you'd be crazy to say that you think well of Argument screens off authority because you have empirically demonstrated that reading it makes you more rational. I find its argument persuasive. Obviously one must be wary of the many ways you can find something persuasive that are not related to merit, but to carry away from the study of cognitive bias the message that one should not be persuaded by any argument ever would be to give up on thinking altogether.
marc50

If you don't attempt to do something while you develop your rationality, then you're not constraining yourself to be scored on the effectiveness of your beliefs. And we know that being scored makes you less likely to signal and more likely to predict accurately.

7Paul Crowley
* I think that for the most part, where rationality is easily assessed it is already well understood; it is in extending the art to hard-to-assess areas that the material here is most valuable.
* For all I know, all of Eliezer's original work apart from his essays on rationality could be worthless.

Both of these things mean that we're assessing this material on a different basis than demonstrated efficacy.
marc20

I agree for the most part with Tom. Here's a quote from an article that I drafted last night but couldn't post due to my karma:

"I read comments fairly regularly that certainly imply that people are less successful or less fulfilled than they might be (I don't want to directly link to any but I'm sure you can find them on any thread where people start to talk about their personal situation). Where are the posts that give people rational ways to improve their lives? It's not that this is particularly difficult - there's a huge psychological literature o... (read more)

marc10

I think that you can legitimately worry about both.

Fast growth is something to strive for but I think it will require that our best communicators are out there. Are you concerned that rationality teachers without secret lives won't be inspiring enough to convert people or that they'll get things wrong and head into death spirals?

From a personal perspective I don't have that much interest in being a rationality teacher. I want to use rationality as a tool to make the greatest success of my life. But I also find it fascinating and, in an ide... (read more)

marc10

I guess the failure mode that you're concerned with is a slow dilution because errors creep in with each successive generation and there's no external correction.

I think that the way we currently prevent this in our scientific efforts is to have both a research and a teaching community. The research community is structured to maximise the chances of weeding out incorrect ideas. This community then trains the teachers.

The benefits of this are that you get the people who are best at communicating doing the teaching and the people who are the best at research... (read more)

1Eliezer Yudkowsky
Hm. Arguably I should only be worried about fast dilution rather than slow dilution. But I'm also worried that the community grows slower if it's inward-looking, and hope for faster growth if it's involved with the outside world. Entirely possible. But I'm not sure I have so much faith in the system you describe, either. The most powerful textbooks and papers from which I get my oomph are usually not by people who are solely teachers - though I haven't been on the lookout for exceptions, and I should be.
marc00

Is it possible that humans, with their limited simulation abilities, do not have the mental computational resources to simulate an irrational person's more effective beliefs?

This would mean that the 'irrational' course of action would be the more effective.

0pwno
Even if they can't model their behaviors like they do for normal people, that doesn't mean there isn't some systematic way of rationally predicting their behaviors.
marc20

I definitely enjoyed the meetup.

In defence of my fairly poor estimate, I was unconvinced by the assumption that all the maize in Mexico was eaten by Mexicans. This seemed an uncontrolled assumption, but I felt that I could put reasonable bounds on all the assumptions in the land-area estimate (if you're asking, yes, the final answer did fall within my 90-10 bounds :) ).

Hopefully with a bit more notice we can get a few extra people next time, but I think it was a great idea to get the ball rolling. Thanks to Tomasz for organising.

0gwern
Why does it seem so bad? Food products are the most common products countries defend with tariffs and similar policies, and if Mexico is not self-sufficient then even more reason for all consumption to be local.
marc40

What about cases where any rational course of action still leaves you on the losing side?

Although this may seem to be impossible according to your definition of rationality, I believe it's possible to construct such a scenario because of the fundamental limitations of a human brain's ability to simulate.

In previous posts you've said that, at worst, the rationalist can simply simulate the 'irrational' behaviour that is currently the winning strategy. I would contend that humans can't simulate effectively enough for this to be an option. After all we know th... (read more)

marc20

I'm in London.

marc130

Have you really never seen this before? I actually find that I myself struggle with it. When you define yourself as the plucky outsider, it's difficult and almost unsatisfying when you conclusively win the argument. It ruins your self-identity because you're now just a mainstream thinker.

I've heard of similar stories when people are cured of various terminal diseases. The disease becomes so central to their definition of self that to be cured makes them feel slightly lost.

4Eliezer Yudkowsky
I haven't seen it before. Maybe if you counted Stephen Jay Gould, but I expect he was lying more than crazy. I guess most of the people I know are, shall we say, secure enough in their identity as iconoclasts that they can enjoy winning any particular argument without fear. Hadn't heard about the case of the terminal diseases, either.