All of Vulture's Comments + Replies

I mean, you could run correlations with Openness to experience or with age, right? I guess there's probably too small of a sample size to do a lot of interesting analysis with it, but I'm sure one could do some.

4Vaniver
There's a first order effect--trans people having higher openness--and a second order effect--high openness communities having more trans people--and I'm more confident of the second than I am of the first. When it comes to age, a quick calculation of average by group (throwing out three age outliers):

F       26.8
MtF     23.5
M       28.7
FtM     23.1
Other   26.1
Overall 28.1

Woah, awesome! I would love to see something like this for the whole collection.

Twist them the way you're twisted.

Or rather, don't, unless you think they have so much agency that this change in temperament will improve their utility despite massively reducing their level of satisfaction.

Suppose I think, after doing my accounts, that I have a large balance at the bank. And suppose you want to find out whether this belief of mine is "wishful thinking." You can never come to any conclusion by examining my psychological condition. Your only chance of finding out is to sit down and work through the sum yourself.

-- C. S. Lewis

1lmm
This seems obviously false. Am I missing something?
1Jiro
Something can be wishful thinking and true at the same time. Doing the sum wouldn't prove that it's not wishful thinking. Of course having the sum be correct is a necessary condition for non-wishful thinking, but it does not determine the existence of non-wishful-thinking all by itself.

The market isn't particularly efficient. For example, if you bought "No" on all the presidential candidates to win, it would cost $16.16, but would be worth at least $17 for a 5% gain. Of course, after paying the 10% fee on profits and 5% withdrawal fee you would be left with a loss, which is why this opportunity still exists.

Does this affect the accuracy of the market? Serious question; I do not understand the nitty-gritty economics very well.

6Vaniver
Yes. Think of the transaction costs as an upper bound on error. Any wrong beliefs that are off by more than the transaction costs are "free money" to correct, and thus people will spend time and effort looking for and correcting those errors, but any wrong beliefs that are off by less than the transaction costs cost money to correct, and so won't be corrected. For example, suppose there were a 6% difference between reality (here, the union of mutually exclusive and exhaustive options) and the market. Then the 10% profit fee would drop that down to 5.4%, and then the 5% withdrawal fee would drop that down to 0.4%, and there would be an opportunity to make money--until the price had shifted so the difference between reality and the market was 5%. (If you already have money in the system, and thus the 5% withdrawal fee is a sunk cost, any mispricing is worth correcting. Putting money into the system is what the withdrawal fee disincentivizes, and so only mispricings more than 5% will attract new money.) [Edit]And, of course, the fact that the contracts are not instant adds further costs. Yes, you could buy up the contracts and get a certain 0.4% gain (in the previous example), but if that's a certain 0.4% gain six months from now that may actually represent a negative real return, and thus the actual error upper bound is even higher.
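The fee arithmetic above is easy to check directly. A minimal sketch, assuming a 10% fee on profits and a 5% fee on the full withdrawn balance (the exact fee structure of a real market may differ):

```python
def net_return(mispricing, profit_fee=0.10, withdrawal_fee=0.05):
    """Net return on $1 staked, given a gross mispricing edge.

    Assumes the profit fee applies to profits only and the
    withdrawal fee applies to the entire withdrawn balance.
    """
    after_profit_fee = 1.0 + mispricing * (1 - profit_fee)
    return after_profit_fee * (1 - withdrawal_fee) - 1.0

# The $16.16 "No" basket paying out at least $17 (a ~5.2% gross edge):
print(net_return(17 / 16.16 - 1))  # negative: the fees eat the arbitrage

# The hypothetical 6% mispricing: the back-of-envelope 5.4% - 5% = 0.4%
# slightly overstates it, since the withdrawal fee compounds on the
# whole balance rather than subtracting five percentage points:
print(net_return(0.06))  # ~0.0013, i.e. about 0.13%
```

Either way the qualitative conclusion stands: only mispricings larger than the combined fees attract new money.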
3Lumifer
It directly affects your error estimates. If by "accuracy" you mean "forecasting capability" then inefficiency is a symptom of underlying problems (e.g. too few market participants) which do affect accuracy.

Just as a little bit of a counterpoint, I loved the 2006-2010 ebook and was never particularly bothered by the length. I read the whole thing at least twice through, I think, and have occasionally used it to look up posts and so on. The format just worked really well for me. This may be because I am an unusually fast reader, or because I was young and had nothing else to do. But it certainly isn't totally useless :P

7bramflakes
Oh, I didn't mean to imply I didn't like it! It was a welcome companion for hundreds of long school bus journeys.

Oh, I see, haha. Yes, that makes more sense, and your point is well-taken.

Why would anyone bother to send in false data about their finger-length ratios?

5gedymin
I meant that as a comment to this: It's easy to lie when answering questions about your personality on e.g. a dating site. It's harder, more expensive, and sometimes impossible to lie via signaling, such as via appearance. So, even though information obtained by asking questions is likely to be much richer than information obtained from appearances, it is also less likely to be truthful.

Working from memory, I believe that when asked about AI in the story, Eliezer said "they say a crackpot is someone who won't change his mind and won't change the subject -- I endeavor to at least change the subject." Obviously this is non-binding, but it still seems odd to me that he would go ahead and do the whole thing that he did with the mirror.

This makes some sense, but if Quirrell could bamboozle the map, surely he wouldn't do so in such a way as to reveal vitally important and damaging secrets to his enemies.

0Astazha
I can't figure out what you mean by "reveal vitally important and damaging secrets to his enemies." Would you expand on that please?

I think the word Gunnar was going for was "Yudkowskyesquely", unfortunately.

In my opinion the gamma function is by far the stupidest. IME, the off-by-one literally never makes equations clearer; it only obfuscates the relationship between continuous and discrete things (etc.) by adding in an annoying extra step that trips up your intuition. Seems like simple coordination failure.
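For concreteness, the off-by-one in question is Γ(n) = (n−1)!, so the shift appears every time you move between the continuous function and the discrete factorial:

```python
import math

# Gamma's off-by-one: math.gamma(n) equals factorial(n - 1), not factorial(n).
for n in range(1, 8):
    assert math.gamma(n) == math.factorial(n - 1)

print(math.gamma(5))  # 24.0 -- i.e. 4!, not 5! = 120
```

With Π(n) = Γ(n+1) = n! the shift would vanish, which is presumably the cleaner convention the comment has in mind.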

If that effect came as a surprise, it couldn't have been the reason for the split.

The wiki of a million lies

As clever as this phrase is, it is tragically ambiguous. I'm guessing 65% chance Wikipedia, 30% RationalWiki, 3% our local wiki, 2% other. How did I do?

3Alsadius
None of the other wikis you list are big enough to have more than maybe 75,000 lies.
6Error
I meant Wikipedia. I've actually never heard the phrase applied to any other wiki. It's certainly not original to me.

Is it really a "bad question"? Shouldn't a good calibrator be able to account for model error?

1devas
Depends on whether you consider "being able to comprehensively understand questions that may be misleading" to be a subset of calibration skills.

Yayy! I was having a shitty day, and seeing these results posted lifted my spirits. Thank you for that! Below are my assorted thoughts:

I'm a little disappointed that the correlation between height and P(supernatural)-and-similar didn't hold up this year, because it was really fun trying to come up with explanations for that that weren't prima facie moronic. Maybe that should have been a sign it wasn't a real thing.

The digit ratio thing is indeed delicious. I love that stuff. I'm surprised there wasn't a correlation to sexual orientation, though, since I se... (read more)

1emr
On the computer game question: Isn't there an implicit "X is true and X will be marked correct by the rater"? You'd hope these two are clearly aligned, but if you've taken many real-world quizzes, you'll recognize the difference.
8[anonymous]
The correct answer is Tetris. The question should have been "What is the best-selling personal computer game of all time?" Mobile phones are technically computers too. I'm not sure how much difference that would have made.

I remember answering the computer games question and at first feeling like I knew the answer. Then I realized the feeling I was having was that I had a better shot at the question than the average person that I knew, not that I knew the answer with high confidence. Once I mentally counted up all the games that I thought might be it, then considered all the games I probably hadn't even thought of (of which Minecraft was one), I realized I had no idea what the right answer was and put something like 5% confidence in The Sims 3 (which at least is a top ten game). But the point is that I think I almost didn't catch my mistake before it was too late, and this kind of error may be common.

I was confident in my incorrect computer game answer because I had recently read the Wikipedia page List of best-selling video games, remembered the answer, and unthinkingly assumed that "video games" was the same as "computer games".

0devas
I think the computer games question has to do with tribal identity: people who love a particularly well-known game might be more inclined to list it as being the best seller ever and put down higher confidence because they love it so much. Kind of like how owners of Playstations and Xboxes will debate the superiority of their technical specs regardless of whether they're superior or not.

In the Bayesian view, you can never really make absolute positive statements about truth anyway. Without a simplicity prior you would need some other kind of distribution. Even for computable theories, I don't think you can ever have a uniform distribution over possible explanations (math people, feel free to correct me on this if I'm wrong!); you could have some kind of perverse non-uniform but non-simplicity-based distribution, I suppose, but I would bet some money that it would perform very badly.

[This comment is no longer endorsed by its author]
7Vulture
Damn, I didn't intend to hit that Retract button. Stupid mobile. In case it wasn't clear, I do stand by this comment aside from the corrections offered by JoshuaZ.
-2Metus
You can act "as if" by just using the likelihood ratios and not operating with prior and posterior probabilities.
5JoshuaZ
Consistency forces you to have a simplicity based prior if you have a counteable set of non-overlapping hypotheses described using some finite collection of symbols (and some other minor conditions to ensure non-pathology). See prior discussion here. See also here for related issues.
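A quick numerical sketch of the point about uniform distributions (the encoding of hypotheses as binary strings and the particular 2^(-2n) weighting here are illustrative choices, not anything canonical): with 2^n hypotheses of description length n, any fixed per-hypothesis weight accumulates without bound, while a weight that shrinks fast enough with length converges.

```python
# Total prior mass assigned to all hypotheses of description length <= L,
# when there are 2^n binary-string hypotheses of each length n.
def mass_up_to(L, weight):
    return sum((2 ** n) * weight(n) for n in range(1, L + 1))

simplicity = lambda n: 2.0 ** (-2 * n)  # shorter descriptions weighted more
uniform = lambda n: 1e-9                # same small weight for every hypothesis

print(mass_up_to(50, simplicity))  # approaches 1.0: normalizable
print(mass_up_to(50, uniform))     # already in the millions: not normalizable
```

However small the fixed weight, the uniform sum eventually exceeds 1, which is why no uniform distribution over a countably infinite hypothesis space exists.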

I haven't looked into it much myself, but a couple of people have mentioned RibbonFarm as being something like that.

In terms of Death Note, I've read the first several volumes and can vouch that it's a fun, "cerebral" mystery/thriller, especially if you like people being ludicrously competent at each other, having conversations with multiple levels of hidden meaning, etc. Can't say there's anything super rational about it, but the aesthetic is certainly there.

3Desrtopa
Actually I for one gave up Death Note in frustration very early on because I couldn't help focusing on how much of the real inferential work was being done by the authors feeding the correct answers to the characters. Like when L concludes that Kira must know the victim's real name to kill him... there were so many reasons that just didn't work. Kira's apparent modus operandi was to kill criminals, there was no particular reason to suppose he would respond to a challenge to kill anyone else, so the fact that he didn't was already weak evidence regarding whether he could at all, let alone what the restrictions might be. Whether Kira knew his real name or not was just one variable switched between him and Lind L. Taylor. L could just as easily have been immune because he eats too many sweets. While smart, knowledgeable people can often extract a greater yield of inference from a limited amount of data than others, I find that far too many writers take this idea and run with it while forgetting that intelligence very often means recognizing how much you can't get out of a limited amount of data.

Is it possible that some of the reported "rationality content" was more like genre-savviness which is more visible to people who are very familiar with the genre in question?

3Richard_Kennaway
I think it was more a case of people looking at the works with the hammer of rationality in their hand and seeing lots of nails for the characters to knock in. For example, The Melancholy of Haruhi Suzumiya sets up a problem (Unehuv vf Tbq naq perngrq gur jbeyq 3 lrnef ntb ohg qbrfa'g ernyvfr vg, naq vs fur rire qbrf gura fur zvtug haperngr vg whfg nf rnfvyl), but I found that setup fading into the background as the series of DVDs that I watched went on. By the fourth in the series (the murder mystery on the island isolated by storms), it was completely absent. With Fate/Stay Night, one problem is that I was looking at ripped videos on Youtube, while the original material is a "visual novel" with branching paths, so it's possible (but unlikely) that the people who put up the videos missed all the rationality-relevant bits. I've not tried Death Note, but I suspect I'd find the same dynamic as in Haruhi Suzumiya. A hard problem is set up (how does a detective track down someone who can remotely kill anyone in the world just by knowing their name?), which makes it possible to read it as a rationality story, but unless the characters are actually being conspicuously rational beyond the usual standards of fiction, that won't be enough. I'm also not part of the anime/manga community: I watched these works without any context beyond the mentions on LessWrong and a general awareness of what anime and manga are. It's weird how the girls all look like cosplay characters. :)

I really like the Inuit one.

Thank god for the use-mention distinction :-)

I occasionally remember to keep pencil + paper by my bed for this reason, so that I can write such things down in the dark without having to get up or turn on a light. Even if the results aren't legible in the usual sense, I've almost always been able to remember what they were about in the morning.

Eliezer seems to be really really bad at acquiring or maintaining status. I don't know how aware of this fault he is, since part of the problem is that he consistently communicates as if he's super high status.

Eliezer is kind of a massive dork who also has an unabashedly high opinion of himself and his ideas. So people see him as a low-status person acting as if he is high-status, which is a pattern that for whatever reason inspires hatred in people. LessWrong people don't feel this way, because to us he is a high-status person acting as if he is high-status, which is perfectly fine.

Also, one thing he does that I think works against him is how defensive he gets when facing criticism. On Reddit, he occasionally will write long rants about how he is an unfair tar... (read more)

Musk thinks there's an issue in the 5-7 year timeframe

Hopefully his enthusiasm (financially) isn't too dampened when that fails to be vindicated.

I, for one, would look forward more to being Evassarated.

4dxu
All right, thanks. So, I gave both articles a read-through, and I think that as described, the system implemented won't necessarily negate the strategy (though it may somewhat reduce said strategy's effectiveness). Really, it all depends on how awesome Gondolinian's quote is; if it's awesome enough to get a rating that's 100% positive, then the display order will be organized by confidence level, which in practice just means a greater number of votes most of the time (more votes → less uncertainty), which in turn means it'll need to be posted earlier, which brings us back to the original situation, blah blah blah etc. (A single downvote, however, would be sufficient to screw up the entire affair, so there's that.) I guess that's why you originally said it would only reduce the strategy's effectiveness, not eliminate it entirely. That's awesome. My metaphorical hat is off to Gondolinian for figuring out a way to game the system--and crucially, take the second step: countering akrasia and actually doing it. Instrumental rationality at its finest.

Thank you for articulating this so well :-)

I think the survey is pushed by SJW trolls

What does this even mean?

I think that we use "Best" (which is a complicated thing other than "absolute points") rather than "Top" (absolute points) precisely to reduce the effectiveness of that strategy.

3dxu
That's interesting. What criterion/criteria does "Best" use, then? And on a different but related note: does it really negate the strategy? I note that, despite using the "Best" setting, this page still tends to display higher-karma comments near the top; furthermore, most of those high-karma comments seem to have been posted pretty early in the month. That suggests to me that Gondolinian's strategy may still have a shot.
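For what it's worth, the Reddit codebase (which LessWrong was built on) is generally reported to implement "Best" as the lower bound of the Wilson score interval on a comment's upvote proportion, so uncertainty from a small vote count drags a comment down. A sketch, treating the exact formula the site uses as an assumption:

```python
from math import sqrt

def best_score(ups, downs, z=1.96):
    """Lower bound of the ~95% Wilson score interval on the true upvote
    fraction. With few votes the interval is wide, so the bound is low."""
    n = ups + downs
    if n == 0:
        return 0.0
    p = ups / n
    return (p + z * z / (2 * n)
            - z * sqrt((p * (1 - p) + z * z / (4 * n)) / n)) / (1 + z * z / n)

print(best_score(5, 0))     # ~0.57: perfect record, but only five votes
print(best_score(100, 20))  # ~0.76: worse ratio, far less uncertainty
```

So under "Best", rank depends on vote count as well as vote ratio, which is consistent with early, heavily-voted comments still dominating the page even though "Top" isn't the sort order.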

For what it's worth, I perceived the article as more affectionate than offensive when I initially read it. This may have something to do with full piece vs. excerpts, so I'd recommend reading the full piece (which isn't that much longer) first if you care.

7Kaj_Sotala
I read just the excerpts, and I still thought that it came off as affectionate.

In addition to what gwern said, it's worth bearing in mind that Harper's is a very literary sort of magazine, and its typical style is thus somewhat less straightforward than most news.

If many people dismiss LW and MIRI and CFAR for similar reasons, then the only rational response is to identify how that "this is ridiculous" response can be prevented.

I agree with your overall point, but I think that "this is ridiculous" is not really the author's main objection to the LW-sphere; it's clearer in the context of the whole piece, but they're essentially setting up LW/MIRI/CFAR as typical of Silicon Valley culture(!), a collection of mad visionaries (in a good way) whose main problem is elitism; ethereum is then present... (read more)

2Punoxysm
You're right, the word "ridiculous" may not be correct. Maybe elitist, insular, and postpolitical (which the author clearly finds negative), but the article speaks better for itself than I can. Still, there are plenty of negative impressions (LW is a "site written for aliens") that could be dispelled.

I have to say I appreciated the first description of LessWrong as "confoundingly scholastic".

And here it is, as a pdf! (I finally thought of trying to log in as a subscriber)

7gwern
Thanks. I've excerpted it at http://lesswrong.com/r/discussion/lw/ldv/harpers_magazine_article_on_lwmiricfar_and/

I have it in hard copy, but all attempts so far to scan or photograph it have been foiled. I'm working on it, though; by far the best media piece on Less Wrong I've seen so far.

ETA - To give you an idea: the author personally attended a CFAR workshop and visited MIRI, and upon closer inspection one can make out part of the Map of Bay Area Memespace in one of the otherwise-trite collage illustrations.

Oh, I think we're using the phrase "political movement" in different senses. I meant something more like "group of people who define themselves as a group in terms of a relatively stable platform of shared political beliefs, which are sufficiently different from the political beliefs of any other group or movement". Other examples might be libertarianism, anarcho-primitivism, internet social justice, etc.

I guess this is a non-standard usage, so I'm open to recommendations for a better term.

0Lumifer
Yep, looks like we are using different terminology. The distinction between political philosophy and political movement that I drew is precisely the difference between staying in the ideas/information/talking/discussing realm and moving out into the realm of real-world power and power relationships. What matches your definition I'd probably call a line of political thought. Mencius Moldbug is a political philosopher. Tea Party is a political movement.

I think that at this point it would be fair to say that a movement has developed out of NRx political philosophy.

0Lumifer
Show me that movement in actual politics. Is any NRx-er running for office? Do they have an influential PAC? A think tank in Washington, some lobbyists, maybe?

A reasonable case could be made that this is how NRx came to be.

If this is where NRx came from, then I am strongly reminded of the story of the dog that evolved into a bacterium. An alternative LW-like community that evolved into an aggressive political movement? Either everyone involved was an advanced hyper-genius or something went terribly wrong somewhere along the way. That's not to say that something valuable did not result, but "mission drift" would be a very mild phrase.

0Lumifer
As far as I can see it evolved into mostly smart people writing dense texts about political philosophy. That's a bit different :-)

Motte-and-Bailey effect (when instead of happening inside a person's head, it happens to a movement - when different people from the same movement occupy the motte and the bailey (I think that individual and group motte-and-baileys are quite distinct))

This could just as easily be described, with the opposite connotation, as the movement containing some weakmans*, which makes me think that we need a better way of talking about this phenomenon. 'Palatability spread' or 'presentability spread'? But that isn't quite right. A hybrid term like 'mottemans' and 'baile... (read more)

0Sarunas
What I had in mind was a situation when "a person from outside" talks to a person who "occupies a bailey of the movement" (for the sake of simplicity let's call them "a movement", although it doesn't have to be a movement in the traditional sense). If the former notices that the position of the latter is weakly supported, then the latter appeals not to the motte position itself, but to the existence of high-status people who occupy the motte position, e.g. "our movement has a lot of academic researchers on our side" or something along those lines, even though the position of the said person doesn't necessarily resemble that of the "motte people" beyond a few aspects; therefore "a person from outside" should not criticize their movement. In other words, a criticism against a particular position is interpreted as a criticism against the whole movement and the "motte people", thus they invoke "a strongman" to deflect the criticism from themselves.

I think you made a very good point. From the inside, if an outsider criticizes a certain position of the movement, it looks as if they attacked a weakman of the movement, and since it feels like they attacked the movement itself, an insider of the movement feels that they should present a stronger case for the movement, because allowing an outsider to debate weakmen without having to debate stronger positions could give the said outsider and other observers an impression that these weakmen were what the movement was all about. However, from the said outsider's perspective it looks like they criticized a particular position of a movement, but then (due to solidarity or something similar) the movement's strongmen were fielded against them, and from the outsider's perspective it does look like the movement pulled a move very similar to a motte-and-bailey.

I think that replying to old comments should be encouraged. Because otherwise if everyone feels that they should reply as quickly as possible (or otherwise not reply

(That said, Richard Feynman is dead and therefore cannot sexually harass any of his current readers.)

A similar argument could be made that a pre-recorded lecture cannot sexually harass someone either (barring of course very creative uses of the video lecture format which we probably would have heard about by now :P ).

7fubarobfusco
From the MIT press release, it sounds as if the former professor emeritus¹ has been harassing online students through means other than pre-recorded videos.

¹ Would that be a professor demeritus?

I doubt they do. Why would they bother?

0ChristianKl
There's almost certainly a government working group for biosafety that thinks about issues like this. Asking a DNA synthesis company to check their orders against a handful of genes and report back when a customer tries to order one of those isn't complicated. The companies, in turn, prefer informal solutions.

Average article quality is almost certainly going down, but the main driving force is probably mass-creation of stub articles about villages in Eastern Europe, plant genera, etc. Of course, editors are probably spread more thinly even among important topics as well. A lot of people seem to place the blame for any and all of Wikipedia's problems on bureaucracy, but to a regular editor like me, such criticisms often seem foreign, like they're talking about a totally different website. True, there are a lot of formalities, but they're mostly invisible, and a reasonably ... (read more)

Everyone knows about your 8chan board, bro :P

When did you get this impression? I'm only asking because I'm given to believe that the situation on wikipedia with regards to experts and specialized subjects has improved substantially starting in about 2008 or so(?), at least in the humanities but possibly in other fields.

0[anonymous]
[deleted duplicate comment]
0satt
Wikipedia is more comprehensive now than in 2008, but I speculate that its average article quality might be lower, because of (1) competent editors being spread more thinly, and (2) the gradual entrenchment of a hierarchy of Wikipedia bureaucrats who compensate for a lack of expertise with pedantry and rules lawyering. (I may be being unfair here? I'm going by faint memories of articles I've read, and my mental stereotype of Wikipedia, which I haven't edited regularly in years.)
9IlyaShpitser
This was in fact prior to 2008 (my advisor asked me to change something in the Bayesian network article, and I got into a slight edit war with the resident bridge troll who knew a lot less than me, but had more time and whose first reflex was to just blindly undo any edits. These sorts of issues with Wikipedia are very well documented). ---------------------------------------- The horrible article on confounders is another good example. I brought it up before here, and got the "that's like, your opinion" kind of reply. At least they cite Tyler's paper with me now! Of course, this particular case might be more widespread than just Wikipedia, and might be a general confusion in statistics as a field. I went to a talk last week where someone just got this wrong in their talk (and presumably in their research). ---------------------------------------- I don't doubt that there are isolated communities within Wikipedia that generate good content. For example, I know there are Wikipedia articles for some areas of mathematics of shockingly high quality. My point is, when this happens it is a sort of happy cultural accident that is happening in spite of, not because of, the Wikipedia editing model. ---------------------------------------- There has been quite a bit of experimentation online to incentivize experts to talk and non-experts to shut up, recently. I think that's great!

Cultivation of tulpas

This already refers to a similar, but much dicier, technique.

This is a good point. I've gotten past my spiral around Eliezer and am working on crawling out of a similar whirlpool around Yvain, and I think that Eliezer's egotistical style, even if it is somewhat justified, plays a big part in sending people down that spiral around him. Seeing him being sort of punctured might be useful, even though I'm sure it's awful for him personally.
