Open Thread: July 2010, Part 2

6 Post author: Alicorn 09 July 2010 06:54AM

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.


July Part 1

Comments (770)

Comment author: Eneasz 16 July 2010 05:31:15AM 0 points [-]

This is a brief excerpt of a conversation I had (edited for brevity) where I laid out the basics of a generalized anti-supernaturalism principle. I had to share this because of a comment at the end that I found absolutely beautiful. It tickles all the logic circuits just right that it still makes me smile. It’s fractally brilliant, IMHO.

(italics are not-me)


So you believe there is a universe where 2 + 2 = 4 does not hold, or where the law of noncontradiction does not obtain? Ok, you are free to believe that. But if you are wrong, I am sure that you can see that there is an order of existence beyond nature and that therefore the supernatural exists.

If there were a universe where two and two things were not the same as four things, or a universe where something could both be something and NOT that thing, then THAT would be proof of the supernatural. That is basically what the definition of supernatural IS.

If you believe there can’t be a universe where 2 and 2 isn’t the same as 4, and you claim to believe in the supernatural, you are contradicting yourself.

can you give any explanation for why the true definition of supernatural is belief in logical contradiction?

Because anything less is simply naturalism that we don’t understand yet. That sort of god is indistinguishable from a sufficiently advanced alien life. Knowing enough about how reality works to manipulate it in ways that allow you to fly in metal transports, or communicate with someone on the other side of the planet nearly instantly, is not supernaturalism, it’s just applied naturalism. Knowing enough about reality to materialize a unicorn in a church or alter the gravitational constant in a localized area is not supernaturalism, it is just applied naturalism. Any god who is logically consistent can, with enough study, be emulated by man. He does not, in principle, have access to any aspect of reality that is beyond the reach of sufficiently advanced natural creatures.

Thus the only form of supernaturalism that isn’t reducible to applied naturalism is that of literally impossible contradiction. Which is what is generally implied by magic claims. Otherwise they wouldn’t be “magic”, just “technology”.

Do you see the irony in complaining about the logical contradiction of people who claim not to believe in the possibility of logical contradiction but also believe in the supernatural (ie the possibility of logical contradiction)?

Comment author: Morendil 31 July 2010 03:10:34PM 1 point [-]

I don't post things like this because I think they're right, I post them because I think they are interesting. The geometry of TV signals and box springs causing cancer on the left sides of people's bodies in Western countries...that's a clever bit of hypothesizing, right or wrong.

In this case, an organization I know nothing about (Vetenskap och Folkbildning from Sweden) says that Olle Johansson, one of the researchers who came up with the box spring hypothesis, is a quack. In fact, he was "Misleader of the year" in 2004. What does this mean in terms of his work on box springs and cancer? I have no idea. All I know is that on one side you've got Olle Johansson, Scientific American, and the peer-reviewed journal (Pathophysiology) in which Johansson's hypothesis was published. And on the other side, there's Vetenskap och Folkbildning, a number of commenters on the SciAm post, and a bunch of people in my inbox. Who's right? Who knows. It's a fine opportunity to remain skeptical.

-- Jason Kottke

Comment author: NancyLebovitz 31 July 2010 05:39:01PM 7 points [-]

If breast cancer and melanomas are more likely on the left side of the body at a level that's statistically significant, that's interesting even if the proposed explanation is nonsense.

Comment author: Morendil 31 July 2010 06:06:50PM *  3 points [-]

Even so, ISTM that picking through the linked article for its many flaws in reasoning would have been more interesting even than not-quite-endorsing its conclusions.

What I find interesting is the question, what motivates an influential blogger with a large audience to pass on this particular kind of factoid?

The ICCI blog has an explanation based on relevance theory and "the joy of superstition", but unfortunately (?) it involves Paul the Octopus:

We may get pleasure from having our expectations of relevance aroused. We often indulge in this pleasure for its own sake rather than for the cognitive benefits that only truly relevant information may bring. This, I would argue, is why, for instance, we read light fiction. This is why I could not resist the temptation of writing a post about Paul the octopus even before feeling confident that I had anything of relevance to say about it.

(ETA: note the parallel between the above and "I post these things because they are interesting, not because they're right". And to be lucid, my own expectations of relevance get aroused for the same reasons as most everyone else's; I just happen to be lucky enough to know a blog where I can raise the discussion to the meta level.)

Comment author: cupholder 31 July 2010 04:49:15PM *  7 points [-]

Who's right? Who knows. It's a fine opportunity to remain skeptical.

Bullshit. The 'skeptical' thing to do would be to take 30 seconds to think about the theory's physical plausibility before posting it on one's blog, not regurgitate the theory and cover one's ass with an I'm-so-balanced-look-there's-two-sides-to-the-issue fallacy.

TV-frequency EM radiation is non-ionizing, so how's it going to transfer enough energy to your cells to cause cancer? It could heat you up, or it could induce currents within your body. But however much heating it causes, the temperature increase caused by heat insulation from your mattress and cover is surely much greater, and I reckon you'd get stronger induced currents from your alarm clock/computer/ceiling light/bedside lamp or whatever other circuitry's switched on in your bedroom. (And wouldn't you get a weird arrhythmia kicking off before cancer anyway?)
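Cupholder's non-ionizing point can be checked with a back-of-the-envelope photon-energy calculation (a sketch; the ~500 MHz UHF TV carrier and the ~10 eV ionization threshold are assumed round figures, not from the thread):

```python
# Compare the energy of a single TV-band photon against a rough
# ionization threshold for atoms/molecular bonds.
PLANCK_EV_S = 4.1357e-15     # Planck's constant in eV*s
tv_frequency_hz = 500e6      # assumed UHF TV carrier, ~500 MHz
ionization_threshold_ev = 10.0  # assumed rough ionization energy

photon_energy_ev = PLANCK_EV_S * tv_frequency_hz
shortfall = ionization_threshold_ev / photon_energy_ev

print(f"TV photon energy: {photon_energy_ev:.2e} eV")
print(f"Factor short of ionizing: {shortfall:.1e}")
```

The photon comes out millions of times too weak to ionize anything, which is why the only remaining candidate mechanisms are heating and induced currents.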

(As long as I'm venting, it's at least a little silly for Kottke to say he's posting it because it's 'interesting' and not because it's 'right,' because surely it's only interesting because it might be right? Bleh.)

Comment author: Morendil 31 July 2010 05:31:22PM 8 points [-]

it's at least a little silly for Kottke to say he's posting it because it's 'interesting' and not because it's 'right'

Yup, that's the bit I thought made it appropriate for LW.

It reminded me of my speculations on "asymmetric intellectual warfare" - we are bombarded all day long with things that are "interesting" in one sense or another but should still be dismissed outright, if only because paying attention to all of them would leave us with nothing left over for worthwhile items.

But we can also note regularities in the patterns of which claims of this kind get raised to the level of serious consideration. I'm still perplexed by how seriously mainstream media takes claims of "electrosensitivity", but not totally surprised: there is something that seems "culturally appropriate" to the claims. The rate at which cell phones have spread through our culture has made "radio waves" more available as a potential source of worry, and has tended to legitimize a particular subset of all possible absurd claims.

Comment author: NancyLebovitz 21 July 2010 06:08:11PM 1 point [-]

What's current thought about how you'd tell that AI is becoming more imminent?

I'm inclined to think that AI can't happen before the natural language problem is solved.

Comment author: SilasBarta 27 July 2010 08:53:02PM 2 points [-]

Something weird is going on. Every time I check, virtually all my recent comments are being steadily modded up, but I'm slowly losing karma. So even if someone is on an anti-Silas karma rampage, they're doing it even faster than my comments are being upvoted.

Since this isn't happening on any recent thread that I can find, I'd like to know if there's something to this -- if I made a huge cluster of errors on a thread a while ago. (I also know someone who might have motive, but I don't want to throw around accusations at this point.)

Comment author: Rain 30 July 2010 05:36:25PM *  10 points [-]

I tend to vote down a wide swath of your comments when I come across them in a thread such as this one or this one, attempting to punish you for being mean and wasting people's time. I'm a late reader, so you may not notice those comments being further downvoted; I guess I should post saying what I've done and why.

In the spirit of your desire for explanations, it is for the negative tone of your posts. You create this tone by the small additions you make that cause the text to sound more like verbal speech, specifically: emphasis, filler words, rhetorical questions, and the like. These techniques work significantly better when someone is able to gauge your body language and verbal tone of voice. In text, they turn your comments hostile.

That, and you repeat yourself. A lot.

Comment author: xamdam 02 August 2010 02:37:09PM 1 point [-]

I see this as a feature request - it would be great to have a view of your recent posts/comments that had activity (karma or descendant comments). (rhetorically) If karma is meant as feedback, this would be a great way to get it.

Comment author: NancyLebovitz 28 July 2010 01:23:40AM 4 points [-]

This reminds me of something I mentioned as an improvement for LW a while ago, though for other reasons-- the ability to track all changes in karma for one's posts.

Comment deleted 11 July 2010 12:38:39PM *  [-]
Comment author: nhamann 12 July 2010 08:17:45PM *  6 points [-]

If anyone is interested in seeing comments that are more representative of a mainstream response than what can be found from an Accelerating Future thread, Metafilter recently had a post on the NY Times article.

The comments aren't hilarious and insane, they're more of a casually dismissive nature. In this thread, cryonics is called an "afterlife scam", a pseudoscience, science fiction (technically true at this stage, but there's definitely an implied negative connotation on the "fiction" part, as if you shouldn't invest in cryonics because it's just nerd fantasy), and Pascal's Wager for atheists (The comparison is fallacious, and I thought the original Pascal's Wager was for atheists anyways...). There are a few criticisms that it's selfish, more than a few jokes sprinkled throughout the thread (as if the whole idea is silly), and even your classic death apologist.

All in all, a delightful cornucopia of irrationality.

ETA: I should probably point out that there were a few defenses. The most highly received defense of cryonics appears to be this post. There was also a comment from someone registered with Alcor that was very good, I thought. I attempted a couple of rebuttals, but I don't think they were well-received.

Also, check out this hilarious description of Robin Hanson from a commenter there:

The husband in that article sounded like an annoying nerd. Would I want to be frozen and wake up in a world run by these annoying douchebags? His 'futurecracy' idea seems idiotic (and also unworkable)

I guess that the fatal problem with cryonics is all the freaking nerds interested in it.

Comment author: SilasBarta 11 July 2010 01:32:09AM *  5 points [-]

More on the coming economic crisis for young people, and let me say, wow, just wow: the essay is a much more rigorous exposition of the things I talked about in my rant.

In particular, the author had similar problems to me in getting a mortgage, such as how I get told on one side, "you have a great credit score and qualify for a good rate!" and on another, "but you're not good enough for a loan". And he didn't even make the mistake of not getting a credit card early on!

Plus, he gives a lot of information from his personal experience.

Be warned, though: it's mixed with a lot of blame-the-government themes and certainty about future hyperinflation, and the preservation of real estate's value therein, if that kind of thing turns you off.

Edit: Okay, I've edited this comment about eight times now, but I left this out: from a rationality perspective, this essay shows the worst parts of Goodhart's Law: apparently, the old, functional criteria that would correctly identify some mortgage applicants are going to be mandated as the standard on all future mortgages. Yikes!

Comment author: Armok_GoB 30 July 2010 10:00:40PM *  3 points [-]

(So this is just about the first real post I made here and I kinda have stage fright posting here, so if it's horribly bad and uninteresting please tell me what I did wrong, ok? Also, I've been trying to figure out the spelling and grammar and failed, sorry about that.) (Disclaimer: This post is humorous, and not everything should be taken all too seriously! As someone (Boxo) reviewing it put it: "it's like a contest between 3^^^3 and common sense!")

1) My analysis of http://lesswrong.com/lw/kn/torture_vs_dust_specks/

Let's say 1 second of torture is -1 000 000 utilons. Because there are about 100 000 seconds in a day, and about 20 000 days in 50 years, that makes -2*10^15 utilons.

Now, I'm tempted to say a dust speck has no negative utility at all, but I'm not COMPLETELY certain I'm right. Let's say there's a 1/1 000 000 chance I'm wrong*, in which case the dust speck is -1 utilon. That means the dust speck option is -1 * 10^-6 * 3^^^3, which is approximately -3^^^3.

-3^^^3 < -2*10^15, therefore I choose the torture.

2) The ant speck problem.

The ant speck problem is like the dust speck problem, except instead of being 3^^^3 humans that get specks in their eyes, it's 3^^^3 ordinary ants, and it's a billion humans being tortured for a millennium.

Now, I'm bigoted against ants, and pretty sure I don't value them as much as humans. In fact, I'm 99.9999% certain I don't value ants' suffering at all. The remaining probability space is dominated by the hypothesis that moral value equals 1000^[the number of neurons in the entity's brain] for brains similar to earth-type animals. Humans have about 10^11 neurons, ants about 10^4. That means an ant is worth about 10^(-10^14) as much as a human, if it's worth anything at all.

Now let's multiply this together... -1 utilon * 10^(-10^14) discount * 1/10^6 that ants are worth anything at all * 1/10^6 that dust specks are bad * 3^^^3... That's about -3^^^3!

And for the other side: -2*10^15 for 50 years. Multiply that by 20, and then by the billion... about -10^25.

-3^^^3 < -10^25, therefore I choose the torture!

((*I do not actually think this, the numbers are for the sake of argument and have little to do with my actual beliefs at all.))
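The torture-side arithmetic above can be checked directly; 3^^^3 itself is far too large to evaluate, so only the finite side is computed (a sketch using the post's own round figures):

```python
# Round figures from the post.
SECONDS_PER_DAY = 100_000
DAYS_IN_50_YEARS = 20_000
UTILONS_PER_SECOND = -1_000_000

# Part 1: one person tortured for 50 years.
torture_50y = UTILONS_PER_SECOND * SECONDS_PER_DAY * DAYS_IN_50_YEARS
print(f"50 years of torture: {torture_50y:.0e} utilons")  # about -2e15

# Part 2: a billion humans tortured for a millennium (20 x 50 years).
torture_ants_case = torture_50y * 20 * 1_000_000_000
print(f"Ant-problem torture side: {torture_ants_case:.0e} utilons")
```

Both totals are finite, so any nonzero probability times 3^^^3 swamps them, which is the whole point of the exercise.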

3) Obvious derived problems: There are variations of the ant problem, can you work out and post what if...

  • The ants will only be tortured if all the protons in the earth also decay within one second of the choice; the torture, however, is certain?

  • Instead of ants, you have bacteria, with behaviour complicated enough to be equivalent to 1/100 of a neuron?

  • The source you get the info from is unreliable, so there's only a 1/googol chance the specks could actually happen, while the torture, again, is certain?

  • All of the above?

Comment author: Wei_Dai 30 July 2010 10:50:22PM 0 points [-]

You might be interested in my post Value Uncertainty and the Singleton Scenario where I suggested (based on an idea of Nick Bostrom and Toby Ord) another way of handling uncertainty about your utility function, which perhaps gives more intuitive results in these cases.

Comment author: Armok_GoB 30 July 2010 11:08:54PM 1 point [-]

I consider these results perfectly intuitive, why shouldn't they be? 3^^^3 is a really big number, it makes sense you have to be really careful around it.

Comment author: jimrandomh 31 July 2010 02:27:07PM 1 point [-]

I assign ants exactly zero utility, but the wild surge objection still applies - you can't affect the universe in 3^^^3 ways without some risk of dramatic unintended results.

Comment author: Armok_GoB 31 July 2010 08:44:43PM 2 points [-]

My argument is that you ALMOST certainly don't care about ants at all, but that there is some extremely small uncertainty about what your values are. The disutility of getting a dust speck in your eye also has that argument.

Comment author: Vladimir_Nesov 30 July 2010 11:18:30PM *  3 points [-]

Lets say 1 second of torture is -1 000 000 utilions. Because there are about 100 000 seconds in a day, and about 20 000 days in 50 years, that makes -2*10^15 utilions.

Given some heavy utilitarian assumptions. This isn't an argument, it's more plausible to just postulate disutility of torture without explanation.

Comment author: Armok_GoB 31 July 2010 01:20:05PM 1 point [-]

It's arbitrarily chosen, given the dust speck being -1; I find it easier to compare one second of torture, rather than years, to something that happens in less than a second. It's just an example.

Comment author: Vladimir_Nesov 31 July 2010 01:30:52PM *  2 points [-]

It's just an example.

The importance of an argument doesn't matter for the severity of an error in reasoning present in that argument. The error might be unimportant in itself, but that it was made in an unimportant argument doesn't argue for the unimportance of the error.

Comment author: WrongBot 26 July 2010 09:12:53PM 3 points [-]

Given all the recent discussion of contrived infinite torture scenarios, I'm curious to hear if anyone has reconsidered their opinion of my post on Dangerous Thoughts. I am specifically not interested in discussing the details or plausibility of said scenarios.

Comment author: ata 24 July 2010 04:53:43AM 3 points [-]

Has anyone been doing, or thinking of doing, a documentary (preferably feature-length and targeted at popular audiences) about existential risk? People seem to love things that tell them the world is about to end, whether it's worth believing or not (2012 prophecies, apocalyptic religion, etc., and on the more respectable side: climate change, and... anything else?), so it may be worthwhile to have a well-researched, rational, honest look at the things that are actually most likely to destroy us in the next century, while still being emotionally compelling enough to get people to really comprehend it, care about it, and do what they can about it. (Geniuses viewing it might decide to go into existential risk reduction when they might otherwise have turned to string theory; it could raise awareness so that existential risk reduction is seen more widely as an important and respectable area of research; it could attract donors to organizations like FHI, SIAI, Foresight, and Lifeboat; etc.)

Comment author: Kevin 24 July 2010 05:58:11AM 1 point [-]

Sure, I've been thinking about it, I need $10MM to produce it though.

Comment author: cerebus 17 July 2010 02:11:24PM *  4 points [-]

Nobel Laureate Jean-Marie Lehn is a transhumanist.

We are still apes and are fighting all around the world. We are in the prisons of dogmatism, fundamentalism and religion. Let me say that clearly. We must learn to be rational ... The pace at which science has progressed has been too fast for human behaviour to adapt to it. As I said we are still apes. A part of our brain is still a paleo-brain and many of our reactions come from our fight or flight instinct. As long as this part of the brain can take over control from the rational part of the brain (we will face these problems). Some people will jump up at what I am going to say now but I think at some point of time we will have to change our brains.

Comment author: whpearson 11 July 2010 06:27:58PM *  5 points [-]

How facts backfire

Mankind may be crooked timber, as Kant put it, uniquely susceptible to ignorance and misinformation, but it’s an article of faith that knowledge is the best remedy. If people are furnished with the facts, they will be clearer thinkers and better citizens. If they are ignorant, facts will enlighten them. If they are mistaken, facts will set them straight.

In the end, truth will out. Won’t it?

Maybe not. Recently, a few political scientists have begun to discover a human tendency deeply discouraging to anyone with faith in the power of information.

There are a number of ways you can run with this article. It is interesting seeing it in the major press. It is also a little ironic that it is presenting facts to try and overturn an opinion (that information cannot be good for trying to overturn an opinion).

In terms of existential risk and thinking better in general: obviously facts can sometimes overturn opinions, but it makes me wonder, where is the organisation that uses non-fact-based methods to sway opinion about existential risk? It would make sense if they were separate; the fact-based organisations (SIAI, FHI) need to be honest so that people who are fact-philic to their message will trust them. I tend to ignore the fact-phobic (with respect to existential risk) people. But if it became sufficiently clear that foom-style AI was possible, engineering society would become necessary.

Comment author: SilasBarta 28 July 2010 07:08:02PM *  12 points [-]

Why are Roko's posts deleted? Every comment or post he made since April last year is gone! WTF?

Edit: It looks like this discussion sheds some light on it. As best I can tell, Roko said something that someone didn't want to get out, so someone (maybe Roko?) deleted a huge chunk of his posts just to be safe.

Comment author: JamesAndrix 06 August 2010 07:55:06AM *  -2 points [-]

http://www.damninteresting.com/this-place-is-not-a-place-of-honor

Note to reader: This thread is curiosity inducing, this is affecting your judgement. You might think you can compensate for this bias but you probably won't in actuality. Stop reading anyway. Trust me on this. Edit: Me, and Larks, and ocr-fork, AND ROKO and <snip> [some but not all others]

I say for now because those who know about this are going to keep looking at it and determine it safe/rebut it/make it moot. Maybe it will stay dangerous for a long time, I don't know, but there seems to be a decent chance that you'll find out about it soon enough.

Don't assume it's Ok because you understand the need for friendliness and aren't writing code. There are no secrets to intelligence in hidden comments. (Though I didn't see the original thread, I think I figured it out and it's not giving me any insights.)

Don't feel left out or not smart for not 'getting it'; we only 'got it' because it was told to us. Try to compensate for your ego. If you fail, stop reading anyway.

Ab ernyyl fgbc ybbxvat. Phevbfvgl erfvfgnapr snvy.

http://www.damninteresting.com/this-place-is-not-a-place-of-honor

Comment author: Roko 28 July 2010 08:01:29PM 9 points [-]

I've deleted them myself. I think that my time is better spent looking for a quant job to fund x-risk research than on LW, where it seems I am actually doing active harm by staying rather than merely wasting time. I must say, it has been fun, but I think I am in the region of negative returns, not just diminishing ones.

Comment author: Clippy 28 July 2010 11:32:11PM 25 points [-]

I understand. I've been thinking about quitting LessWrong so that I can devote more time to earning money for paperclips.

Comment author: jsalvatier 19 August 2010 03:12:37PM 0 points [-]

lol

Comment deleted 29 July 2010 12:38:10AM *  [-]
Comment author: Aleksei_Riikonen 30 July 2010 01:18:15PM 4 points [-]

Does not seem very nice to take such an out-of-context partial quote from Eliezer's comment. You could have included the first paragraph, where he commented on the unusual nature of the language he's going to use now (the comment indeed didn't start off as you here implied), and also the later parts where he again commented on why he thought such unusual language was appropriate.

Comment author: cousin_it 29 July 2010 09:06:32AM *  11 points [-]

I'm not them, but I'd very much like your comment to stay here and never be deleted.

Comment author: timtyler 09 September 2010 08:27:16PM 1 point [-]

I'd very much like your comment to stay here and never be deleted.

Your up-votes didn't help, it seems.

Comment author: cousin_it 09 September 2010 08:33:36PM 1 point [-]

Woah.

Thanks for alerting me to this fact, Tim.

Comment deleted 29 July 2010 01:23:11AM [-]
Comment author: Vladimir_Nesov 28 July 2010 10:38:29PM *  46 points [-]

So you've deleted the posts you've made in the past. This is harmful for the blog, disrupts the record and makes the comments by other people on those posts unavailable.

For example, consider these posts, and comments on them, that you deleted:

I believe it's against community blog ethics to delete posts in this manner. I'd like them restored.

Edit: Roko accepted this argument and said he's OK with restoring the posts under an anonymous username (if it's technically possible).

Comment author: Blueberry 29 July 2010 10:36:06AM 23 points [-]

And I'd like the post of Roko's that got banned restored. If I were Roko I would be very angry about having my post deleted because of an infinitesimal far-fetched chance of an AI going wrong. I'm angry about it now and I didn't even write it. That's what was "harmful for the blog, disrupts the record and makes the comments by other people on those posts unavailable." That's what should be against the blog ethics.

I don't blame him for removing all of his contributions after his post was treated like that.

Comment author: [deleted] 29 July 2010 05:37:07AM *  8 points [-]

It's also generally impolite (though completely within the TOS) to delete a person's contributions according to some arbitrary rules. Given that Roko is the seventh highest contributor to the site, I think he deserves some more respect. Since Roko was insulted, there doesn't seem to be a reason for him to act nicely to everyone else. If you really want the posts restored, it would probably be more effective to request an admin to do so.

Comment author: cousin_it 29 July 2010 09:11:13AM *  6 points [-]

It's ironic that, from a timeless point of view, Roko has done well. Future copies of Roko on LessWrong will not receive the same treatment as this copy did, because this copy's actions constitute proof of what happens as a result.

(This comment is part of my ongoing experiment to explain anything at all with timeless/acausal reasoning.)

Comment author: bogus 29 July 2010 09:54:42AM *  3 points [-]

What "treatment" did you have in mind? At best, Roko made an honest mistake, and the deletion of a single post of his was necessary to avoid more severe consequences (such as FAI never being built). Roko's MindWipe was within his rights, but he can't help having this very public action judged by others.

What many people will infer from this is that he cares more about arguing for his position (about CEV and other issues) than honestly providing info, and now that he has "failed" to do that he's just picking up his toys and going home.

Comment author: wedrifid 25 September 2010 07:17:57AM 1 point [-]

This comment is part of my ongoing experiment to explain anything at all with timeless/acausal reasoning.

I just noticed this. A brilliant disclaimer!

Comment author: rhollerith_dot_com 29 July 2010 12:11:27AM *  3 points [-]

Parent is inaccurate: although Roko's comments are not, Roko's posts (i.e., top-level submissions) are still available, as are their comment sections minus Roko's comments (but Roko's name is no longer on them and they are no longer accessible via /user/Roko/ URLs).

Comment author: RobinZ 29 July 2010 03:30:50AM 14 points [-]

Not via user/Roko or via /tag/ or via /new/ or via /top/ or via / - they are only accessible through direct links saved by previous users, and that makes them much harder to stumble upon. This remains a cost.

Comment author: [deleted] 18 August 2010 02:36:24AM 6 points [-]

Could the people who have such links post them here?

Comment author: JoshuaZ 28 July 2010 11:59:54PM *  13 points [-]

I'm deeply confused by this logic. There was one post where due to a potentially weird quirk of some small fraction of the population, reading that post could create harm. I fail to see how the vast majority of other posts are therefore harmful. This is all the more the case because this breaks the flow of a lot of posts and a lot of very interesting arguments and points you've made.

ETA: To be more clear, leaving LW doesn't mean you need to delete the posts.

Comment author: EStokes 30 July 2010 12:41:36PM 0 points [-]

There was one post that could create harm.

FTFY

Comment author: daedalus2u 29 July 2010 12:07:22AM 6 points [-]

I am disappointed. I have just started on LW, and found many of Roko's posts and comments interesting and consilient with my current thinking, and a useful bridge between aspects of LW that are less consilient. :(

Comment author: Eliezer_Yudkowsky 28 July 2010 07:40:45PM *  7 points [-]

I see. A side effect of banning one post, I think; only one post should've been banned, for certain. I'll try to undo it. There was a point when a prototype of LW had just gone up, someone somehow found it and posted using an obscene user name ("masterbater"), and code changes were quickly made to get that out of the system when their post was banned.

Holy Cthulhu, are you people paranoid about your evil administrator. Notice: I am not Professor Quirrell in real life.

EDIT: No, it wasn't a side effect, Roko did it on purpose.

Comment author: thomblake 02 August 2010 02:36:19PM 3 points [-]

I am not Professor Quirrell in real life.

I'm not sure we should believe you.

Comment author: DanielVarga 30 July 2010 04:48:55AM 10 points [-]

A side effect of banning one post, I think;

In a certain sense, it is.

Comment author: JoshuaZ 29 July 2010 12:22:54AM 6 points [-]

Notice: I am not Professor Quirrell in real life.

Of course, we already established that you're Light Yagami.

Comment author: Unnamed 28 July 2010 07:54:15PM 15 points [-]

Notice: I am not Professor Quirrell in real life.

Indeed. You are open about your ambition to take over the world, rather than hiding behind the identity of an academic.

Comment author: whpearson 28 July 2010 07:43:56PM 12 points [-]

Notice: I am not Professor Quirrell in real life.

And that is exactly what Professor Quirrell would say!

Comment author: Eliezer_Yudkowsky 28 July 2010 08:31:10PM 15 points [-]

Professor Quirrell wouldn't give himself away by writing about Professor Quirrell, even after taking into account that this is exactly what he wants you to think.

Comment author: wedrifid 25 September 2010 07:20:56AM 3 points [-]

Professor Quirrell wouldn't give himself away by writing about Professor Quirrell, even after taking into account that this is exactly what he wants you to think.

Of course <level of reasoning plus one> as you know very well. :)

Comment author: RobinZ 28 July 2010 08:40:20PM 8 points [-]
Comment author: Barry_Cotter 09 July 2010 11:19:52AM 9 points [-]

What's the deal with programming, as a career? It seems like the lower levels at least should be readily accessible even to people of thoroughly average intelligence, but I've read a lot that leads me to believe the average professional programmer is borderline incompetent.

E.g., Fizzbuzz. Apparently most people who come into an interview won't be able to do it. Now, I can't code or anything, but computers do only and exactly what you tell them (assuming you're not dealing with a thicket of code so dense it has emergent properties), so here's what I'd tell the computer to do:

# Proceed from 0 to x, in increments of 1 (where x = whatever)
# If divisible by 3, remainder 0, associate fizz with number
# If divisible by 5, remainder 0, associate buzz with number
# Make ordered list from 0 to x, of numbers associated with fizz OR buzz
# For numbers associated with fizz NOT buzz, append fizz
# For numbers associated with buzz NOT fizz, append buzz
# For numbers associated with fizz AND buzz, append fizzbuzz
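For what it's worth, that pseudocode maps almost line-for-line onto a short Python version (a sketch; the function name and the upper bound are arbitrary choices, and it starts from 1 as the usual interview version does):

```python
def fizzbuzz(x):
    """Return the FizzBuzz sequence for 1..x as a list of strings."""
    out = []
    for n in range(1, x + 1):
        fizz = (n % 3 == 0)  # divisible by 3, remainder 0
        buzz = (n % 5 == 0)  # divisible by 5, remainder 0
        if fizz and buzz:
            out.append("fizzbuzz")
        elif fizz:
            out.append("fizz")
        elif buzz:
            out.append("buzz")
        else:
            out.append(str(n))
    return out

print(fizzbuzz(15))
```

The point of the interview question is exactly this: anyone who can turn the plain-English spec into those dozen lines passes, and reportedly many applicants can't.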

I ask out of interest in acquiring money, on elance, rentacoder, odesk, etc. I'm starting from a position of total ignorance, but y'know, it doesn't seem like learning C, and understanding Concrete Mathematics and TAOCP in a useful or even deep way, would be the work of more than a year, while it would place one well above average in some domains of this activity.

Or have I missed something really obvious and important?

Comment author: [deleted] 11 July 2010 03:25:38AM 1 point [-]

Yeah, pretty much anyone who isn't appallingly stupid can become a reasonably good programmer in about a year. Be warned though, the kinds of people who make good programmers are also the kind of people who spontaneously find themselves recompiling their Linux kernel in order to get their patched wifi drivers to work...

Comment author: MartinB 09 July 2010 11:39:54AM *  3 points [-]

I think you overestimate human curiosity, for one. Not everyone implements prime searching or Conway's Game of Life for fun. For two, even those that implement their own fun projects are not necessarily great programmers. It seems there are those that get pointers, and the others. For three, where does a company advertise? There is a lot of mass mailing going on by not-competent folks. I recently read Joel Spolsky's book on how to hire great talent, and he makes the point that the really great programmers just never appear on the market anyway.

http://abstrusegoose.com/strips/ars_longa_vita_brevis.PNG

Comment author: whpearson 10 July 2010 01:14:01PM 10 points [-]

Would people be interested in a description of someone with high-social skills failing in a social situation (getting kicked out of a house)? I can't guarantee an unbiased account, as I was a player. But I think it might be interesting, purely as an example where social situations and what should be done are not as simple as sometimes portrayed.

Comment author: WrongBot 13 July 2010 06:17:38PM *  14 points [-]
Comment author: orthonormal 09 July 2010 02:32:55PM 15 points [-]

It seems to me that "emergence" has a useful meaning once we recognize the Mind Projection Fallacy:

We say that a system X has emergent behavior if we have heuristics for both a low-level description and a high-level description, but we don't know how to connect one to the other. (Like "confusing", it exists in the map but not the territory.)

This matches the usage: the ideal gas laws aren't "emergent" since we know how to derive them (at a physics level of rigor) from lower-level models; however, intelligence is still "emergent" for us since we're too dumb to find the lower-level patterns in the brain which give rise to patterns like thoughts and awareness, which we have high-level heuristics for.

Thoughts? (If someone's said this before, I apologize for not remembering it.)

Comment author: [deleted] 29 July 2010 06:03:09PM 5 points [-]

Reading Michael Vassar's comments on WrongBot's article (http://lesswrong.com/lw/2i6/forager_anthropology/2c7s?c=1&context=1#2c7s) made me feel that the current technique of learning how to write a LW post isn't very efficient (read lots of LW, write a post, wait for lots of comments, try to figure out how their issues could be resolved, write another post, etc. - it uses up lots of the writer's time and lots of the commenters' time).

I was wondering whether there might be a more focused way of doing this. I.e., a short-term workshop: a few writers who have been promoted offer to give feedback to a few writers who are struggling to develop the necessary rigour, etc., by providing a faster feedback cycle, the ability to redraft an article rather than having to start totally afresh, and just general advice.

Some people may not feel that this is very beneficial - there's no need for writing to LW to be made easier (in fact, possibly the opposite) - but first off, I'm not talking about making writing for LW easier, I'm talking about making more of the writing higher quality. And secondly, I certainly learn a lot better given a chance to interact on that extra level. I think learning to write at an LW level is an excellent way of achieving LW's aim of helping people to think at that level.

I'm a long time lurker but I haven't even really commented before because I find it hard to jump to that next level of understanding that enables me to communicate anything of value. I wonder if there are others who feel the same or a similar way.

Good idea? Bad idea?

Comment author: cupholder 30 July 2010 12:15:33AM 1 point [-]

Upvoted for raising the topic, but the approach I'd prefer is jimrandomh's suggestion of having all posts pass through an editorial stage before being posted 'for real.'

Comment author: [deleted] 29 July 2010 07:52:42PM *  3 points [-]

We could use a more structured system, perhaps. At this point, there's nothing to stop you from writing a post before you're ready, except your own modesty. Raise the threshold, and nobody will have to yell at people for writing posts that don't quite work.

Possibilities:

  1. Significantly raise the minimum karma level.

  2. An editorial system: a more "advanced" member has to read your post before it becomes top-level.

  3. A wiki page about instructions for posting. It should include: a description of appropriate subject matter, formatting instructions, common errors in reasoning or etiquette.

  4. A social norm that encourages editing (including totally reworking an essay.) The convention for blog posts on the internet in general mandates against editing -- a post is supposed to be an honest record of one's thoughts at the time. But LessWrong is different, and we're supposed to be updating as we learn from each other. We could make "Please edit this" more explicit.

A related thought on karma -- I have the suspicion that we upvote more than we downvote. It would be possible to adjust the site to keep track of each person's upvote/downvote stats. That is, some people are generous with karma, and some people give more negative feedback. We could calibrate ourselves better if we had a running tally.

Comment author: jimrandomh 29 July 2010 09:03:23PM *  4 points [-]

Kuro5hin had an editorial system, where all posts started out in a special section where they were separate and only visible to logged in users. Commenters would label their comments as either "topical" or "editorial", and all editorial comments would be deleted when the post left editing; and votes cast during editing would determine where the post went (front page, less prominent section, or deleted).

Unfortunately, most of the busy smart people only looked at the posts after editing, while the trolls and people with too much free time managed the edit queue, eventually destroying the quality of the site and driving the good users away. It might be possible to salvage that model somehow, though.

We upvote much more than we downvote - just look at the mean comment and post scores. Also, the number of downvotes a user can make is capped at their karma.

Comment author: cupholder 30 July 2010 12:43:57AM 0 points [-]

Enthusiastically seconded.

The only change I'd make is to hide editorial comments when the post leaves editing (instead of deleting them), with a toggle option for logged-in users to carry on viewing them.

Unfortunately, most of the busy smart people only looked at the posts after editing, while the trolls and people with too much free time managed the edit queue, eventually destroying the quality of the site and driving the good users away. It might be possible to salvage that model somehow, though.

I think it is. There are several tricks we could use to give busy-smart people more of a chance to edit posts.

On Kuro5hin, if I remember right, posts left the editing queue automatically after 24 hours, either getting posted or kicked into the bit bucket. Also, users could vote to push the story out of the queue early. If Less Wrong reimplemented this system, we could raise the threshold for voting a story out of editing early, or remove the option entirely. We could even lengthen the period it spends in the editing stage. (This would also have the advantage of filtering out impatient people who couldn't wait 3 days or whatever for their story to post.)

LW's also just got a much smaller troll ratio than Kuro5hin did, which would help a lot.

Comment author: [deleted] 30 July 2010 08:09:25AM 2 points [-]

It seems like there's at least some interest in doing something to help people develop posting skills through a means other than simply writing lots of articles and bombarding the community with them. The editorial system seems like it has a lot of promising aspects.

The main thing is, it seems more valuable to implement a weak system than to simply talk about implementing a stronger system, so whether the editorial system is the best that can be done depends on whether the people in charge of the community are interested in implementing it.

If they turn out to not be, I still wonder whether there's a few people out there that can volunteer to help make posts better and a few people who can volunteer to not bombard LW but instead to develop their skills in a quieter way (nb: that doesn't refer to anyone in particular except, potentially, myself). Personally, I still think that would be useful, even if suboptimal.

Does the lack of a response from EY imply that he's not interested in that sort of change and, if so, is it EY who would be the one to make the decision?

Comment author: rhollerith_dot_com 30 July 2010 12:50:39PM 2 points [-]

EY has stated in the past that the reason most suggestions do not result in a change in the web site is that no programmer (or no programmer that EY and EY's agents trust) is available to make the change.

Also, I think he reads only a fraction of LW these months.

Comment author: NancyLebovitz 30 July 2010 10:09:15AM 1 point [-]

Meanwhile, it would probably be worthwhile if people would write about any improvement they've made in their ability to think and to convey their ideas, whether it's deliberate or the result of being in useful communities.

I'm not sure that I've made improvements myself-- I think my strategy (which it took a while to make conscious) of writing for my past self who didn't have the current insight has served me pretty well-- that and General Semantics (a basic understanding that the map isn't the territory).

If I were writing for a general audience, I think I'd need to learn about appropriate levels of redundancy.

Comment author: xamdam 30 July 2010 06:21:24PM *  3 points [-]

Significantly raise the minimum karma level.

Another technical solution. Not trivial to implement, but also contains significant side benefits.

  • Find some subset of sequences and other highly ranked posts that are "super-core" and have large consensus not just in karma, but also in agreement by high-karma members (say top ten).
  • Create a multiple choice test and implement it online, for which I am sure external technologies already exist.

Some karma + passing test gets top posting privileges.

I have to confess I abused my newly acquired posting privileges and probably diluted the site's value with a couple of posts. Thank goodness they were rather short :). I took the hint though and took to participating in the comment discussion and reading sequences until I am ready to contribute at a higher level.

Comment author: WrongBot 29 July 2010 07:07:25PM 3 points [-]

Is there any consensus about the "right" way to write a LW post? I see a lot of diversity in style, topic, and level of rigor in highly-voted posts. I certainly have no good way to tell if I'm doing it right; Michael Vassar doesn't think so, but he's never had a post voted as highly as my first one was. (Voting is not solely determined by post quality; this is a big part of the problem.)

I would certainly love to have a better way to get feedback than the current mechanisms; it's indisputable that my writing could be better. Being able to workshop posts would be great, but I think it would be hard to find the right people to do the workshopping; off the top of my head I can really only think of a handful of posters I'd want to have doing that, and I get the impression that they're all too busy. Maybe not, though.

(I think this is a great idea.)

Comment author: Larks 30 July 2010 10:35:34PM 2 points [-]

Michael Vassar doesn't think so, but he's never had a post voted as highly as my first one was.

I didn't think there was anything particularly wrong with your post, but newer posts get a much higher level of karma than old ones, which must be taken into account. Some of the core sequence posts have only 2 karma, for example.

Comment author: WrongBot 31 July 2010 12:22:34AM *  1 point [-]

Agreed, and that is exactly the sort of factor I was alluding to in my parenthetical.

Comment author: [deleted] 29 July 2010 07:22:07PM 1 point [-]

I suppose there's a few options including: See who's willing to run workshops and then once that's known, people can choose whether to join or not. If none of the top contributors could be convinced to run them then they may still be useful for people of a lower level of post writing ability (which I suspect is where I am, at the moment). The other thing is, even regardless of who ran the workshops, the ability to get faster feedback and to redraft gives a chance to develop an article more thoroughly before posting it properly and may give a sense of where improvements can be made and where the gaps in thinking and writing are.

But I guess that questions like that are secondary to the question of whether enough people think it's a good enough idea and whether anyone would be willing to run workshops at all.

Comment author: gwern 29 July 2010 10:19:37AM 2 points [-]

Sparked by my recent interest in PredictionBook.com, I went back to take a look at Wrong Tomorrow, a prediction registry for pundits - but it's down. And it doesn't seem to have been active recently.

I've emailed the address listed on the original OB ANN for WT, but while I'm waiting on that, does anyone know what happened to it?

Comment author: gwern 30 July 2010 07:18:12AM 1 point [-]

I got a reply from Maciej Ceglowski today; apparently WT was taken down to free resources for another site. It's back up, for now.

(I have to say, seriously going through prediction sites is kind of discouraging. The free ones all seem to be marginal and very unpopular, while the commercial ones aren't usable in the long run and are too fragmented.)

Comment author: [deleted] 29 July 2010 05:36:35PM 1 point [-]

In relation to these sorts of sites, what's a normal level of success on this sort of thing for LW readers? If people chose ten things now that they thought were fifty percent likely to occur by the end of next week, would exactly five of them end up happening?

Comment author: gwern 29 July 2010 06:10:05PM 1 point [-]

I don't know of any LWers who have used PB enough to really have a solid level of normal. My own PB stats are badly distorted by all my janitorial work.

I suspect not many LWers have put in the work for calibration; at least, I see very few scores posted at http://lesswrong.com/lw/1f8/test_your_calibration/

So, I couldn't say. It would be nice if we were all calibrated. (But incidentally you can be perfectly calibrated and not have 5/10 of 50% items happen; it could just be a bad week for you.)
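gwern's parenthetical can be made concrete with a quick binomial calculation (a sketch of my own, not from the comment; the function name is mine):

```python
from math import comb

# Probability that a perfectly calibrated forecaster gets exactly k of n
# independent 50%-confidence predictions right: C(n, k) * 0.5**n
def prob_exact(n, k):
    return comb(n, k) * 0.5**n

# Even with perfect calibration, exactly 5 of 10 such predictions
# come true only about a quarter of the time.
print(round(prob_exact(10, 5), 3))  # prints 0.246
```

So a week where 4 or 6 of your ten 50% predictions happen is entirely consistent with perfect calibration; only long-run frequencies are informative.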

Comment author: Matt_Simpson 29 July 2010 06:06:04AM 2 points [-]

UDT/TDT understanding check: Of the 3 open problems Eliezer lists for TDT, the one UDT solves is counterfactual mugging. Is this correct? (A yes or no is all I'm looking for, but if the answer is no, an explanation of any length would be appreciated)

Comment author: Eliezer_Yudkowsky 29 July 2010 06:01:48PM 3 points [-]

Yes.

Comment author: SilasBarta 29 July 2010 06:04:49PM 1 point [-]

So TDT fails on counterfactual mugging, as far as you understand it to work, and the reasoning I gave here is in error?

Comment author: Elias_Kunnas 28 July 2010 08:59:14AM *  1 point [-]

Something I wonder about is how many people on LW might have difficulties with the metaphors used.

An example: In http://lesswrong.com/lw/1e/raising_the_sanity_waterline/, I still haven't quite figured out what a waterline is supposed to mean in that context, or what kind of associations the word has, and neither had someone else I asked about it.

Comment author: Sniffnoy 28 July 2010 09:12:29AM 4 points [-]

I think "waterline" here should be taken in the same context as "A rising tide floats all boats".

Comment author: [deleted] 28 July 2010 01:30:57AM 1 point [-]

Are there any Less Wrongers in the Grand Rapids area that might be interested in meeting up at some point?

Comment author: Psy-Kosh 01 August 2010 05:45:56AM 1 point [-]

Grand Rapids, MI, you mean?

I'm in Michigan, but West Bloomfield, so a couple hours away, but still, if we found some more MI LWers, maybe.

Comment author: ata 27 July 2010 09:30:51PM *  1 point [-]

Is it my imagination, or is "social construct" the sociologist version of "emergent phenomenon"?

Comment author: jimrandomh 27 July 2010 05:46:05AM *  1 point [-]

This is my PGP public key. In the future, anything I write which seems especially important will be signed. This is more for signaling purposes than any fear of impersonation -- signing a post is a way to strongly signal its seriousness.

-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.7 (Cygwin)
mQGiBExOb4IRBAClNdK7kU0hDjEnR9KC+ga8Atu6IJ5pS9rKzPUtV9HWaYiuYldv
VDrMIFiBY1R7LKzbEVD2hc5wHdCUoBKNfNVaGXkPDFFguJ2D1LRgy0omHaxM7AB4
woFmm4drftyWaFhO8ruYZ1qSm7aebPymqGZQv/dV8tSzx8guMh4V0ree3wCgzaVX
wQcQucSLnKI3VbiyZQMAQKcEAI9aJRQoY1WFWaGDsAzCKBHtJIEooc+3+/2STL1R
0QVY/W6rBtJhSxiikBs70oVUt3+gzG2zw8HQMA+eF6ailRXyelUn6EUIm+OVPruh
3TiiNl2fVeF8CbmU08tseonPgcQXTKDXdD+/vqe2STF33Pl5h5fUfNISkho1+VFe
WplpA/wPRAHLKxBnRY42jn32s/XqTtTxii52kp0FELCe4X4Ya1tji9D7TEH4AU0A
wg5piyfrgDYTw+MvhI9KAL+NKLa4bgEe8dETZPl10TJt+zvdHqknIb92NjI8Vsb2
/gfEnT7iaJLC4eUcIExgKBaeq/TVsoelkHfN5h2y06mCKzA5CrQkSmFtZXMgQSBC
YWJjb2NrIDxqaW1AamltcmFuZG9taC5vcmc+iGAEExECACAFAkxOb4ICGwMGCwkI
BwMCBBUCCAMEFgIDAQIeAQIXgAAKCRBjbSzQYXDwqpgoAKCRNLiDtqetV8MXJm21
+GaPpkJa6QCeJ0rUccQbdGpwzmb6HDRd4lf5Uf25Ag0ETE5vghAIAIF8dtzaF81g
CywT5v8pxnyb/0cPLtv23vR98VPbmqZxbojrdltr6AOJM6FodlszBZmlCBX2bLP5
drpp9HZ/2g5O9VeWCqPbkAaZFRhMlSY6Zkq77q592+XSk9Bkb1JUWIMsEeoR6f8d
PVH986mgIFzOOZPn3L3H4v3sCnfaMvuSDZxNHh/s+vBYTTc87wrFNzv0c7WZKtum
1nildgrqK4nksMFg5+rnYdAhooSK27WERTd/WH2QNDXTyGN5HgyPKMz9tOVyEsTC
8JvVDTFjMDzfSFkQv58l6/66HgdBaEZkDbNsgL1Vw6RKDaWFTvt+bZmaLpOGDzOf
YtDZc2KkjxsAAwUH/2QV+AG0x0LR83oAESwpe1YSMDzYs1rP7OBAoTyZ8OAz+TqL
iY5v9MJmzu9XI/TXMs7kC1qCwV7tCCPExJeIQKCj9m6jiCDLPECV5bXC+AZ5t0dU
mIoBDHzJee2DOvO4gjzNM7gOMXi0drxCJGN5JRh5k7kByV9lF7yDZFWkJc7gUc9C
fQSdSnwZelxRDYMVPjnkwmNrOq0LPX27PlfVH+0YjAL1x87WTplfERd20eWk9ifA
1SBofuJlZsl1HFbY0zezgOvv6nJDANN9r/y77dbdV2DQJ2rXnTYGcpo9oA8o1/AF
AbGGgUr/0dJMMKrhpdsJZ77Mub3HRZEfEzUlFZaISQQYEQIACQUCTE5vggIbDAAK
CRBjbSzQYXDwquHMAJ43MgAxTE/2fsV+THKFJ1agjsHamACfVeL7pNlDC+fu3GbB
gZ24idwJE1o=
=PiOo
-----END PGP PUBLIC KEY BLOCK-----

Comment author: bogus 27 July 2010 07:18:35AM 1 point [-]

You may want to copy this key block to a user page on the LW wiki, where it can be easily referenced in the future.

Comment author: khafra 27 July 2010 05:45:22PM 1 point [-]

That would also have the advantage of hopefully requiring different credentials to access, so it would be marginally harder to change the recorded public key while signing a forged post with it.

Comment author: bogus 27 July 2010 06:15:34PM 2 points [-]

Not just harder; it would be all but impossible, since the wiki keeps a history of all changes (unlike LW posts) and jimrandomh is not a wiki sysop.

Comment author: SilasBarta 27 July 2010 04:03:05AM 3 points [-]

Slashdot having an epic case of tribalism blinding their judgment? This poster tries to argue that, despite Intelligent Design proponents being horribly wrong, it is still appropriate for them to use the term "evolutionist" to refer to those they disagree with.

The reaction seems to be basically, "but they're wrong, why should they get to use that term?"

Huh?

Comment author: JoshuaZ 27 July 2010 04:05:34AM 2 points [-]

There's a legitimate reason to not want ID proponents and creationists to use the term "evolutionist" although it isn't getting stated well in that thread. In particular, the term is used to portray evolution as an ideology with ideological adherents. Thus, the use of the term "evolutionism" as well. It seems like the commentators in question have heard some garbled bit about that concern and aren't quite reproducing it accurately.

Comment author: SilasBarta 27 July 2010 04:13:45AM 1 point [-]

Thanks for the reply.

Wouldn't your argument apply just the same to any inflection of a term to have "ism"?

If you and I are arguing about whether wumpuses are red, and you think they are, is it a poor portrayal to refer to you as a "reddist"? Does that imply it's an ideology, etc?

What would you suggest would be a better term for ID proponents to use?

Comment author: JoshuaZ 27 July 2010 04:16:31AM 1 point [-]

I presume someone who took this argument seriously would say either a) that it's ok to use the term if they stop making ridiculous claims about ideology, or b) suggest "mainstream biologists" or "evolution proponents", both of which are wordy but accurate (I don't think that even ID proponents would generally disagree with the point that they aren't the mainstream opinion among biologists).

Comment author: SilasBarta 27 July 2010 04:21:18AM 1 point [-]

Do you expect that, in general, people should never use the form "X-ist", but rather, use "X proponent"? Should evolution proponents use "Intelligent Design advocate" and "creation advocate"?

Comment author: JoshuaZ 27 July 2010 04:34:34AM *  2 points [-]

If a belief doesn't fit an ideological or religious framework, I think that X-ist and ism are often bad. I actually use the phrases "ID proponent" fairly often partially for this reason. I'm not sure however that this case is completely symmetric given that ID proponents self-identify as part of the "intelligent design movement" (a term used for example repeatedly by William Dembski and occasionally by Michael Behe.)

Comment author: ata 27 July 2010 04:21:51AM *  3 points [-]

Slashdot having an epic case of tribalism blinding their judgment?

I haven't regularly read Slashdot in several years, but I seem to recall that it was like that pretty much all the time.

Comment author: bogus 26 July 2010 03:37:58PM 6 points [-]

Daniel Dennett and Linda LaScola on Preachers who are not believers:

There are systemic features of contemporary Christianity that create an almost invisible class of non-believing clergy, ensnared in their ministries by a web of obligations, constraints, comforts, and community. ... The authors anticipate that the discussion generated on the Web (at On Faith, the Newsweek/Washington Post website on religion, link) and on other websites will facilitate a larger study that will enable the insights of this pilot study to be clarified, modified, and expanded.

Comment author: Unknowns 26 July 2010 10:40:51AM 3 points [-]

A second post has been banned. Strange: it was on a totally different topic from Roko's.

Comment author: Eliezer_Yudkowsky 26 July 2010 12:02:50PM 2 points [-]

Still the sort of thing that will send people close to the OCD side of the personality spectrum into a spiral of nightmares, which, please note, has apparently already happened in at least two cases. I'm surprised by this, but accept reality. It's possible we may have more than the usual number of OCD-side-of-the-spectrum people among us.

Comment author: xamdam 26 July 2010 01:55:27PM 3 points [-]

Was the discussion in question epistemologically interesting (vs. intellectual masturbation)? If so, how many OCD personalities joining the site would call for closing the thread? I am curious about the decision criteria. Thanks.

As an aside, I've had some SL-related psychological effects, particularly related to the material notion of self: a bit of trouble going to sleep after realizing that logically there is little distinction from a death-state. This lasted a short while, but then you just learn to "stop worrying and love the bomb". Besides "time heals all wounds", certain ideas helped, too. (I actually think this is an important SL, though it does not sit well within the SciFi hierarchy.)

This worked for me, but I am generally very low on the OCD scale, and I am still mentally not quite ready for some of the discussions going on here.

Comment author: Apprentice 26 July 2010 02:35:32PM 4 points [-]

If so, how many OCD personalities joining the site would call for closing the thread? I am curious about decision criteria. Thanks.

It is impossible to have rules without Mr. Potter exploiting them.

Comment author: NancyLebovitz 26 July 2010 12:38:37PM 1 point [-]

Is it OCD or depression? Depression can include (is defined by?) obsessively thinking about things that make one feel worse.

Comment author: JoshuaZ 26 July 2010 01:13:14PM 1 point [-]

Depressive thinking generally focuses on short term issues or general failure. I'm not sure this reflects that. Frankly, it seems to come across superficially at least more like paranoia, especially of the form that one historically saw (and still sees) in some Christians worrying about hell and whether or not they are saved. The reaction to these threads is making me substantially update my estimates both for LW as a rational community and for our ability to discuss issues in a productive fashion.

Comment author: jimrandomh 26 July 2010 12:27:29PM 1 point [-]

Yep. But not unexpectedly this time; homung posted in the open thread that he was looking for 20 karma so he could post on the subject, and I sent him a private message saying he shouldn't, which he either didn't see or ignored.

Comment author: cousin_it 26 July 2010 11:25:11AM *  3 points [-]

(comment edited)

I wonder why PlaidX's post isn't getting deleted - the discussion there is way closer to the forbidden topic.

Comment author: xamdam 26 July 2010 09:50:33AM 1 point [-]
Comment author: cousin_it 26 July 2010 01:05:35PM *  1 point [-]

Yep - I'm having some fun there right now, my nick is want_to_want. Anyone knowledgeable in psych research, join in!

Comment author: simplicio 26 July 2010 05:01:49AM 1 point [-]

So I was pondering doing a post on the etiology of sexual orientation (as a lead-in to how political/moral beliefs lead to factual ones, not vice versa).

I came across this article, which I found myself nodding along with, until I noticed the source...

Oops! Although they stress the voluntary nature of their interventions, NARTH is an organization devoted to zapping the fabulous out of gay people, using such brilliant methodology as slapping a rubber band against one's wrist every time one sees an attractive person with the wrong set of chromosomes. From the creators of the rhythm method.

Look at that article, though. And look at the site's mission statement, etc. while you're at it. The reason I posted this is because I was disturbed by how well the dark side has done here, rhetorically. And also by how they have used true facts (homosexuality is definitely not even close to 100% innate) to argue for something which is (1) morally questionable at best, given the possibility of coercion and the fact that you're fixing something that's not broken, (2) not even efficacious (no, I am not thrilled with that source).

Comment author: WrongBot 26 July 2010 06:03:09PM 2 points [-]

For what it's worth, rubber band snapping is a pretty popular thought-stopping technique in CBT for dealing with obsessive-type behaviors, though I believe there's some debate over how effective it is. I know it's been used to address morbid jealousy, though I don't know to what extent or if more scientific studies have been conducted.

Comment author: NancyLebovitz 25 July 2010 04:16:48PM *  5 points [-]

Rationality applied to swimming

The author was a lousy swimmer for a long time, but got respect because he put in so much effort. Eventually he became a swim coach, and he quickly noticed that the bad swimmers looked the way he did, and the good swimmers looked very different, so he started teaching the bad swimmers to look like the good swimmers, and began becoming a better swimmer himself.

Later, he got into the physics of good swimming. For example, it's more important to minimize drag than to put out more effort.

I'm posting this partly because it's always a pleasure to see rationality, partly because the most recent chapter of Methods of Rationality reminded me of it, and mostly because it's a fine example of clue acquisition.

Comment author: DanielVarga 24 July 2010 07:51:29PM *  1 point [-]

Do you like the LW wiki page (actually, pages) on Free Will? I just wrote a post to Scott Aaronson's blog, and the post assumed an understanding of the compatibilist notion of free will. I hoped to link to the LW wiki, but when I looked at it, I decided not to, because the page is unsuitable as a quick introduction.

EDIT: Come over, it is an interesting discussion of highly LW-relevant topics. I even managed to drop the "don't confuse the map with the territory"-bomb. As a bonus, you can watch the original topic of Scott's post: His diavlog with Anthony Aguirre

Comment author: NancyLebovitz 23 July 2010 09:15:55PM *  4 points [-]

Thought without Language - a discussion of adults who've grown up profoundly deaf without having been exposed to sign language or lip-reading.

Edited because I labeled the link as "Language without Thought"-- this counts as an example of itself.

Comment author: RobinZ 24 July 2010 12:25:48AM 1 point [-]

That is amazingly interesting.

Comment author: Peter_Lambert-Cole 23 July 2010 06:06:33PM 1 point [-]

There is something that bothers me, and I would like to know if it bothers anyone else. I call it "Argument by Silliness".

Consider this quote from the Allais Malaise post: "If satisfying your intuitions is more important to you than money, do whatever the heck you want. Drop the money over Niagara Falls. Blow it all on expensive champagne. Set fire to your hair. Whatever."

I find this to be a common end point when demonstrating what it means to be rational. Someone will advance a good argument that correctly computes/deduces how you should act, given a certain goal. In the post quoted above, that would be maximizing your money. And in order to get their point across, they cite all the obviously silly things you could otherwise do. To a certain extent, it can be more blackmail than argument, because your audience does not want to seem a fool and so he dutifully agrees that yes, it would be silly to throw your money off of Niagara Falls and he is certainly a reasonable man who would never do that so of course he agrees with you.

Now, none of the intelligent readers on LW need to be blackmailed this way because we all understand what rationality demands of us and we respond to solid arguments not rhetoric. And Eliezer is just using that bit of trickery to get a basic point across to the uninitiated.

But the argument does little to help those who already grasp the concept improve their understanding. Absurdity does not mean you have correctly implemented a "reductio ad absurdum" technique. You have to be careful, because he appealed to something that is self-evidently absurd, and you should be wary of anything considered self-evident. Actually, I think it is more a case of being commonly accepted as absurd, but you should be just as wary of anything commonly accepted as silly. And you should be careful about cases where you think it is the former but it's actually the latter.

The biggest problem, however, is that silly is a class in which we put things that can be disregarded. Silly is not a truth statement. It is a value statement. It says things are unimportant, not that they are untrue. It says that according to a given standard, this thing is ranked very low, so low in fact that it is essentially worthless.

Now, disregarding things is important for thinking. It is often impossible to think through the whole problem, so we at first concern ourselves with just a part and put the troublesome cases aside for later. In the Allais Malaise post, Eliezer was concerned just with the minor problem of "How do we maximize money under these particular constraints?" and separating out intuitions was part of having a well-defined, solvable problem to discuss.

But the silliness he cites only proves that the two standards - maximizing money and satisfying your intuitions - conflict in a particular case. It tells you little about any other case or the standards themselves.

The point I most want to make is "Embrace what you find silly," but since this comment has gone on very long, I am going to break this up into several postings.

Comment author: NancyLebovitz 23 July 2010 07:22:24PM 1 point [-]

Yeah-- argument by silliness (I think I'd describe it as finding something about the argument which can be made to sound silly) is one of the things I don't like about normal people.

Comment author: Peter_Lambert-Cole 23 July 2010 07:50:50PM 1 point [-]

That's why it can be such an effective tactic when persuading normal people. You can get them to commit to your side, and then they rationalize themselves into believing it's the truth (which it is) because they don't want to admit they were conned.

Comment author: Eneasz 23 July 2010 03:40:20PM 1 point [-]

Luke Muehlhauser just posted about Friendly AI and Desirism at his blog. It tends to have a more general audience than LW; comments posted there could help spread the word. Desirism and the Singularity