As far as I can tell, some of the recent conversations with the most uncivil remarks have been about whether AI risk is a serious problem and, if so, what should be done about it. The thread on Luke's discussion with Pei Wang seems to be the most recent example. This also appears to be more common in threads that discuss mainstream attitudes about AI risk and where they disagree with common LW opinion. Given that, I'm worried that AI risk estimates may be becoming a tribalized belief category. Should we worry that AI risk is becoming, or has become, a mindkiller?
Fun with Umeshisms:
Reading this argument...
...For most of human history, physicians were incapable of effectively treating serious diseases. Indeed, their efforts frequently resulted in their unfortunate patients suffering and dying at far higher rates than they would have otherwise endured. Physicians only gained the ability to have any worthwhile impact on the course of major illnesses in the 1940s, largely due to technological improvements secondary to WWI and WWII, which included the development of new drugs (sulfonamides, antibiotics, the first anti-cancer drugs, the first effective anti-hypertensive drugs, better vaccines, etc.).
Note that physicians have had almost zero input in developing all of the drugs and technology which now allow them to be somewhat effective in practising medicine.
Since a significant number of people who get into medical school have always been money- and power-hungry, but lesser and timid, con men, they took full advantage of the situation to market themselves as mini-gods who required tons of money to exert their magic on their patients. Make no mistake: few people who enter that profession care about anything beyond enriching themselves and bossing around sick or dying people.
When
Q: How many LessWrongians does it take to change a lightbulb?
A1: In some Everett branches the lightbulb is still undamaged. If you kill yourself in all the remaining branches, the problem with the lightbulb is solved. (While you are at it, why not also buy a lottery ticket, so you don't have to worry about broken lightbulbs anymore?)
A2: Changing a lightbulb would bring us closer to Singularity, and until we solve the problem of Friendly AI, this would be a dangerous thing to do.
A3: One LessWrongian writes an article about why it is rational to change the lightbulb, fifty LessWrongians upvote the article, and forty LessWrongians downvote it. The discussion soon has over 200 comments, most of them about when it is correct to upvote or downvote something and what we could do to avoid karma assassinations. Then a new chapter of HP:MoR is published, and the whole lightbulb topic is quickly forgotten.
A4: Eliezer already wrote an article about lightbulbs in 2007. What, you mean to really change a lightbulb? Please stop saying that; it sounds a bit cultish to me.
Also see here.
A question for rationalist parents (and anyone else who has ideas): are there good child-accessible rational arguments for why one should do the right thing?
Me: Please do X.
Child: No.
Me: You know it's the right thing to do.
Child: Yes.
Me: Well?
Child: I don't want to.
Me: ???
Old discussion that I'd like to see revived, if for no other reason than that I think the subject matter is fantastic: Taking Occam Seriously.
I wouldn't have seen it if I hadn't tried to go through all of LW's archive, so I hope someone sees it for the first time by virtue of reading the open threads.
[Meta] I hope it's okay that I posted the new open thread. Don't know what the procedure is, if any. I wanted to post something, but saw the last open thread was out of date. Please moderate/correct as appropriate.[/Meta]
you broke the code
Edit: Not really, anyone can make the open threads. But I've been doing it for a little while and I think it's a little strange that someone else did it when I'm only two hours late. C'est la vie.
How do you pronounce Eliezer's name? I've heard it pronounced a number of ways. Originally, I thought it was pronounced El-eye-zer. Then I watched a video where I think it was pronounced El-ee-ay-zer. And today I watched another where Robin Hanson pronounced it El-ee-eye-zer. So which is it? I doubt he really cares that much, but I'd like to know I'm not pronouncing it wrong when I tell people about him.
Are most people here transhumanists? If you are, do you have some specific transhumanist wishes? What about transhumanist possibilities that you want to avoid?
(this is not too political, I hope: just general talk about social attitudes)
I think I don't understand much of social-conservative sentiment - not the policy suggestions, but the general thrust of it.
For example, people who exhibit it often use the term "permissive" as something of a pejorative for several of today's societies. I don't get it: "permissive" towards what - stuff like drug use? But they don't typically use any qualifiers; they just seemingly say that not erring on the side of banning any slightly controversial thing is automa...
This was recently posted in the Server Sky thread: The Political Economy of Very Large Space Projects. The title kind of says it all. Basically, whenever anyone tries to put forward a Very Large Space Project, they tend to gloss over the political costs and realities, hence these projects don't actually get done. This seems like a pretty clear-cut case of Far Mode bias to me. Rationalists trained to recognize and account for this may have a better chance of getting things done.
I was recently reading through LW discussions about OKCupid. Those discussions (as well as some other factors) prompted me to make a profile. If anyone cares to critique, please do so. I have my own opinions on what I've done well and what I need to improve on, but I'll keep them to myself for the time being. I don't want to anchor your reactions.
Making a few minor edits, but I consider this first draft just about done. If you'd like me to review your profile, or if by serendipity you are interested in me and live close by, then do let me know.
I'd like to congratulate LW on the fact that five of the seven most recent posts are at negative karma. Good work, people! Keep up the selection pressure!
Can someone concisely explain why this is true:
Concerning the application of rationality in one's own life: In the mini-camp thread, Brandon Reinhart gives a very detailed summary of how he improved using the methods taught there (here). I'm sure the material that is taught in the camps can be found somewhere on the site.
However, the masses of material are hard to comb through, and my google-fu wasn't sufficient to identify the relevant ones. Can anyone point me to sequences that teach that kind of stuff?
Especially in light of the recent thread that seemed to conclude that Alcor is superior to CI, I've been thinking about the discrepancy between Alcor membership fees and the cost of life insurance. Membership fees are a fixed rate independent of age/probability of death, while life insurance premiums vary with it. This means that the (cost : likelihood of death) ratio is far higher for younger prospective cryonauts, which triggers my sense of economic unfairness/inefficiency (a toy calculation below makes the asymmetry concrete).
For instance, with data from Alcor, assuming neurosuspension and extra as I live in the UK:
Mem...
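A toy sketch in Python makes the asymmetry concrete. Every number here is hypothetical (the flat fee, the payout, and the death probabilities are made up for illustration; real Alcor dues and insurance quotes will differ):

```python
# Toy illustration only: all figures below are hypothetical, not Alcor's
# actual dues or real insurance quotes.

membership_fee = 500.0  # flat annual dues, the same at every age

# Assume actuarially fair term insurance funding a fixed payout, so the
# premium scales directly with annual death probability.
payout = 100_000.0
death_probabilities = {
    "age 25": 0.001,  # hypothetical annual probability of death
    "age 65": 0.020,
}

for label, p_death in death_probabilities.items():
    premium = payout * p_death        # fair premium for this risk level
    total = membership_fee + premium  # what the member actually pays per year
    ratio = total / p_death           # cost per unit of annual death risk
    print(f"{label}: pays ${total:,.0f}/yr, ${ratio:,.0f} per unit of risk")

# The 25-year-old pays ~$600,000 per unit of risk versus ~$125,000 for the
# 65-year-old, because the flat fee dwarfs the young member's tiny mortality.
```

The insurance component costs the same per unit of risk for both ages; it is the flat membership fee that produces the skew.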
Has anyone read:
Woolley, A. W., Chabris, C. F., Pentland, A., Hashmi, N. and Malone, T. W. (2010), “Evidence for a Collective Intelligence Factor in the Performance of Human Groups,” Science, 330, 686–688.
It seems to be relevant to various LW tropes, but to actually read it myself I'd have to talk to somebody (the librarian who runs my university's journal repository), and paying that kind of price would be massively depressing if the above paper turned out to be as crappy as the paper I just read that cited it.
It's already massively depressing that even b...
Is there any empirical evidence that humans have bounded utility functions? Any evidence against?
TeXmacs workshop videos (the WYSIWYG software for creating documents and interfacing with math software -- Word / LaTeX replacement).
I remember seeing discussion about sample bias in studies of depression, specifically about the self-selection effects for people who respond to advertisements. Does anyone know which thread this was in?
Floating the idea of a New Jersey LW meetup. (Particularly for people in Somerset and the surrounding counties.) Is there any interest?
Meta-LW question: what does the comment sorting system actually do? I assumed it was Reddit's "best" system, but then noticed some highly upvoted, seemingly uncriticized comments below worse-seeming ones. Am I just crazy?
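For reference, Reddit's "best" sort ranks comments by the lower bound of the Wilson score confidence interval on the upvote proportion, not by raw score, so a highly upvoted comment can legitimately sit below a less-voted one with a cleaner up/down ratio. Whether LW's fork of the code does the same is exactly what I'm unsure of; here is a minimal sketch of that scoring:

```python
import math

def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval on the true upvote fraction.

    Rewards a high upvote ratio but penalizes small sample sizes, so a
    lightly voted comment can rank below one with more total votes.
    """
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    phat = upvotes / n
    return ((phat + z * z / (2 * n)
             - z * math.sqrt((phat * (1 - phat) + z * z / (4 * n)) / n))
            / (1 + z * z / n))

# A perfect but tiny record ranks below a heavily voted mixed one:
print(wilson_lower_bound(5, 0))     # ~0.57
print(wilson_lower_bound(100, 10))  # ~0.84
```

If that is what's running here, the orderings you're seeing may reflect vote ratios rather than raw totals.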
I've been analyzing the reasons for my dislike of the MWI lately. I initially thought that it was because it was untestable and so no better than any other interpretation. But this wasn't a good enough explanation for an emotional response ("dislike" is an emotional response). So, after some digging, I have realized that what I dislike is not the anti-Popperianism of it, but the process of futile arguing itself, where convincing the other side, or getting convinced, or building a better model based on the two original positions is not an option. And Copenhagen vs MWI is one of those debates. Now, if only I could figure out why I am still commenting about it...
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.