This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
It seems to me that "emergence" has a useful meaning once we recognize the Mind Projection Fallacy:
We say that a system X has emergent behavior if we have heuristics for both a low-level description and a high-level description, but we don't know how to connect one to the other. (Like "confusing", it exists in the map but not the territory.)
This matches the usage: the ideal gas laws aren't "emergent" since we know how to derive them (at a physics level of rigor) from lower-level models; however, intelligence is still "emergent" for us since we're too dumb to find the lower-level patterns in the brain which give rise to patterns like thoughts and awareness, which we have high-level heuristics for.
Thoughts? (If someone's said this before, I apologize for not remembering it.)
Why are Roko's posts deleted? Every comment or post he made since April last year is gone! WTF?
Edit: It looks like this discussion sheds some light on it. As best I can tell, Roko said something that someone didn't want to get out, so someone (maybe Roko?) deleted a huge chunk of his posts just to be safe.
I've deleted them myself. I think that my time is better spent looking for a quant job to fund x-risk research than on LW, where it seems I am actually doing active harm by staying rather than merely wasting time. I must say, it has been fun, but I think I am in the region of negative returns, not just diminishing ones.
So you've deleted the posts you've made in the past. This is harmful for the blog, disrupts the record and makes the comments by other people on those posts unavailable.
For example, consider these posts, and comments on them, that you deleted:
I believe it's against community blog ethics to delete posts in this manner. I'd like them restored.
Edit: Roko accepted this argument and said he's OK with restoring the posts under an anonymous username (if it's technically possible).
And I'd like the post of Roko's that got banned restored. If I were Roko I would be very angry about having my post deleted because of an infinitesimal far-fetched chance of an AI going wrong. I'm angry about it now and I didn't even write it. That's what was "harmful for the blog, disrupts the record and makes the comments by other people on those posts unavailable." That's what should be against the blog ethics.
I don't blame him for removing all of his contributions after his post was treated like that.
Not via user/Roko or via /tag/ or via /new/ or via /top/ or via / - they are only accessible through direct links saved by previous users, and that makes them much harder to stumble upon. This remains a cost.
I understand. I've been thinking about quitting LessWrong so that I can devote more time to earning money for paperclips.
I'm deeply confused by this logic. There was one post where due to a potentially weird quirk of some small fraction of the population, reading that post could create harm. I fail to see how the vast majority of other posts are therefore harmful. This is all the more the case because this breaks the flow of a lot of posts and a lot of very interesting arguments and points you've made.
ETA: To be more clear, leaving LW doesn't mean you need to delete the posts.
Allow me to provide a little context by quoting from a comment, now deleted, Eliezer made this weekend in reply to Roko and clearly addressed to Roko:
I don't usually talk like this, but I'm going to make an exception for this case.
Listen to me very closely, you idiot.
[paragraph entirely in bolded caps.]
[four paragraphs of technical explanation.]
I am disheartened that people can be . . . not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.
This post was STUPID.
Although it does not IMHO make it praiseworthy, the above quote probably makes Roko's decision to mass delete his comments more understandable on an emotional level.
In defense of Eliezer, the occasion of Eliezer's comment was one in which IMHO strong emotion and strong language might reasonably be seen as appropriate.
If either Roko or Eliezer wants me to delete (part or all of) this comment, I will.
EDIT: added the "I don't usually talk like this" paragraph to my quote in response to criticism by Aleksei.
I see. A side effect of banning one post, I think; only one post should've been banned, for certain. I'll try to undo it. There was a point when a prototype of LW had just gone up, someone somehow found it and posted using an obscene user name ("masterbater"), and code changes were quickly made to get that out of the system when their post was banned.
Holy Cthulhu, are you people paranoid about your evil administrator. Notice: I am not Professor Quirrell in real life.
EDIT: No, it wasn't a side effect, Roko did it on purpose.
Notice: I am not Professor Quirrell in real life.
Indeed. You are open about your ambition to take over the world, rather than hiding behind the identity of an academic.
Notice: I am not Professor Quirrell in real life.
And that is exactly what Professor Quirrell would say!
Professor Quirrell wouldn't give himself away by writing about Professor Quirrell, even after taking into account that this is exactly what he wants you to think.
Cryo-wives: A promising comment from the NYT Article:
...As the spouse of someone who is planning on undergoing cryogenic preservation, I found this article to be relevant to my interests!
My first reactions when the topic of cryonics came up (early in our relationship) were shock, a bit of revulsion, and a lot of confusion. Like Peggy (I believe), I also felt a bit of disdain. The idea seemed icky, childish, outlandish, and self-aggrandizing. But I was deeply in love, and very interested in finding common ground with my then-boyfriend (now spouse). We talked, and talked, and argued, and talked some more, and then I went off and thought very hard about the whole thing.
Part of the strength of my negative response, I realized, had to do with the fact that my relationship with my own mortality was on shaky ground. I don't want to die. But I'm fairly certain I'm going to. Like many people, I've struggled to come to a place where I can accept the specter of my own death with some grace. Humbleness and acceptance in the face of death are valued very highly (albeit not always explicitly) in our culture. The companion, I think, to this humble acceptance of death is a humble (and painful) acce
That is really a beautiful comment.
It's a good point, and one I never would have thought of on my own: people find it painful to think they might have a chance to survive after they've struggled to give up hope.
One way to fight this is to reframe cryonics as similar to CPR: you'll still die eventually, but this is just a way of living a little longer. But people seem to find it emotionally different, perhaps because of the time delay, or the uncertainty.
Heard on #lesswrong:
If, for example, Wei Dai kicked Eliezer's ass at FAI theory, LW would not appear cultish
This suggests that we should try to make someone else a social authority so that he doesn't have to be.
(I hope posting only a log is ok)
I thought Less Wrong might be interested to see a documentary I made about cognitive bias. It was made as part of a college project and a lot of the resources that the film uses are pulled directly from Overcoming Bias and Less Wrong. The subject of what role film can play in communicating the ideas of Less Wrong is one that I have heard brought up, but not discussed at length. Despite the film's student-quality shortcomings, hopefully this documentary can start a more thorough dialogue that I would love to be a part of.
The link to the video is Here: http://www.youtube.com/watch?v=FOYEJF7nmpE
Geoff Greer published a post on how he got convinced to sign up for cryonics: Insert Frozen Food Joke Here.
Are any LWers familiar with adversarial publishing? The basic idea is that two researchers who disagree on some empirically testable proposition come together with an arbiter to design an experiment to resolve their disagreement.
Here's a summary of the process from an article (pdf) I recently read (where Daniel Kahneman was one of the adversaries).
Since I assume he doesn't want to have existential risk increase, a credible threat is all that's necessary.
Perhaps you weren't aware, but Eliezer has stated that it's rational to not respond to threats of blackmail. See this comment.
(EDIT: I deleted the rest of this comment since it's redundant given what you've written elsewhere in this thread.)
This is true, and yes wfg did imply the threat.
(Now, analyzing not advocating and after upvoting the parent...)
I'll note that wfg was speculating about going ahead and doing it. After he did it (and given that EY doesn't respond to threats, speculative:wfg should act now based on the Roko incident) it isn't a threat. It is then just a historical sequence of events. It wouldn't even be a particularly unique sequence of events.
Wfg is far from the only person who responded by punishing SIAI in a way EY would expect to increase existential risk, i.e. not donating to SIAI when they otherwise would have, or updating their p(EY(SIAI) is a(re) crackpot(s)) and sharing that knowledge. The description on RationalWiki would be an example.
Suppose I were to threaten to increase existential risk by 0.0001% unless SIAI agrees to program its FAI to give me twice the post-Singularity resource allocation (or whatever the unit of caring will be) that I would otherwise receive. Can you see why it might have a policy against responding to threats? If Eliezer does not agree with you that censorship increases existential risk, he might censor some future post just to prove the credibility of his precommitment.
If you really think censorship is bad even by Eliezer's values, I suggest withdrawing your threat and just try to convince him of that using rational arguments. I rather doubt that Eliezer has some sort of unfixable bug regarding censorship that has to be patched using such extreme measures. It's probably just that he got used to exercising strong moderation powers on SL4 (which never blew up like this, at least to my knowledge), and I'd guess that he has already updated on the new evidence and will be much more careful next time.
I'm not sure that blackmail is a good name to use when thinking about my commitment, as it has negative connotations and usually implies a non-public, selfish nature.
More importantly, you aren't threatening to publicize something embarrassing to Eliezer if he doesn't comply, so it's technically extortion.
There's a course "Street Fighting Mathematics" on MIT OCW, with an associated free Creative Commons textbook (PDF). It's about estimation tricks and heuristics that can be used when working with math problems. Despite the pop-sounding title, it appears to be written for people who are actually expected to be doing nontrivial math.
Might be relevant to the simple math of everything stuff.
From a recent newspaper story:
The odds that Joan Ginther would hit four Texas Lottery jackpots for a combined $21 million are astronomical. Mathematicians say the chances are as slim as 1 in 18 septillion — that's 18 and 24 zeros.
I haven't checked this calculation at all, but I'm confident that it's wrong, for the simple reason that it is far more likely that some "mathematician" gave them the wrong numbers than that any compactly describable event with odds of 1 in 18 septillion against it has actually been reported on, in writing, in the history of intelligent life on my Everett branch of Earth. Discuss?
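As a rough illustration of why the headline figure can't describe the reported event, here is a back-of-the-envelope sketch (every number in it is a made-up round figure, not taken from the article): the quoted odds treat the event as "this particular person wins these four particular jackpots", while the reported event is closer to "some heavy lottery player, somewhere, ever wins four jackpots".

```python
from math import exp, factorial

# Fermi sketch -- every input is an illustrative assumption, not a figure from the article.
p_jackpot_per_ticket = 1e-7     # assumed single-ticket jackpot odds
tickets_per_player = 5_000      # assumed lifetime tickets for a heavy player
heavy_players = 50_000_000      # assumed number of such players worldwide

lam = p_jackpot_per_ticket * tickets_per_player   # expected jackpots per heavy player (Poisson mean)

# P(at least 4 wins) is dominated by P(exactly 4 wins) when lam is this small.
p_four_plus = exp(-lam) * lam**4 / factorial(4)

print(f"P(a given heavy player ever wins 4+ jackpots): {p_four_plus:.1e}")
print(f"Expected number of such people:                {p_four_plus * heavy_players:.1e}")
print(f"Quoted odds, for comparison:                   {1 / 18e24:.1e}")
```

Even with these pessimistic made-up inputs, the chance that somebody somewhere shows a four-jackpot streak comes out many orders of magnitude larger than 1 in 18 septillion, which is the commenter's point: the quoted number answers a different question from the one the story raises.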
Would people be interested in a description of someone with high-social skills failing in a social situation (getting kicked out of a house)? I can't guarantee an unbiased account, as I was a player. But I think it might be interesting, purely as an example where social situations and what should be done are not as simple as sometimes portrayed.
Would people be interested in a description of someone with high-social skills failing in a social situation (getting kicked out of a house)?
I'm not sure it's that relevant to rationality, but I think most humans (myself included!) are interested in hearing juicy gossip, especially if it features a popular trope such as "high status (but mildly disliked by the audience) person meets downfall".
How about this division of labor: you tell us the story and we come up with some explanation for how it relates to rationality, probably involving evolutionary psychology.
Heh, that makes Roko's scenario similar to the Missionary Paradox: if only those who know about God but don't believe go to hell, it's harmful to spread the idea of God. (As I understand it, this doesn't come up because most missionaries think you'll go to hell even if you don't know about the idea of God.)
But I don't think any God is supposed to follow a human CEV; most religions seem to think it's the other way around.
Daniel Dennett and Linda LaScola on Preachers who are not believers:
There are systemic features of contemporary Christianity that create an almost invisible class of non-believing clergy, ensnared in their ministries by a web of obligations, constraints, comforts, and community. ... The authors anticipate that the discussion generated on the Web (at On Faith, the Newsweek/Washington Post website on religion, link) and on other websites will facilitate a larger study that will enable the insights of this pilot study to be clarified, modified, and expanded.
Paul Graham on guarding your creative productivity:
I'd noticed startups got way less done when they started raising money, but it was not till we ourselves raised money that I understood why. The problem is not the actual time it takes to meet with investors. The problem is that once you start raising money, raising money becomes the top idea in your mind. That becomes what you think about when you take a shower in the morning. And that means other questions aren't. [...]
You can't directly control where your thoughts drift. If you're controlling them, they're not drifting. But you can control them indirectly, by controlling what situations you let yourself get into. That has been the lesson for me: be careful what you let become critical to you. Try to get yourself into situations where the most urgent problems are ones you want think about.
So my brother was watching Bullshit, and saw an exorcist claim that whenever a kid mentions having an invisible friend, they (the exorcist) tell the kid that the friend is a demon that needs exorcising.
Now, being a professional exorcist does not give a high prior for rationality.
But still, even given that background, that's a really uncritically stupid thing to say. And it occurred to me that in general, humans say some really uncritically stupid things to children.
I wonder if this uncriticality has anything to do with, well, not expecting to be criticized...
An akrasia fighting tool via Hacker News via Scientific American based on this paper. Read the Scientific American article for the short version. My super-short summary is that in self-talk asking "will I?" rather than telling yourself "I will" can be more effective at reaching success in goal-directed behavior. Looks like a useful tool to me.
What's the deal with programming as a career? It seems like the lower levels at least should be readily accessible even to people of thoroughly average intelligence, but I've read a lot that leads me to believe the average professional programmer is borderline incompetent.
E.g., FizzBuzz. Apparently most people who come into an interview won't be able to do it. Now, I can't code or anything, but computers do only and exactly what you tell them (assuming you're not dealing with a thicket of code so dense it has emergent properties), so here's what I'd tell t...
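For readers who haven't seen it, FizzBuzz is usually stated roughly as: print the numbers 1 to 100, but print "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both. A minimal Python version (one of many acceptable answers) looks like this:

```python
# FizzBuzz: print 1..100, replacing multiples of 3 with "Fizz",
# multiples of 5 with "Buzz", and multiples of both with "FizzBuzz".
for n in range(1, 101):
    if n % 15 == 0:
        print("FizzBuzz")
    elif n % 3 == 0:
        print("Fizz")
    elif n % 5 == 0:
        print("Buzz")
    else:
        print(n)
```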
I have no numbers for this, but the idea is that after interviewing for a job, competent people get hired, while incompetent people do not. These incompetents then have to interview for other jobs, so they will be seen more often, and complained about a lot. So perhaps the perceived prevalence of incompetent programmers is a result of availability bias (?).
This theory does not explain why this problem occurs in programming but not in other fields. I don't even know whether that is true. Maybe the situation is the same elsewhere, and I am biased here because I am a programmer.
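The availability story is easy to simulate. In the toy model below (all the numbers are arbitrary assumptions), competent candidates are usually hired at their first or second interview and leave the pool, while incompetent candidates keep interviewing, so they make up a much larger share of interviews than of programmers:

```python
import random

random.seed(0)

# Toy model (all numbers are arbitrary assumptions): 80% of programmers are competent,
# but competent candidates are hired quickly while incompetent ones keep re-interviewing.
population = ["competent"] * 80 + ["incompetent"] * 20
p_hire = {"competent": 0.9, "incompetent": 0.1}   # assumed per-interview hire probability

interviews_seen = []
for person in population:
    while True:
        interviews_seen.append(person)        # an interviewer somewhere sees this candidate
        if random.random() < p_hire[person]:
            break                             # hired; leaves the interview pool

share = interviews_seen.count("incompetent") / len(interviews_seen)
print("Incompetent share of programmers: 20%")
print(f"Incompetent share of interviews:  {share:.0%}")
```

With these made-up hire rates, 20% of the programmers account for roughly 70% of the interviews, which is enough to make interviewers' impressions badly unrepresentative even if the underlying population is mostly competent.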
This article is pretty cool, since it describes someone running quality control on a hospital from an engineering perspective. He seems to have a good understanding of how stuff works, and it reads like something one might see on lesswrong.
Is there any philosophy worth reading?
As far as I can tell, a great deal of "philosophy" (basically the intellectuals' wastebasket taxon) consists of wordplay, apologetics, or outright nonsense. Consequently, for any given philosophical work, my prior strongly favors not reading it because the expected benefit won't outweigh the cost. It takes a great deal of evidence to tip the balance.
For example: I've heard vague rumors that GWF Hegel concludes that the Prussian State (under which, coincidentally, he lived) was the best form of human existence...
So my question is: What philosophical works and authors have you found especially valuable, for whatever reason?
You might find it more helpful to come at the matter from a topic-centric direction, instead of an author-centric direction. Are there topics that interest you, but which seem to be discussed mostly by philosophers? If so, which community of philosophers looks like it is exploring (or has explored) the most productive avenues for understanding that topic?
Remember that philosophers, like everyone else, lived before the idea of motivated cognition was fully developed; it was commonplace to have theories of epistemology which didn't lead you to be suspicious enough of your own conclusions. You may be holding them to too high a standard by pointing to some of their conclusions, when some of their intermediate ideas and methods are still of interest and value today.
However, you should be selective of who you read. Unless you're an academic philosopher, for instance, reading a modern synopsis of Kantian thought is vastly preferable to trying to read Kant yourself. For similar reasons, I've steered clear of Hegel's original texts.
Unfortunately for the present purpose, I myself went the long way (I went to a college with a strong Great Books core in several subjects), so I don't have a good digest to recommend. Anyone else have one?
More on the coming economic crisis for young people, and let me say, wow, just wow: the essay is a much more rigorous exposition of the things I talked about in my rant.
In particular, the author had similar problems to me in getting a mortgage, such as how I get told on one side, "you have a great credit score and qualify for a good rate!" and on another, "but you're not good enough for a loan". And he didn't even make the mistake of not getting a credit card early on!
Plus, he gives a lot of information from his personal experience.
Be ...
I have a question about prediction markets. I expect that it has a standard answer.
It seems like the existence of casinos presents a kind of problem for prediction markets. Casinos are a sort of prediction market where people go to try to cash out on their ability to predict which card will be drawn, or where the ball will land on a roulette wheel. They are enticed to bet when the casino sets the odds at certain levels. But casinos reliably make money, so people are reliably wrong when they try to make these predictions.
Casinos don't invalidate prediction markets, but casinos do seem to show that prediction markets will be predictably inefficient in some way. How is this fact dealt with in futarchy proposals?
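For concreteness, here is the standard expected-value calculation for a single-number bet in American roulette (38 slots, 35-to-1 payout); the payout is set below the fair odds, which is the sense in which "the casino sets the odds at certain levels" and bettors are reliably on the losing side:

```python
# Single-number bet in American roulette: 38 slots, a win pays 35-to-1.
p_win = 1 / 38
expected_value = p_win * 35 - (1 - p_win) * 1   # per $1 staked
print(f"EV per $1 bet: {expected_value:+.4f}")  # about -0.0526, i.e. a ~5.3% house edge
```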
I think you're overestimating your ability to see what exactly is wrong and how to fix it. Humans (westerners?) are biased towards thinking that improvements they propose would indeed make things better. This tendency is particularly visible in politics, where it causes the most damage.
More generally, humans are probably biased towards thinking their own ideas are particularly good, hence the "not invented here" syndrome, etc. Outside of politics, the level of confidence rarely reaches the level of threatening death and destruction if one's ideas are not accepted.
Is there a bias, maybe called the 'compensation bias', that causes one to think that any person with many obvious positive traits or circumstances (really attractive, rich, intelligent, seemingly happy, et cetera) must have at least one huge compensating flaw or a tragic history or something? I looked through Wiki's list of cognitive biases and didn't see it, but I thought I'd heard of something like this. Maybe it's not a real bias?
If not, I'd be surprised. Whenever I talk to my non-rationalist friends about how amazing persons X Y or Z are, they invariab...
Day-to-day question:
I live in a ground floor apartment with a sunken entryway. Behind my fairly large apartment building is a small wooded area including a pond and a park. During the spring and summer, oftentimes (~1 per 2 weeks) a frog will hop down the entryway at night and hop around on the dusty concrete until dying of dehydration. I occasionally notice them in the morning as I'm leaving for work, and have taken various actions depending on my feelings at the time and the circumstances of the moment.
Questions of priority - and the relative intensity of suffering between members of different species - need to be distinguished from the question of whether other sentient beings have moral status at all. I guess that was what shocked me about Eliezer's bald assertion that frogs have no moral status. After all, humans may be less sentient than frogs compared to our posthuman successors. So it's unsettling to think that posthumans might give simple-minded humans the same level of moral consideration that Eliezer accords frogs.
-- David Pearce via Facebook
Are there any possible facts that would make you consider frogs objects of moral worth if you found out they were true?
(Edited for clarity.)
I'm surprised. Do you mean you wouldn't trade off a dust speck in your eye (in some post-singularity future where x-risk is settled one way or another) to avert the torture of a billion frogs, or of some noticeable portion of all frogs? If we plotted your attitudes to progressively more intelligent entities, where's the discontinuity or discontinuities?
I'm surprised by Eliezer's stance. At the very least, it seems the pain endured by the frogs is terrible, no? For just one reference on the subject, see, e.g., KL Machin, "Amphibian pain and analgesia," Journal of Zoo and Wildlife Medicine, 1999.
Rain, your dilemma reminds me of my own struggles regarding saving worms in the rain. While stepping on individual worms to put them out of their misery is arguably not the most efficient means to prevent worm suffering, as a practical matter, I think it's probably an activity worth doing, because it builds the psychological habit of exerting effort to break from one's routine of personal comfort and self-maintenance in order to reduce the pain of other creatures. It's easy to say, "Oh, that's not the most cost-effective use of my time," but it can become too easy to say that all the time to the extent that one never ends up doing anything. Once you start doing something to help, and get in the habit of expending some effort to reduce suffering, it may actually be easier psychologically to take the efficiency of your work to the next level. ("If saving worms is good, then working toward technology to help all kinds ...
...Mankind may be crooked timber, as Kant put it, uniquely susceptible to ignorance and misinformation, but it’s an article of faith that knowledge is the best remedy. If people are furnished with the facts, they will be clearer thinkers and better citizens. If they are ignorant, facts will enlighten them. If they are mistaken, facts will set them straight.
In the end, truth will out. Won’t it?
Maybe not. Recently, a few political scientists have begun to discover a human tendency deeply discouraging to anyone with faith in the power of info
The more recent analysis I've read says that people pretty much become suicide bombers for nationalist reasons, not religious reasons.
I suppose that "There should not be American bases on the sacred soil of Saudi Arabia" is a hybrid of the two, and so might be "I wanted to kill because Muslims were being hurt"-- it's a matter of group identity more than "Allah wants it".
I don't have specifics for the 9/11 bombers.
Thanks, I often commit that mistake. I just write without thinking much, not recognizing the potential importance the given issues might bear. I guess the reason is that I mainly write out of an urge for feedback and to alleviate mental load.
It's not really meant as an excuse but rather to expose how one can use the same arguments to support a different purpose while criticizing others for those arguments and not for their differing purpose. And the bottom line is that there will have to be a tradeoff between protecting values and the violation of the same to g...
Reading Michael Vassar's comments on WrongBot's article (http://lesswrong.com/lw/2i6/forager_anthropology/2c7s?c=1&context=1#2c7s) made me feel that the current technique of learning how to write a LW post isn't very efficient (read lots of LW, write a post, wait for lots of comments, try to figure out how their issues could be resolved, write another post, etc. - it uses up lots of the writer's time and lots of the commenters' time).
I was wondering whether there might be a more focused way of doing this. I.e. a short-term workshop, a few writers who hav...
Rationality applied to swimming
The author was a lousy swimmer for a long time, but got respect because he put in so much effort. Eventually he became a swim coach, and he quickly noticed that the bad swimmers looked the way he did, and the good swimmers looked very different, so he started teaching the bad swimmers to look like the good swimmers, and began becoming a better swimmer himself.
Later, he got into the physics of good swimming. For example, it's more important to minimize drag than to put out more effort.
I'm posting this partly because it's alway...
Thought without Language Discussion of adults who've grown up profoundly deaf without having been exposed to sign language or lip-reading.
Edited because I labeled the link as "Language without Thought" -- this counts as an example of itself.
Two things of interest to Less Wrong:
First, there's an article about intelligence and religiosity. I don't have access to the papers in question right now, but the upshot is apparently that the correlation between intelligence (as measured by IQ and other tests) and irreligiosity can be explained with minimal emphasis on intelligence but rather on ability to process information and estimate your own knowledge base as well. They found for example that people who were overconfident about their knowledge level were much more likely to be religious. There may...
The selective attention test (YouTube video link) is quite well-known. If you haven't heard of it, watch it now.
Now try the sequel (another YouTube video).
Even when you're expecting the tbevyyn, you still miss other things. Attention doesn't help in noticing what you aren't looking for.
Has anyone been doing, or thinking of doing, a documentary (preferably feature-length and targeted at popular audiences) about existential risk? People seem to love things that tell them the world is about to end, whether it's worth believing or not (2012 prophecies, apocalyptic religion, etc., and on the more respectable side: climate change, and... anything else?), so it may be worthwhile to have a well-researched, rational, honest look at the things that are actually most likely to destroy us in the next century, while still being emotionally compelling...
Nobel Laureate Jean-Marie Lehn is a transhumanist.
...We are still apes and are fighting all around the world. We are in the prisons of dogmatism, fundamentalism and religion. Let me say that clearly. We must learn to be rational ... The pace at which science has progressed has been too fast for human behaviour to adapt to it. As I said we are still apes. A part of our brain is still a paleo-brain and many of reactions come from our fight or flight instinct. As long as this part of the brain can take over control the rational part of the brain (we will face
"Therefore, “Hostile Wife Phenomenon” is actually “Distant, socially dysfunctional Husband Syndrome” which manifests frequently among cryonics proponents. As a coping mechanism, they project (!) their disorder onto their wives and women in general to justify their continued obsessions and emotional failings."
Assorted hilarious anti-cryonics comments on the accelerating future thread
I'm not disputing your point vs cryonics, but 0.5 will only rarely be the best possible estimate for the probability of X. It's not possible to think about a statement about which literally nothing is known (in the sense of information potentially available to you). At the very least you either know how you became aware of X or that X suddenly came to your mind without any apparent reason. If you can understand X you will know how complex X is. If you don't you will at least know that and can guess at the complexity based on the information density you expect for such a statement and its length.
Example: If you hear someone whom you don't specifically suspect to have a reason to make it up say that Joachim Korchinsky will marry Abigail Medeiros on August 24 that statement probably should be assigned a probability quite a bit higher than 0.5 even if you don't know anything about the people involved. If you generate the same statement yourself by picking names and a date at random you probably should assign a probability very close to 0.
Basically it comes down to this: Most possible positive statements that carry more than one bit of information are false, but most methods of encountering statements are biased towards true statements.
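One way to see the last point is a toy Bayes calculation (all the numbers are illustrative assumptions): a very specific claim starts with a tiny prior, but hearing it asserted by someone with no apparent motive to invent it is strong evidence, because people rarely volunteer specific false wedding announcements, while the same sentence generated by picking random names and a date carries no such evidence.

```python
# Toy Bayes calculation -- all numbers are illustrative assumptions.
def posterior(prior, p_assert_if_true, p_assert_if_false):
    """P(claim is true | someone asserted it), by Bayes' rule."""
    joint_true = prior * p_assert_if_true
    joint_false = (1 - prior) * p_assert_if_false
    return joint_true / (joint_true + joint_false)

prior = 1e-6              # assumed prior that this specific claim is true
p_assert_if_true = 0.1    # assumed chance of hearing it mentioned if it is true
p_assert_if_false = 1e-8  # assumed chance someone invents exactly this claim if it is false

print(f"{posterior(prior, p_assert_if_true, p_assert_if_false):.0%}")
# ~91% with these made-up numbers -- far above 0.5 -- while the same sentence produced
# by picking random names and a date would stay near its tiny prior.
```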
Very interesting story about a project that involved massive elicitation of expert probabilities. Especially of interest to those with Bayes Nets/Decision analysis background. http://web.archive.org/web/20000709213303/www.lis.pitt.edu/~dsl/hailfinder/probms2.html
Machine learning is now being used to predict manhole explosions in New York. This is another example of how machine learning/specialized AI are becoming increasingly commonplace, to the point where they are being used for very mundane tasks.
They could talk about it elsewhere.
My understanding is that waitingforgodel doesn't particularly want to discuss that topic, but thinks that it's important that LW's moderation policy be changed in the future for other reasons. In that case it appears to me the best way to go about it is to try to convince Eliezer using rational arguments.
A public commitment has been made.
Commitment to a particular moderation policy?
Eliezer has a bias toward secrecy.
I'm inclined to agree, but do you have an argument that he is biased (instead of us)?
...In my obse
Slashdot having an epic case of tribalism blinding their judgment? This poster tries to argue that, despite Intelligent Design proponents being horribly wrong, it is still appropriate for them to use the term "evolutionist" to refer to those they disagree with.
The reaction seems to be basically, "but they're wrong, why should they get to use that term?"
Huh?
Have any LWers traveled the US without a car/house/lot-of-money for a year or more? Is there anything an aspiring rationalist in particular should know on top of the more traditional advice? Did you learn much? Was there something else you wish you'd done instead? Any unexpected setbacks (e.g. ended up costing more than expected; no access to books; hard to meet worthwhile people; etc.)? Any unexpected benefits? Was it harder or easier than you had expected? Is it possible to be happy without a lot of social initiative? Did it help you develop social initi...
Ironically, your comment series is evidence that censorship partially succeeded in this case. Although existential risk could increase, that was not the primary reason for suppressing the idea in the post.
I've actually speculated as to whether Eliezer was going MoR:Quirrell on us. Given that aggressive censorship was obviously going to backfire, a shrewd agent would not use such an approach if they wanted to actually achieve the superficially apparent goal. Whenever I see an intelligent, rational player do something that seems to be contrary to their interests I take a second look to see if I am understanding what their real motivations are. This is an absolutely vital skill when dealing with people in a corporate environment.
Could it be the case that Eliezer is passionate about wanting people to consider torture:AIs and so did whatever he could to make it seem important to people, even though it meant taking a PR hit in the process? I actually thought this question through for several minutes before feeling it was safe to dismiss the possibility.
http://www.damninteresting.com/this-place-is-not-a-place-of-honor
Note to reader: This thread is curiosity inducing, this is affecting your judgement. You might think you can compensate for this bias but you probably won't in actuality. Stop reading anyway. Trust me on this. Edit: Me, and Larks, and ocr-fork, AND ROKO and [some but not all others]
I say for now because those who know about this are going to keep looking at it and determine it safe/rebut it/make it moot. Maybe it will stay dangerous for a long time, I don't know, but there seems to be a dece...
...I don't post things like this because I think they're right, I post them because I think they are interesting. The geometry of TV signals and box springs causing cancer on the left sides of people's bodies in Western countries...that's a clever bit of hypothesizing, right or wrong.
In this case, an organization I know nothing about (Vetenskap och Folkbildning from Sweden) says that Olle Johansson, one of the researchers who came up with the box spring hypothesis, is a quack. In fact, he was "Misleader of the year" in 2004. What does this mean in
Who's right? Who knows. It's a fine opportunity to remain skeptical.
Bullshit. The 'skeptical' thing to do would be to take 30 seconds to think about the theory's physical plausibility before posting it on one's blog, not regurgitate the theory and cover one's ass with an I'm-so-balanced-look-there's-two-sides-to-the-issue fallacy.
TV-frequency EM radiation is non-ionizing, so how's it going to transfer enough energy to your cells to cause cancer? It could heat you up, or it could induce currents within your body. But however much heating it causes, the temperature increase caused by heat insulation from your mattress and cover is surely much greater, and I reckon you'd get stronger induced currents from your alarm clock/computer/ceiling light/bedside lamp or whatever other circuitry's switched on in your bedroom. (And wouldn't you get a weird arrhythmia kicking off before cancer anyway?)
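To put a rough number on "non-ionizing": a photon at UHF television frequencies carries only a few micro-electronvolts, millions of times less than the electronvolt-scale energies needed to break chemical bonds or ionize atoms (the 600 MHz and 10 eV figures below are round illustrative values):

```python
# Photon energy at TV broadcast frequencies vs. the energy scale needed to ionize atoms.
h = 6.626e-34        # Planck constant, J*s
eV = 1.602e-19       # joules per electronvolt

f_tv = 600e6                      # ~600 MHz, an illustrative UHF television frequency
photon_energy_eV = h * f_tv / eV  # E = h*f

ionization_scale_eV = 10.0        # rough order of magnitude for ionization / bond breaking

print(f"TV photon energy: {photon_energy_eV:.1e} eV")               # ~2.5e-06 eV
print(f"Ionization scale: {ionization_scale_eV:.0f} eV")
print(f"Shortfall factor: {ionization_scale_eV / photon_energy_eV:.1e}")
```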
(As long as I'm venting, it's at least a little silly for Kottke to say he's posting it because it's 'interesting' and not because it's 'right,' because surely it's only interesting because it might be right? Bleh.)
it's at least a little silly for Kottke to say he's posting it because it's 'interesting' and not because it's 'right'
Yup, that's the bit I thought made it appropriate for LW.
It reminded me of my speculations on "asymmetric intellectual warfare" - we are bombarded all day long with things that are "interesting" in one sense or another but should still be dismissed outright, if only because paying attention to all of them would leave us with nothing left over for worthwhile items.
But we can also note regularities in the patterns of which claims of this kind get raised to the level of serious consideration. I'm still perplexed by how seriously mainstream media takes claims of "electrosensitivity", but not totally surprised: there is something that seems "culturally appropriate" to the claims. The rate at which cell phones have spread through our culture has made "radio waves" more available as a potential source of worry, and has tended to legitimize a particular subset of all possible absurd claims.
If breast cancer and melanomas are more likely on the left side of the body at a level that's statistically significant, that's interesting even if the proposed explanation is nonsense.
(So this is just about the first real post I made here and I kinda have stage fright posting here, so if it's horribly bad and uninteresting, please tell me what I did wrong, ok? Also, I've been trying to figure out the spelling and grammar and failed, sorry about that.) (Disclaimer: This post is humorous, and not everything should be taken all too seriously! As someone (Boxo) reviewing it put it: "it's like a contest between 3^^^3 and common sense!")
1) My analysis of http://lesswrong.com/lw/kn/torture_vs_dust_specks/
Let's say 1 second of tort...
Sparked by my recent interest in PredictionBook.com, I went back to take a look at Wrong Tomorrow, a prediction registry for pundits - but it's down. And it doesn't seem to have been active recently.
I've emailed the address listed on the original OB ANN for WT, but while I'm waiting on that, does anyone know what happened to it?
UDT/TDT understanding check: Of the 3 open problems Eliezer lists for TDT, the one UDT solves is counterfactual mugging. Is this correct? (A yes or no is all I'm looking for, but if the answer is no, an explanation of any length would be appreciated)
Something I wonder about is just how many people on LW might have difficulties with the metaphors used.
An example: In http://lesswrong.com/lw/1e/raising_the_sanity_waterline/, I still haven't quite figured what a waterline is supposed to mean in that context, or what kind of associations the word has, and neither had someone else I asked about that.
Are there any Less Wrongers in the Grand Rapids area that might be interested in meeting up at some point?
This is my PGP public key. In the future, anything I write which seems especially important will be signed. This is more for signaling purposes than any fear of impersonation -- signing a post is a way to strongly signal its seriousness.
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.7 (Cygwin)
mQGiBExOb4IRBAClNdK7kU0hDjEnR9KC+ga8Atu6IJ5pS9rKzPUtV9HWaYiuYldv
VDrMIFiBY1R7LKzbEVD2hc5wHdCUoBKNfNVaGXkPDFFguJ2D1LRgy0omHaxM7AB4
woFmm4drftyWaFhO8ruYZ1qSm7aebPymqGZQv/dV8tSzx8guMh4V0ree3wCgzaVX
wQcQucSLnKI3VbiyZQMAQKcEAI9aJRQoY1WFWaGDsAzCKBHtJIEooc+3+/2S
... Given all the recent discussion of contrived infinite torture scenarios, I'm curious to hear if anyone has reconsidered their opinion of my post on Dangerous Thoughts. I am specifically not interested in discussing the details or plausibility of said scenarios.
Interesting thread on self-control over on reddit
http://www.reddit.com/r/cogsci/comments/ctliw/is_there_any_evidence_suggesting_that_practicing/
Do you like the LW wiki page (actually, pages) on Free Will? I just wrote a post to Scott Aaronson's blog, and the post assumed an understanding of the compatibilist notion of free will. I hoped to link to the LW wiki, but when I looked at it, I decided not to, because the page is unsuitable as a quick introduction.
EDIT: Come over, it is an interesting discussion of highly LW-relevant topics. I even managed to drop the "don't confuse the map with the territory"-bomb. As a bonus, you can watch the original topic of Scott's post: His diavlog with A...
John Hari - My Experiment With Smart Drugs (2008)
How does everyone here feel about these 'Smart Drugs'? They seem quite tempting to me, but are there candidates that have been in use long enough to be considered safe?
I figure the open thread is as good as any for a personal advice request. It might be a rationality issue as well.
I have incredible difficulty believing that anybody likes me. Ever since I was old enough to be aware of my own awkwardness, I have the constant suspicion that all my "friends" secretly think poorly of me, and only tolerate me to be nice.
It occurred to me that this is a problem when a close friend actually said, outright, that he liked me -- and I happen to know that he never tells even white lies, as a personal scruple -- and I ...
An object lesson in how not to think about the future:
http://www.futuretimeline.net/
(from Pharyngula)
I just finished polishing off a top level post, but 5 new posts went up tonight - 3 of them substantial. So I ask, what should my strategy be? Should I just submit my post now because it doesn't really matter anyway? Or wait until the conversation dies down a bit so my post has a decent shot of being talked about? If I should wait, how long?
Either I misunderstand CEV, or the above statement re: the Abrahamic god following CEV is false.
Ask yourself, wouldn't you fly a plane into a tower if that was the only way to disable Skynet?
Yes. I would also drop a nuke on New York if it were the only way to prevent global nuclear war. These are both extremely unlikely scenarios.
It's very correct to be suspicious of claims that the stakes are that high, given that irrational memes have a habit of postulating such high stakes. However, assuming thereby that the stakes never could actually be that high, regardless of the evidence, is another way of shooting yourself in the foot.
Is it my imagination, or is "social construct" the sociologist version of "emergent phenomenon"?
Something weird is going on. Every time I check, virtually all my recent comments are being steadily modded up, but I'm slowly losing karma. So even if someone is on an anti-Silas karma rampage, they're doing it even faster than my comments are being upvoted.
Since this isn't happening on any recent thread that I can find, I'd like to know if there's something to this -- if I made a huge cluster of errors on a thread a while ago. (I also know someone who might have motive, but I don't want to throw around accusations at this point.)
I tend to vote down a wide swath of your comments when I come across them in a thread such as this one or this one, attempting to punish you for being mean and wasting people's time. I'm a late reader, so you may not notice those comments being further downvoted; I guess I should post saying what I've done and why.
In the spirit of your desire for explanations, it is for the negative tone of your posts. You create this tone by the small additions you make that cause the text to sound more like verbal speech, specifically: emphasis, filler words, rhetorical questions, and the like. These techniques work significantly better when someone is able to gauge your body language and verbal tone of voice. In text, they turn your comments hostile.
That, and you repeat yourself. A lot.
So I was pondering doing a post on the etiology of sexual orientation (as a lead-in to how political/moral beliefs lead to factual ones, not vice versa).
I came across this article, which I found myself nodding along with, until I noticed the source...
Oops! Although they stress the voluntary nature of their interventions, NARTH is an organization devoted to zapping the fabulous out of gay people, using such brilliant methodology as slapping a rubber band against one's wrist every time one sees an attractive person with the wrong set of chromosomes. From the...
There is something that bothers me, and I would like to know if it bothers anyone else. I call it "Argument by Silliness".
Consider this quote from the Allais Malaise post: "If satisfying your intuitions is more important to you than money, do whatever the heck you want. Drop the money over Niagara Falls. Blow it all on expensive champagne. Set fire to your hair. Whatever."
I find this to be a common end point when demonstrating what it means to be rational. Someone will advance a good argument that correctly computes/deduces how you...
Luke Muehlhauser just posted about Friendly AI and Desirism at his blog. It tends to have a more general audience than LW, comments posted there could help spread the word. Desirism and the Singularity
Desirism and the Singularity, in which one of my favourite atheist communities is inching towards singularitarian ideas.
Looks like Emotiv's BCI is making noticeable progress (from the Minsky demo)
http://www.ted.com/talks/tan_le_a_headset_that_reads_your_brainwaves.html
but still using bold guys :)
Do the various versions of the Efficient Market Hypothesis only apply to investment in existing businesses?
The discussions of possible market blind spots in clothing make me wonder how close the markets are to efficient for new businesses.
I'm curious what people's opinions are of Jeff Hawkins' book 'On Intelligence', and specifically the idea that 'intelligence is about prediction'. I'm about halfway through and I'm not convinced, so I was wondering if anybody could point me to further proofs of this or something. Cheers.
I was examining some of the arguments for the existence of god that separate beings into contingent (exist in some worlds but not all) and necessary (exist in all worlds). And it occurred to me that if the multiverse is indeed true, and its branches are all possible worlds, then we are all necessary beings, along with the multiverse, a part of whose structure we are.
Am I retreating into madness? :D
When thinking about my own rationality I have to identify problems. This means that I write statements like "I wait too long to make decisions; see X, Y". Now I worry that by stating this as a fact I somehow anchor it more deeply in my mind, and make myself act more in accordance with that statement. Is there actually any evidence for that? And if so, how do I avoid this problem?
If an AI does what Roko suggested, it's not friendly. We don't know what, if anything, CEV will output, but I don't see any reason to think CEV would enact Roko's scenario.
What's current thought about how you'd tell that AI is becoming more imminent?
I'm inclined to think that AI can't happen before the natural language problem is solved.
I'm trying to think of conflicts between subsystems of the brain to see if there's anything more than a simple gerontocratic system of veto power (i.e. evolutionarily older parts of the brain override younger parts). Help?
I've got things like:
I think in dialogue. (More precisely, I think in dialogue about half the time, in more bland verbal thoughts a quarter of the time, and visually a quarter of the time, with lots of overlap. This also includes think-talking to myself when it seems internally that there are 2 people involved.)
Does anyone else find themselves thinking in dialogue often?
I think it probably has something to do with my narcissistic and often counterproductive obsession with other people's perceptions of me, but this hypothesis is the result of generalizing from one example. If ...
I am tentatively interpreting your remark about "not wanting to leave out those I have *plonked" as an indication that you might read comments by such individuals. Therefore, I am going to reply to this remark. I estimate a small probability (< 5%) that you will actually consider what I have to say in this comment, but I also estimate that explicitly stating that estimate increases the probability, rendering the estimate possibly as high as 10%. I estimate a much higher chance that this remark will be of some benefit to readers here, especial...
If you take this incident to its extreme, the important question is what people are willing to do in future based on the argument "it could increase the chance of an AI going wrong..."?
That is not the argument that caused stuff to be deleted from Less Wrong! Nor is it true that leaving it visible would increase the chance of an AI going wrong. The only plausible scenario where information might be deleted on that basis is if someone posted designs or source code for an actual working AI, and in that case much more drastic action would be required.
A general question about decision theory:
Is it possible to assign a non-zero prior probability to statements like "my memory has been altered", "I am suffering from delusions", and "I live in a perfectly simulated matrix"?
Apologies if this has been answered elsewhere.
I am pretty new to LW, and have been looking for something and have been unable to find it.
What I am looking for is a discussion on when two entities are identical, and if they are identical, are they one entity or two?
The context for this is continuity of identity over time. Obviously an entity that has extra memories added is not identical to an entity without those memories, but if there is a transform that can be applied to the first entity (the transform of experience over time), then in one sense the second entity can be considered to be an olde...
http://www.usatoday.com/news/offbeat/2010-07-13-lottery-winner-texas_N.htm?csp=obinsite
My prior for the probability of winning the lottery by fraud is high enough to settle the question: the woman discussed in the article is cheating.
Does anyone disagree with this?
This is a brief excerpt of a conversation I had (edited for brevity) where I laid out the basics of a generalized anti-supernaturalism principle. I had to share this because of a comment at the end that I found absolutely beautiful. It tickles all the logic circuits just right that it still makes me smile. It’s fractally brilliant, IMHO.
So you believe there is a universe where 2 + 2 = 4 or the law of noncontradiction does not obtain? Ok, you are free to believe that. But if you are wrong, I am sure that you can see that there is an or...
SiteMeter gives some statistics about number of visitors that LessWrong has, per hour/per day/per month, etc.
According to the SiteMeter FAQ, multiple views from the same IP address are considered to be the same "visit" only if they're spaced by 30 minutes or less. It would be nice to know how many visitors LessWrong has over a given time interval, where two visits are counted to be the same if they come from the same IP address. Does anyone know how to collect this information?
00:18 BTW, I figured out why Eliezer looks like a cult leader to some people. It's because he has both social authority (he's a leader figure, solicits donations) and an epistemological authority (he's the top expert, and wrote the sequences which are considered canonical).
00:18 If, for example, Wei Dai kicked Eliezer's ass at FAI theory, LW would not appear cultish
00:18 This suggests that we should try to make someone else a social authority so that he doesn't have to be.
(I hope posting only a log is okay)
Yes, that is what you said you'd do. An 0.0001% existential risk is equal to 6700 murders, and that's what you said you'd do if you didn't get your way. The fact that you didn't understand what it meant doesn't make it acceptable, and when it was explained to you, you should've given an unambiguous retraction but you didn't. You are obviously bluffing, but if I had the slightest doubt about that, then I would call the police, who would track you down and verify that you were bluffing.
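For what it's worth, the 6700 figure is just the stated risk increase multiplied by a circa-2010 world population of roughly 6.7 billion:

```python
world_population = 6.7e9        # rough 2010 figure
risk_increase = 0.0001 / 100    # "0.0001%" expressed as a probability
print(world_population * risk_increase)   # -> 6700.0
```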
I would call the police, who would track you down and verify that you were bluffing.
And you'd probably be cited for wasting police time. This is the most ridiculous statement I've seen on here in a while.
but if I had the slightest doubt about that, then I would call the police, who would track you down and verify that you were bluffing.
You appear to be confused. Wfg didn't propose to murder 6700 people. You did mathematics by which you judge wfg to be doing something as morally bad as 6700 murders. That doesn't mean he is breaking the law or doing anything that would give you the power to use the police to exercise your will upon him.
I disapprove of the parent vehemently.
This seems like a highly suboptimal solution. It's an explicit attempt to remove Roko from the top contributors list... if you/we/EY feels that's a legitimate thing to do, well then we should just do it directly. And if it isn't a legitimate thing to do, then we shouldn't do it via gaming the karma system.
Since it is all available through the link in the parent to that Wiki (for 2 days), I no longer see any reason not to post the originals:
Maybe we're now finally able to talk about the ridiculous fear associated with such fantasies. Yep, fantasies. Do I think this is possible? Yup, but if we started to worry about and censor everything ...
It is not about the moral system being incomprehensible but about the acts of the FAI. Whenever something bad happens, religious people excuse it with an argument based on "higher intention". This is the gist of what I wanted to highlight: the similarity between religious people and the true believers in the technological singularity and AIs. This is not to say it is the same. I'm not arguing about that. I'm saying that this might draw the same kind of people committing the same kind of atrocities. This is very dangerous.
If people don't like something that is happening, i.e. don't understand it, it's claimed to be a means to an end that will ultimately benefit their extrapolated volition.
People are not going to claim this in public. But I know that there are people here on LW who are disposed to extensive violence if necessary.
To be clear, I do not doubt the possibilities talked about on LW. I'm not saying they are nonsense like the old religions. What I'm on about is that the ideas the SIAI is based on, while not being nonsense, are posed to draw the same fanatic fellowship and cause the same extreme decisions.
Ask yourself, wouldn't you fly a plane into a tower if that was the only way to disable Skynet? The difference between religion and the risk of uFAI makes it even more dangerous. This crowd is actually highly intelligent and their incentive based on more than fairy tales told by goatherders. And if dumb people are already able to commit large-scale atrocities based on such nonsense, what are a bunch of highly-intelligent and devoted geeks who see a tangible danger able and willing to do? More so as in this case the very same people who believe it are the ones who think they must act themselves because their God doesn't even exist yet.