If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Vague thought: it is very bad when important scientists die (in the general sense, including mathematicians and computer scientists). I recently learned that von Neumann died at age 54 of cancer. I think it's no exaggeration to say that von Neumann was one of the most influential scientists in history and that keeping him alive even 10 years more would have been of incredible benefit to humankind.
Seems like a problem worth solving. Proposed solution: create an organization which periodically offers grants to the most influential / important scientists (or maybe just the most influential / important people period), only instead of money they get a team of personal assistants who take care of their health and various unimportant things in their lives (e.g. paperwork). This team would work to maximize the health and happiness of the scientist so that they can live longer and do more science. Thoughts?
Only tangentially related vague thought:
As I understand it, Stephen Hawking's writing speed in words per minute is excruciatingly slow, and I recall seeing in a documentary that he has a graduate student whose job is to watch as he writes and to complete his sentences/paragraphs, at which point Hawking says 'yes' or 'no'. I would think that over time this person would develop an extremely well-developed mental Hawking...
Emulators are slow due to being on different hardware than the device they are emulating. If you're also on inferior hardware to the device you're trying to emulate, it will be very slow.
That said, even a very slow Hawking emulator is a pretty cool thing to have.
It is unclear whether the intellectual output of eminent scientists is best increased by prolonging their lives through existing medical technology, rather than by increasing their productivity through time-management, sleep-optimization or other techniques. Maybe the goal of your proposed organization would be better achieved by paying someone like David Allen to teach the von Neumanns of today how to be more productive. (MIRI did something similar to this when it hired Kaj Sotala to watch Eliezer Yudkowsky as he worked on his book.)
Isn't that more a case of reversion to the mean, with the implication that it's more a random variable than anything else?
It could be anything. I know a mathematician who took advice from a very emphatic writer about not being perfectionistic about editing. This is not bad advice for commercial writers, though I don't think it necessarily applies to all of them. The problem is that being extremely picky is part of the mathematician's process for writing papers. IIRC, the result was two years without him finishing any papers.
Or there's the story about Erdos, who ran on low doses of amphetamines. A friend of his asked him to go a month without the amphetamine, and he did, but didn't get any math done during that month.
It's possible that the net effect of some sort of adviser could be good, whether for a particular scientist or for scientists in general, but it's not guaranteed.
From the same article:
A sufficiently advanced technology is indistinguishable from a rigged demonstration.
I was wondering to what extent you guys agree with the following theory:
All humans have at least two important algorithms left over from the tribal days: one which instantly evaluates the tribal status of those we come across, and another that constantly holds a tribal status value for ourselves (let's call it self-esteem). The human brain actually operates very differently at different self-esteem levels. Low-status individuals don't need to access the parts of the brain that contain the "be a tribal leader" code, so these parts of the brain are closed off to everyone except those with high self-esteem. Meanwhile, those with low self-esteem are running off of an algorithm for low-status people that mostly says "Do what you're told". This is part of the reason why we can sense who is high status so easily - those who are high status are plainly executing the "do this if you're high-status" algorithms, and those who are low status aren't. This is also the reason why socially awkward people report experiencing rare "good nights" where they feel like they are completely confident and in control (their self-esteem was temporarily elevated, giving ...
Your "running different code" approach is nice... especially paired up with the notion of "how the algorithm feels from the inside", seems to explain lots of things. You can read books about what that code does, but the best you can get is some low quality software emulation... meanwhile, if you're running it, you don't even pay attention to that stuff as this is what you are.
Incidentally, if anybody is curious why I stopped doing the Politics threads, it's because it seemed like people were -looking- for political things to discuss, rather than discussing the political things they had -wanted- to discuss but couldn't. People were still creating discussion articles which were politically oriented, so it didn't even help isolate existing political discussion.
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
I have come to adore this sentence. It feels like home. Or a television character's catchphrase.
99 life hacks around the house: http://siriuslymeg.tumblr.com/post/33738057928/99-life-hacks-to-make-your-life-easier
Some people seem to have a strong moral intuition about purity that informs many of their moral decisions, and others don't. One guess for where a purity meme might come from is that it strongly enforces behaviors that prevented disease at the time the meme was created (e.g. avoiding certain foods or STDs). This hypothesis predicts that purity memes would be strongest coming from areas and historical periods where it would be particularly easy to contract diseases, especially diseases that are contagious, and especially diseases that don't cause quick death but cause infertility. Is this in fact the case?
I have some thoughts about extending "humans aren't automatically strategic" to whole societies. I am just not sure how much of that is specific for the place where I live, and how much is universal.
Seems to me that many people believe that improvements happen magically, so you don't have to use any strategy to get them, and actually using a strategy would somehow make things worse -- it wouldn't be "natural", or something. Any data can be explained away using hindsight bias: If we have an example of a strategy bringing a positive change, we can always say that the change happened "naturally" and the strategy was superfluous. On the other hand, about a positive change not happening we can always say the problem wasn't lack of strategy, but that the change simply wasn't meant to happen, so any strategy would have failed, too.
Another argument against strategic changes is that sometimes people use a strategy and screw up. Or use a strategy to achieve an evil goal. (Did you notice it is usually the evil masterminds who use strategy to reach their goals? Or neurotic losers.) Just like trying to change yourself is "unnatural", trying to change the ...
The standard problem with using the Drake Equation and similar formulas to estimate how much of the Great Filter is in front of us and how much is behind us is the lack of good estimates for most terms. However, there are other issues also. The original version of the Drake Equation presupposes independence of variables but this may not be the case. For example, it may be that the same things that lead to a star having a lot of planets also contribute to making life more likely (say for example that the more metal rich a star is the more elements that life has a chance to form from or make complicated structures with). What are the most likely dependence issues to come up in this sort of context, or do we know so little now that this question is still essentially hopeless?
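The correlation worry can be illustrated with a toy Monte Carlo simulation (all distributions and scalings below are invented purely for illustration): if something like metallicity drives both the number of planets and the chance of life, then the product of the separate averages, which is what a naive Drake-style calculation computes, differs from the average of the product.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical shared driver: stellar metallicity (arbitrary units).
metallicity = rng.lognormal(mean=0.0, sigma=0.5, size=n)

# Both Drake-style terms depend on the same driver (made-up scalings).
planets_per_star = 2.0 * metallicity
p_life_per_planet = np.clip(0.1 * metallicity, 0.0, 1.0)

naive = planets_per_star.mean() * p_life_per_planet.mean()   # independence assumed
actual = (planets_per_star * p_life_per_planet).mean()       # shared driver kept

print(naive, actual)  # actual > naive: positive correlation inflates the product
```

With positively correlated terms the naive product underestimates the true expectation; negatively correlated terms would push the other way, so even the direction of the error depends on the dependence structure.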
I started typing something, then realized it was based on someone's claim in a forum discussion and I hadn't bothered trying to verify it.
It turns out that the information was exaggerated in such a way that, had I not bothered verifying, I would have updated much more strongly in favor of the efficacy of an organization of which he was a member. I got suspicious when Google turned up nothing interesting, so I checked the web site of said organization, which included a link to a press release regarding the subject.
Based on this and other things I've read, I conclude that this organization tends to have poor epistemic rationality skills overall (I haven't tested large groups of members; I'm comparing the few individual samples I've seen to organization policies and strategies), but the reports that they publish aren't as biased as I would expect if this were hopelessly pervasive.
(On the off chance that said person reads this and suspects that he is the subject, remember that I almost did the exact same thing, and I'm not affiliated with said organization in any way. Is there LW discussion on the tendency to trust most everything people say?)
Last night I finished writing http://www.gwern.net/Google%20shutdowns
I'd appreciate any comments or fixes before I go around making a Discussion post and everything.
Request for advice:
I need to decide in the next two weeks which medical school to attend. My two top candidates are both state universities. The relevant factors to consider are cost (medical school is appallingly expensive), program quality (reputation and resources), and location/convenience.
Florida International University Cost: I have been offered a full tuition scholarship (worth about $125,000 over four years), but this does not cover $8,500/yr in "fees" and the cost of living in Miami is high. The FIU College of Medicine's estimated yearly...
Lately there seems to be an abundance of anecdotal and research evidence in favor of refraining from masturbation and quitting porn. I am not sure the evidence is conclusive enough for me to accept the claims. The touted benefits are impressive, while the potential cons seem minimal. I would be interested in some counterarguments, and, if not too personal, I'd like to hear the thoughts of those who have tried quitting masturbation/porn.
I quit porn three weeks ago and attempted to quit masturbation but failed. Subjectively I notice that I'm paying more attention to the women around me (and also having better orgasms when I do masturbate). My main reason for doing this was not so much that I found the research convincing as that the fact that people were even thinking about porn in this particular way helped me reorient my attitude towards porn from "it's harmless" to "it's a superstimulus, it may be causing a hedonic treadmill, and I should be wary of it in the same way that I'm now wary of sugar." (There's also a second reason which is personal.)
I like sixes_and_sevens' hypothesis. Here's another one: a smallish number of people really do have a serious porn addiction and really do benefit substantially from quitting cold turkey, but they're atypical. (I don't think I fall into this category, but I still think this is an interesting experiment to run.)
General comment: I think many people on LW have an implicit standard for adopting potential self-improvements that is way too high. When you're asking for conclusive scientific evidence, you're asking for something in the neighborhood of a 90% ...
I think many people on LW have an implicit standard for adopting potential self-improvements that is way too high.
People on LW have a habit of treating posts as if LW were a peer-reviewed journal rather than a place to play with ideas.
On a slightly related note, vibrators like the Hitachi Magic Wand are probably a superstimulus for women analogous to porn for men. (of course, anyone can enjoy either type, but that is less common)
Also I agree with your general comment about self improvements, especially since it is hard to find techniques/habits that work for everyone.
Hypothesis: arbitrary long-term acts of self-control improve personal well-being, regardless of the benefits of the specific act.
There's a big difference between the physical act of masturbation, which is probably harmless and good for you in moderate amounts, and the mental act of watching porn, which seems to be what people are advocating refraining from.
Also, r/nofap is weirdly cult-like from what I've seen and probably not a good resource. For example, this is the highest upvoted post that's not a funny picture, and it seems to be making very, very exaggerated claims about the benefits of not jacking off: "If you actually stop jerking off, and I mean STOP - eliminate it as a possibilty from your life (as I and many others have) - your sex starved brain and testicles will literally lead you out into the world and between the legs of a female. It just HAPPENS. Try it, you numbskull. You'll see that I speak the truth."
Hello,
I am a young person who recently discovered Less Wrong, HP:MOR, Yudkowsky, and all of that. My whole life I've been taught reason and science but I'd never encountered people so dedicated to rationality.
I quite like much of what I've found. I'm delighted to have been exposed to this new way of thinking, but I'm not entirely sure how much to embrace it. I don't love everything I've read although some of it is indeed brilliant. I've always been taught to be skeptical, but as I discovered this site my elders warned me to be skeptical of skepticism as w...
I have been vocally anti-atheist here and elsewhere, though I was brought up as a "kitchen atheist" ("Obviously there is no God, the idea is just silly. But watch for that black cat crossing the road, it's bad luck"). My current view is Laplacian agnosticism ("I had no need of that hypothesis"). Going through the simulation arguments further convinced me that atheism is privileging one number (zero) out of infinitely many possible choices. It's not quite as silly as picking any particular anthropomorphization of the matrix lords, be it a talking bush, a man on a stick, a dude with a hammer, a universal spirit, or what have you, but still an unnecessarily strong belief.
If you are interested in anti-atheist arguments based on moral realism made by a current LWer, consider Unequally Yoked. It's as close to "intelligent, thoughtful, rational criticism" as I can think of.
There is an occasional thread here about how Mormonism or Islam is the one true religion, but the arguments for either are rarely rational.
Could you bring yourself to believe in one particular anthropomorphization, if you had good reason to (a vision? or something lesser? how much lesser?)
I find it unlikely, as I would probably attribute it to a brain glitch. I highly recommend looking at this rational approach to hypnosis by another LW contributor. It made me painfully aware of how buggy the wetware our minds run on is, and how easy it is to make it fail if you know what you are doing. Thus my prior when seeing something apparently supernatural is to attribute it to known bugs, not to anything external.
With respect to those in particular, I can't think of any experience off-hand which would raise my confidence in any of them high enough to be worth considering, though that's not to say that such experiences don't exist or aren't possible... I just don't know what they are.
Huh. That's interesting. For at least the first two I can think of a few that would convince me, and for the third I suspect that a lack of being easily able to be convinced is connected more to my lack of knowledge about the religion in question. In the most obvious way for YHVH, if everyone everywhere started hearing a loud shofar blowing and then the dead rose, and then an extremely educated fellow claiming to be Elijah showed up and started answering every halachic question in ways that resolve all the apparent problems, I think I'd be paying close attention to the hypothesis.
Similar remarks apply for Jesus. They do seem to depend strongly on making much more blatant interventions in the world then the deities generally seem to (outside their holy texts).
Technically the shofar blowing thing should not be enough sensory evidence to overcome the prior improbability of this being the God - probability of alien teenagers, etcetera - but since you weren't expecting that to happen and other people were, good rationalist procedure would be to listen very carefully to what they had to say about how your priors might've been mistaken. It could still be alien teenagers, but you really ought to give somebody a chance to explain to you how it's not. On the other hand, we can't execute this sort of super-update until we actually see the evidence, so meanwhile the prior probability remains astronomically low.
The optimal situation for you is that you've heard intelligent, thoughtful, rational criticism but your position remains strong.
The optimal situation could also be hearing intelligent, thoughtful, rational criticism, learning from it, and having a new 'strong position' incorporating the new information. (See: lightness).
I sometimes see refutations of pro-religious arguments on this site, but no refutations of good arguments.
What good arguments do you think LW hasn't talked about?
My point in posting this is simply to ask you—what, in your opinion, are the most legitimate criticisms of your own way of thinking?
Religion holds an important social and cultural role that the various attempts at rationalist ritual or culture haven't fully succeeded at filling yet.
Take seriously in what sense?
For instance, I spent about six years seriously studying up on religions and theology, because I figured that if there were any sort of supreme being concerned with the actions of humankind, that would be one of the most important facts I could possibly know. So in that sense, I take religion very seriously. But in the sense of believing that any religion has a non-negligible chance of accurately describing reality, I don't take it seriously at all, because I feel that the weight of evidence is overwhelmingly against that being the case.
What sense of "taking religion seriously" are you looking for examples of?
If your estimation of the likelihood of God is negligible, then it may as well be zero.
This doesn't follow. For example, if you recite to me a 17 million digit number, my estimate that it is prime is roughly 1 in 40 million by the prime number theorem. But if I then find out that the number was in fact 2^57,885,161 − 1, my estimate for it being prime goes up by a lot. So one can assign very small probabilities to things and still update strongly on evidence.
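The prime number theorem prior can be sketched in a couple of lines: the density of primes near N is about 1/ln(N), and 2^57,885,161 − 1 happens to have 17,425,170 decimal digits.

```python
import math

# 2**57885161 - 1 has 17,425,170 decimal digits
digits = 17_425_170

# By the prime number theorem, the density of primes near N is ~ 1/ln(N),
# so the prior that a random number of this size is prime is:
prior = 1 / (digits * math.log(10))  # ln(10**digits) = digits * ln(10)

print(prior)  # roughly 2.5e-8, i.e. about 1 in 40 million
```

Learning that the number is a verified Mersenne prime then moves the estimate from this tiny prior to near certainty, which is exactly the "small prior, strong update" point.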
There are as many atheists who have never heard a decent defense of religion as there are religious fundamentalists who have never bothered to think rationally.
This seems improbable, considering that there are vastly more religious people than atheists.
Even in the non-technical sense, he's still making a relevant counterpoint, because it's much, much harder for atheists to go without exposure to religious culture and arguments than for a religious person to go without exposure to atheist arguments or culture (insofar as such a thing can be said to exist.)
one that has a chance in a real debate.
Good arguments don't in general have a chance in a real debate, because debates are not about reasoning. But that's a nitpick.
I've seen a lot of religious people claiming to have access to strong arguments for theism, but have never seen one myself.
As JoshuaZ asks, you must have a strong argument or you wouldn't think this line of discussion was worth anything. What is it?
I'm going to second JoshuaZ here. There's a lot of disagreement among theists about what the best arguments for theism are. I'd rather not try to represent any particular argument as the best one available for theism, because I can't think of anything that theists would universally agree on as a good argument, and I don't endorse any of the arguments myself.
I would say that most atheists are at least exposed to arguments that apologists of some standing, such as C.S. Lewis or William Lane Craig, actually use.
Have most atheists honestly put thought into what if there actually was a God?
Don't know. Most probably have something better to do. I have thought about what would happen if there was a God. If it turned out that the god of the religion I was brought up in was real, then I would be destined to burn in hell for eternity. If version 1 of the same god (Yahweh) existed I'd probably also burn in hell for eternity, but I'm a bit less certain about that because the first half of my Bible talked more about punishing people while alive (well, at the start of the stoning they are alive, at least) than about the threat of torment after death. If Allah is real... well, I'm guessing there is going to be more eternal pain involved, since that is just another fork of the same counterfactual omnipotent psychopath. Maybe I'd have more luck with the religions from ancient India---so long as I can convince the gods that lesswrong Karma counts.
So yes, I've given some thought to what happens if God exists: I'd be screwed and God would still be a total dick of no moral worth.
Many won't even accept that there is a possibility, and I think this is just as dangerous as blind faith.
Assigning probability 0 or 1 ...
Do we want to be Bayesian about it? Of course we do. Let's imagine two universes. One formed spontaneously, one was created. Which is more likely to occur?
It isn't obvious that this is at all meaningful, and gets quickly into deep issues of anthropics and observer effects. But aside from that, there's some intuition here that you seem to be using that may not be shared. Moreover, it also has the weird issue that most forms of theism have a deity that is omnipotent and so should exist over all universes.
Note also that the difference isn't just spontaneity v. created. What does it mean for a universe to be created? And what does it mean to call that creating aspect a deity? One of the major problems with first cause arguments and similar notions is that even when one buys into them it is extremely difficult to jump from there to theism. Relevant SMBC.
What I mean by "deity" and "created" is that either there is a conscious, intelligent mind (I think we all agree what that means) organizing our world/universe/reality, or there isn't.
Ok. So in this context, why do you think that one universe is more likely than the other? It may help to state where "conscious" and "intelligent" and "mind" come into this argument.
And of course I'm not trying to sell you on my particular religion.
On the contrary, that shouldn't be an "of course". If you sincerely believe and think you have the evidence for a particular religion, you should present it. If you don't have that evidence, then you should adjust your beliefs.
Even if one thinks one is in a constructed universe, it in no way follows that the constructor is divine or has any other aspects one normally associates with a deity. For example, this universe could be the equivalent of a project for a 12-dimensional grad student in a wildly different universe (ok, that might be a bit much - it might just be by an 11-dimensional bright undergrad).
...I'm just trying to point out that I think there's not any more inherent reas
But I don't have any "evidence" to share with you, especially if you are committed to explaining it away, as you may not be but many people here are.
So this is a problem. In general, there are types of claims that don't easily have shared evidence (e.g. last night I had a dream that was really cool, but I forgot it almost as soon as I woke up, I love my girlfriend, when I was about 6 years old I got the idea of aliens who could only see invisible things but not visible things, etc.) But most claims, especially claims about what we expect of reality around us should depend on evidence that can be shared.
I'm young, and I myself am trying to find good, rational arguments in favor of God.
So this is already a serious mistake. One shouldn't try to find rational arguments in favor of one thing or another. One should find the best evidence for and against a claim, and then judge the claim based on that.
have never been presented with solid arguments in favor of religion. Maybe I'll manage to find some or write them myself, and maybe I'll decide that the population of Less Wrong is as closed-minded as I feared.
You may want to seriously consider that the arguments you a...
I would venture a guess that atheists who haven't put thought into the possibility of there being a god are significantly in the minority. Although there are some who dismiss the notion as an impossibility, or such a severe improbability as to be functionally the same thing, in my experience this is usually a conclusion rather than a premise, and it's not necessarily an indictment of a belief system that a conclusion be strongly held.
Some Christians say that "all things testify of Christ." Similarly, Avicenna was charged with heresy for espousing a philosophy which failed to affirm the self-evidence of Muslim doctrine. But cultures have not been known to adopt Christianity, Islam, or any other particular religion which has been developed elsewhere, independent of contact with carriers of that religion.
If cultures around the world adopted the same religion, independently of each other, that would be a very strong argument in favor of that religion, but this does not appear to occur.
Have most atheists honestly put thought into what if there actually was a God?
Many people here grew up in religious settings. Eliezer for example comes from an Orthodox Jewish family. So yes, a fair number have given thought to this.
people honestly believe they've been personally contacted by God.
Curiously many different people believe that they've been contacted by God, but they disagree radically on what this contact means. Moreover, when they claim to have been contacted by God but have something that doesn't fit a standard paradigm, or when they claim to have been contacted by something other than God, we frequently diagnose them as schizophrenic. What's the simplest explanation for what is going on here?
It's awfully easy to say they're all nutcases, but it's still easy and a bit more fair to say that they're mostly nutcases but maybe some of them are correct. Maybe. I think it's best to give it a chance at least.
Openmindedness in these respects has always seemed to me highly selective -- how openminded are you to the concept that most thunderbolts may be mere electromagnetic phenomena but maybe some thunderbolts are thrown down by Thor? Do you give that possibility a chance? Should we?
Or is it only the words that current society treats seriously e.g. "God" and "Jesus", that we should keep an open mind about, and not the names that past societies treated seriously?
You're assuming that "no God" is the null hypothesis. Is there a good, rational reason for this? One could just as easily argue that you should be an atheist if and only if it's clear that atheism is correct. Without any empirical evidence either way, is it more likely that there is some sort of Deity or that there isn't?
IMO there's no such thing as a null hypothesis; epistemology doesn't work like that. The more coherent approach is bayesian inference, where we have a prior distribution and update that distribution on seeing evidence in a particular way.
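The update step being described is just Bayes' rule; a minimal sketch, with all the numbers invented for illustration:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from a prior and the likelihood of the evidence
    under the hypothesis and under its negation."""
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# A hypothesis we initially consider unlikely (prior 1%) still gains
# ground if the evidence is 9x more likely under it than under its negation.
posterior = bayes_update(0.01, 0.9, 0.1)
print(posterior)  # ~0.083 - updated upward, but still far from certain
```

The point is that no hypothesis gets a privileged "null" status: every hypothesis starts with some prior and moves up or down as evidence comes in.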
If there were no empirical evidence either way, I'd lean towards there being an anthropomorphic god (I say this as a descriptive statement about the human prior, not normative).
The trouble is that once you start actually looking at evidence, nearly all anthropomorphic gods get eliminated very quickly, and in fact the whole anthropomorphism thing starts to look really questionable. The universe simply doesn't look like it's been touched by intelligence, and where it does, we can see that it was either us, or a stupid natural process that happens to optimize quite strongly (evolution).
So while "some sort of god"...
Because nearly all things that could exist, don't. When you're in a state where you have no evidence for an entity's existence, then odds are that it doesn't exist.
Suppose that instead of asking about God, we ask "does the planet Hoth, as portrayed in the Star Wars movies, exist?" Absent any evidence that there really is such a planet, the answer is "almost certainly not."
If we reverse this, and ask "Does the planet Hoth, as portrayed in the Star Wars movies, not exist?" the answer is "almost certainly."
It doesn't matter how you specify the question, the informational content of the default answer stays the same.
In general I don't think it's healthy to believe the opposing viewpoint literally has no case.
Do you think that young earth creationists have no substantial case? What about 9/11 truthers? Belief in astrology? Belief that cancer is a fungus (no, I'm not making that one up)? What about anything you'll find here?
The problem is that some hypotheses are wrong, and will remain wrong. There are always going to be a lot more wrong hypotheses than right ones. And in many of these cases, there are known cognitive biases which lead to the hypothesis type in question. It may help to again think about the difference between policy issues (which shouldn't be one-sided) and factual questions (which, once one understands most of the details, should be).
Well, you asked for the most legitimate criticisms of rejecting religious faith.
Religious faith is not a rational epistemology; we don't arrive at faith by analyzing evidence in an unbiased way.
I can make a pragmatic argument for embracing faith anyway, because rational epistemology isn't the only important thing in the world nor necessarily the most important (although it's what this community is about).
But if you further constrain the request to seeking legitimate arguments for treating religious faith (either in general, or that of one particular denomination) as a rational epistemology, then I can't help you. Analyzing observed evidence in an unbiased way simply doesn't support faith in YHWH as worshiped by 20th-century Jews (which is the religious faith I rejected in my youth), and I know of no legitimate epistemological criticism that would conclude that it does, nor of any other denomination that doesn't have the same difficulty.
Now, if you want to broaden your search to include not only counterarguments against rejecting religious faith of specific denominations, but also counterarguments against rejecting some more amorphous proto-religious belief like "there exis...
Okay. This may not be the kind of thing you had in mind, but the way I personally think about things:
is probably not focused enough on emotions. I'm not very good at dealing with emotions, either myself or other people's, and I imagine that someone who was better would have very different thoughts about how to deal with people both on the small scale (e.g. interpersonal relationships) and on the large scale (e.g. politics).
may overestimate the value of individuals (e.g. in their capacity to affect the world) relative to organizations.
The way this community thinks about things:
is biased too strongly in directions that Eliezer finds interesting, which I suppose is somewhat unavoidable but unfortunate in a few respects. For example, Eliezer doesn't seem to think that computational complexity is relevant to friendly AI and I think this is a strong claim.
is biased towards epistemic rationality when I think it should be more focused on instrumental rationality. This is a corollary of the first bullet point: most of the Sequences are about epistemic rationality.
is biased towards what I'll call "cool ideas," e.g. cryonics or the many-worlds interpretation of quan
I've found many intelligent atheists, and I'm sure that there are rational intellectuals out there who disagree with LW. But where are they?
If you mean rational intellectuals who are theists and disagree with LW, I cannot help you. Finding those who disagree with LW on core issues is less difficult: Robin Hanson, for example. For an intelligent individual well informed about LW culture who advocates theism, you could perhaps consider Will Newsome. Although he has, shall we say, 'become more eccentric than he once was', so I'm not sure if that'll satisfy your interest.
Together with Vallinder, I'm working on a paper on wild animal suffering. We decided to poll some experts on animal perception about their views on the likelihood that various types of animals can suffer. It now occurs to me that it might be interesting to compare their responses with those of the LW community. So, if you'd like to participate, click on one of the links below. The survey consists of only five questions and completing it shouldn't take more than a minute.
Click here if your year of birth is an even number
Click here if your year of bir
I have a question about linking sequence posts in comment bodies! I used to think it was a nice, helpful thing to do, such as citing your sources and including a convenient reference. But then it struck me that it might come off as patronizing to people that are really familiar with the sequences. Oops. Any pointers for striking a good balance?
Linking old posts helps all of the new readers who are following the conversation; this is probably more important than any effects on the person you're directly responding to.
Always err on the side of littering your comment with extra links. IME, that's more practical and helpful, and I've never personally felt irked when reading posts or comments with lots of links to basic Sequence material.
In most cases, I've found that seeing the page again actually helps me remember the key points, and helps most arguments flow more smoothly.
Anyone here have experience hiring people on sites like Mechanical Turk, oDesk, TaskRabbit, or Fiverr? What kind of stuff did you hire them to do, and how good were they at doing it? It seems like these services could be potentially quite valuable so I'd like to get an idea of what it's possible to do with them.
MIRI has hired an artist, an LW programmer, and probably some others on oDesk. One person I heard about pays people on oDesk $1/hr to just sit with him on Skype all day and keep him on task.
As a stereotypical twenty-something recent graduate I am lacking in any particular career direction. I've been considering taking various psychometric or career aptitude tests, but have found it difficult to find unbiased reports on their usefulness. Does anyone have any experience or evidence on the subject?
I have looked through this thread, bravely started by ibidem, and I have noticed what seems like a failure mode by all sides. A religious person does not just believe in God, s/he alieves in God, too, and logical arguments are rarely the best way to get through to the relevant alieving circuit in the brain. Oh, they work eventually, given enough persistence and cooperation, but only indirectly. If the alief remains unacknowledged, we tend to come up with logical counterarguments which are not "true rejections". As long as the alief is there, the...
Everyone is referring me to Absence of Evidence; I think that it's a weak argument in the first place
Do you think it's a weak argument in general, or just a weak argument with respect to religion in particular?
If the former, it would certainly help if you could explain that. If the latter, do you think that religion is a special case with respect to need for evidence, or are you simply arguing that there is evidence available to us? And if the last one, why not discuss that evidence?
Hardly anyone treats it as the only argument against religion, but for many people here it is a fully sufficient argument. You just need to apply the principle of parsimony (Occam's razor) correctly.
Now a very weak way of applying it is as follows "In the absence of evidence of a deity, a hypothesis of no god is simpler/more parsimonious than the hypothesis that there is a god. So there is no god". If that's what you think we're arguing, I can understand why you think it weak.
However, a much stronger formulation looks like this. "If there were a deity, we would reasonably expect the world to look very different from the way we find it. True, it is possible to hypothesize a deity who intervenes - and fails to intervene - in exactly the right way to create the world that we see, including the various religious beliefs within it. But such a hypothetical being involves so many ad hoc auxiliary hypotheses and wild excuses that it is highly unparsimonious. So we should not believe in such a being".
Here are some examples of the ad hoc hypotheses and excuses needed:
A god creates complex livings beings, but chooses to create them in precisely the one way (evolution by
EDIT: I am closing analysis on this poll now. Thanks to the 104 respondents.
This is a poll on a minor historical point which came up on #lesswrong
where we wondered how obscure some useless trivia was; please do not look up anything mentioned here - knowing the answers does not make you a better person, I'm just curious - and if you were reading that part of the chat, likewise please do not answer.
Do you know what a "holystone" is and is used for?
[pollid:462]
In this passage:
"Tu Mu relates a stratagem of Chu-ko Liang, who in 149 BC, w
I got a decent smartphone (SGS3) a few days ago and am looking for some good apps for LessWrong-related activities. I am particularly interested in recommendations for lifelogging apps but would look into any other type of recommendations. Also I've rooted the phone.
Would learning Latin confer status benefits?
I've recently gotten the idea in my head of taking a twelve-week course in introductory Latin, mostly for nerdy linguistic reasons. It occurs to me that learning an idiosyncratic dead language is archetypal signalling behaviour, and this fits in with my observations. The only people I know with any substantial knowledge of the language either come from privileged backgrounds and private education, or studied Classics at university (which also seems to correlate with a privileged background).
A lot of the bonding...
Would learning Latin confer status benefits?
Some, usually. But there is (almost) no chance that, if status is your goal, learning Latin is a sane approach to gaining it. Learn something social.
A monthly "Irrational Quotes" thread might be nice. My first pick would be:
Basically, Godel’s theorems prove the Doctrine of Original Sin, the need for the sacrament of penance, and that there is a future eternity.
Samuel Nigro, "Why Evolutionary Theories are Unbelievable."
Suppose I have several different points to make in response to a given comment. Do I write all of them in a single comment, or do I write each of them in a separate comment? There doesn't seem to be a universally accepted norm about this -- the former seems to be more common, but there's at least one regular here who customarily does the latter and I can't remember anyone complaining about that.
Advantages of writing separate comments:
Michael Chwe, a game theorist at UCLA, just wrote a book on Jane Austen. It combines game theory and social signaling, so it looks like it'll be on the LW interest spectrum:
...Austen’s clueless people focus on numbers, visual detail, decontextualized literal meaning, and social status. These traits are commonly shared by people on the autistic spectrum; thus Austen suggests an explanation for cluelessness based on individual personality traits. Another of Austen’s explanations for cluelessness is that not having to take another person’s perspective is a mar
To whoever implemented this:
Replies to downvoted comments are discouraged. Pay 5 Karma points to proceed anyway?
You win, sir or madam.
I had a small thought the other day. Of the various utilitarianisms I have seen, average utilitarianism appeals to me most, but it has the obvious drawback of allowing utility to be raised simply by destroying beings with less than average utility.
My thought was that maybe this could be solved by making the individual utility functions permanent in some sense, i.e., killing someone with low utility would still cause average utility to decrease if they would have wanted to live. This seems to match my intuitions on morality better than any other utilitarianism ...
I'd like some comments on the landing page of a website I am working on, Experi-org. It is to do with experimenting with organisations.
I mainly want feedback on tone and clarity of purpose. I'll work on cleaning it up more (getting a friend who is a proof reader to give it the once over), once I have those nailed down.
There is an article on impending AI and its socioeconomic consequences in the current issue of Mother Jones.
Karl Smith's reaction sounds rather Hansonian, except he doesn't try to make it sound less dystopian.
Does anyone remember a post (possibly a comment) with a huge stack of links about animal research not transferring to humans?
Hi, my name is Jason; this is my first post. I have recently been reading about two subjects here, Calibration and Solomonoff Induction; reading them together has given me the following question:
How well-calibrated would Solomonoff Induction be if it could actually be calculated?
That is to say, if one generated priors on a whole bunch of questions based on information complexity measured in bits - if you took all the hypotheses that were measured at 10% likely - would 10% of those actually turn out to be correct?
I don't immediately see why Solomonoff Inductio...
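The calibration question can be made concrete with a toy simulation. This says nothing about Solomonoff Induction itself (which is incomputable); it just illustrates what "well-calibrated" would mean for a 10% bucket, under the assumption that the predictor's stated probabilities match the true frequencies:

```python
import random

random.seed(0)

def calibration_check(assigned_p, n=100_000):
    """Toy check: among n events to which a forecaster assigns
    probability `assigned_p`, and which really occur with that
    frequency, count the observed hit rate."""
    hits = sum(random.random() < assigned_p for _ in range(n))
    return hits / n

rate = calibration_check(0.10)
# A well-calibrated predictor's 10% bucket should come out near 0.10.
```

The interesting question is whether a Solomonoff inductor's 10% bucket would behave like this toy forecaster's, rather than systematically over- or under-shooting.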
Has anyone here heard of Michael Marder and his "Plant Thinking"? There is a book being published by Columbia University Press which argues that plants need to be considered as subjects with ethical value, and as beings with "unique temporality, freedom, and material knowledge or wisdom." This is not satire. He is a research professor of philosophy at a European university.
http://www.amazon.ca/Plant-Thinking-A-Philosophy-Vegetal-Life/dp/0231161255 and here is a review http://ndpr.nd.edu/news/39002-plant-thinking-a-philosophy-of-vegetal-l...
In Gender Trouble (1990), Judith Butler
...
accommodates plants' constitutive subjectivity, drastically different from that of human beings, and describes their world from the hermeneutical perspective of vegetal ontology (i.e., from the standpoint of the plant itself)"
...
So, in addition to the "vegetal différance" and "plants' proto-writing" (112) associated with Derrida, we're told that plant thinking "bears a close resemblance to the 'thousand plateaus'" (84) of Deleuze and Guattari. At the same time, plant thinking is "formally reminiscent of Heidegger's conclusions apropos of Dasein" (95),
So it's that kind of book.
Just so everyone is clear: this is the kind of "philosophy" that, in the States or the UK, would be done only at unranked programs or in English departments.
The review literally name checks every figure of shitty continental philosophy.
Stanford University is offering a from-scratch introduction to physics, taught by Leonard Susskind.
This is a notification, not a review, since I've only listened to a few minutes of the first lecture, which is at least intriguing. I'm wondering where Susskind could go with the question of allowable laws of physics.
Has there been an attempt at a RATIONAL! Wizard of Oz? I spontaneously started writing one in dialog form, then realized I would need to scrap it and start over with actual planning if I wanted to keep going. I like this idea, but I'm not sure how motivated I am to go through with it; I'd rather read an existing such fic, if one exists.
There's an argument in the metaethics sequence, to the effect that there are no universally compelling moral arguments. This argument seems to be an important cached thought (I don't mean that in any pejorative sense) in LW discussions of morality. This argument also seems to me to be faulty. Can anyone help me see what I'm missing?
The argument is from No Universally Compelling Arguments:
...Yesterday, I proposed that you should resist the temptation to generalize over all of mind design space. If we restrict ourselves to minds specifiable in a trillion bi
I don't see how your P1 is a statement over all minds; it looks more like a statement over most arguments.
Does anyone know the terms for the positions for and against in the following scenario?:
Let's assume you have a one in a million chance of winning the lottery. Despite the poor chance, you pay five dollars to enter, and you win a large sum of money. Was playing the lottery the right choice?
Well, I would call them "expected value" and "hindsight".
Hindsight says, "Because we got a good result, it's all good."
Expected value says, "We got lucky, and cannot expect to get lucky again."
And then it says, "We learned something about the random variables that led to that lottery draw. This doesn't generalize well."
I don't know if there are terms for the positions, but it seems pretty obvious that this is just a question of how you define "right choice". Not playing the lottery was the choice that seemed to maximize your utility given your knowledge at the time. Playing the lottery was the choice that actually maximized your utility. Which one you decide to call "right" is up to you. I think calling the former right is a little more useful because it describes how to actually make decisions, while the latter is only useful for looking back on decisions and evaluating them.
In decision theory, the "goodness" or "badness" of a decision is divorced from its actual outcome. Buying the lottery ticket was a bad decision regardless of whether you win.
However, don't forget that the utility you assign to money doesn't scale linearly with the amount. People tend to forget this. There is no rule that says you have to accept a certain $10 in exchange for a 10% chance at $100; on the contrary, it would be unusual to have a perfectly linear utility function in terms of money.
It's possible that your valuation of $5 is essentially 'nothing,' while your valuation of $1 million is 'extremely high.' If you'll permit me to construct a ridiculous scenario: let's say that you're guaranteed an income of $5 a day by the government, that you have no other way of obtaining steady income due to a disability, and that your living expenses are $4.99 per day. You will never be able to save $1 million; even if you save 1c per day and invest it as intelligently as possible, you will probably never accumulate $1 million. Let's further assume that you will be significantly happier if you could buy a particular house which costs exactly $1 million...
I have one particular project I'd like to work on, that seems like it should be horribly quick and easy--done and out the door in a week. I've tried starting it a number of times, and hit one of the most unpleasant Ugh Fields squatting in my mindscape (blah, even essay-related Ugh Fields I broke through well enough to complete several college courses a few times).
I'm considering just paying a competent programmer to do it. I'd probably try finding someone on oDesk, if I/someone else doesn't get to it before then.
The project is a relatively simple image viewi...
About thinking like a Slytherin - never take things at face value. Don't answer the surface question, answer the query that motivated the question.
When is it appropriate to move a post to Main?
When is it appropriate to submit a post to main initially?
I initially thought I would really like this article on consciousness after death. I did not. The guy comes off as a complete crackpot, given my understanding of neurobiology. (Although I won't dispute his overall point, nor would many here, I think, that we continue to exist for a bit after we are legally dead.) I would appreciate anyone who is so motivated looking up some things on why a lot of the things he says are completely bogus. I replied to the person who sent me this article with a fairly superficial analysis, but if anyone knows of some solid stu...
Hello, I am a young person who recently discovered Less Wrong, HP:MOR, Yudkowsky, and all of that. My whole life I've been taught reason and science but I'd never encountered people so dedicated to rationality.
I quite like much of what I've found. I'm delighted to have been exposed to this new way of thinking, but I'm not entirely sure how much to embrace it. I don't love everything I've read although some of it is indeed brilliant. I've always been taught to be skeptical, but as I discovered this site my elders warned me to be skeptical of skepticis
... I notice that most of the innovation in game accessibility (specifically accessibility to the visually impaired) comes from sighted or formerly-sighted developers. I feel like this is a bad thing. I'm not sure why I feel this way, considering that the source of innovation is less important than that it happens. Maybe it's a sort of egalitarian instinct?
(To clarify, I mean innovation in indie games like those in the audiogames.net database. Mainstream console/PC games have so little innovation toward accessibility as to be negligible, so far as I can tell.)
Have you adjusted for (what I assume is) the fact that most game developers are sighted? In fact, have you checked whether there even exist any not-even-formerly-sighted game developers? It seems like that would be a tough row to hoe even by the standards of blind-from-birth life.
That aside, I'm really not seeing the problem here. You're going to complain about people being altruistic towards the visually impaired? Really confused about your thought process.
I see. I apologize; I missed this the first time you said it.
So, on your view, what does it mean to evaluate evidence reliably, if not that sufficiently reliable evaluations of given evidence will converge on the same confidence in given propositions? What does it mean for a methodology to be correct, if not that it leads a system that implements it to a given confidence in given propositions given evidence?
Or, to put it differently... well, let's back up a step. Why should anyone care about evaluating evidence reliably? Why not evaluate it unreliably instead, or not bother evaluating it at all?