If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


The current top post on /r/HPMOR is a proposal that using babies to make horcruxes is a net ethical positive. You’d do well on LessWrong.com, Ilverin.

-EY

I agree but man does EY/MIRI need a better PR agent.

[-][anonymous]11y140

It is known.

(oh, oops, wrong fandom.)

Vague thought: it is very bad when important scientists die (in the general sense, including mathematicians and computer scientists). I recently learned that von Neumann died at age 54 of cancer. I think it's no exaggeration to say that von Neumann was one of the most influential scientists in history and that keeping him alive even 10 years more would have been of incredible benefit to humankind.

Seems like a problem worth solving. Proposed solution: create an organization which periodically offers grants to the most influential / important scientists (or maybe just the most influential / important people period), only instead of money they get a team of personal assistants who take care of their health and various unimportant things in their lives (e.g. paperwork). This team would work to maximize the health and happiness of the scientist so that they can live longer and do more science. Thoughts?

[-][anonymous]11y260

Only tangentially related vague thought:

As I understand it, Stephen Hawking's words-per-minute in writing is excruciatingly slow, and as a result I recall seeing in a documentary that he has a graduate student whose job is to watch as he is writing and to complete his sentences/paragraphs, at which point Hawking says 'yes' or 'no'. I would think that over time this person would develop an extremely well-developed mental Hawking...

Emulators are slow due to being on different hardware than the device they are emulating. If you're also on inferior hardware to the device you're trying to emulate, it will be very slow.

That said, even a very slow Hawking emulator is a pretty cool thing to have.

It is unclear whether the intellectual output of eminent scientists is best increased by prolonging their lives through existing medical technology, rather than by increasing their productivity through time-management, sleep-optimization or other techniques. Maybe the goal of your proposed organization would be better achieved by paying someone like David Allen to teach the von Neumanns of today how to be more productive. (MIRI did something similar to this when it hired Kaj Sotala to watch Eliezer Yudkowsky as he worked on his book.)

5asr11y
There is something comically presumptuous about this statement. Von Neumann had very unusual work habits (he liked noise and distraction). He was also phenomenally productive (how many branches of mathematics have YOU helped invent?). Given that he was (A) smarter and (B) more successful than any life coach you are likely to find, I would be surprised if this sort of coaching added value.
4Pablo11y
I deleted the remark about von Neumann while you were composing your reply, after a quick Google search revealed no support for it. (I seem to remember a quote by von Neumann himself where he lamented that his lack of focus had prevented him from being much more productive as a scientist, but this is a very vague memory and I'm now unwilling to rest any claims on it.) For what is worth, here are some relevant remarks on von Neumann's work habits by Herman Goldstine, which contradict my earlier (and now retracted) statement:
1asr11y
Ah. The thing I thought you had in mind is that he liked to work in a noisy distracting environment. (http://en.wikipedia.org/wiki/John_von_Neumann#Personal_life) Which wouldn't work for most people, but evidently did for him.
4NoSignalNoNoise11y
Anyone who has managed to become an eminent scientist is probably doing a pretty good job at things like time management. Since maintaining healthy habits is not a prerequisite for attaining eminence, that is more likely to be an area where they're lacking.
1Pablo11y
Perhaps the word "eminent" was inappropriate: I meant, more generally, people with the ability to produce extremely valuable intellectual work and who have to some degree already produced that kind of work. This description could apply to people who haven't attained eminence in the traditional sense, but have still demonstrated the required brilliance. Eliezer is, again, a good example: he says, I believe, that he does serious work for only a couple of hours per day (I'm not entirely sure about this, and I'm happy to be corrected), and is as such someone who could benefit from a productivity or time-management coach. Another example that comes to mind is Saul Kripke, who is widely regarded as one of the smartest philosophers alive and the author of one of the most influential philosophical works of the past century (Naming and Necessity), and yet has produced very little output in large part because of lack of discipline.
9shminux11y
"Most influential/important scientists" would likely tell this organization exactly where to go and how fast. They are usually not short on cash and can handle their own affairs. Or their partners/secretaries do that already. Some eccentric ones might not, but they are even more likely to reject this "help". I am also wondering whom you would name as top 5 or so "important scientists"?
7[anonymous]11y
My thoughts exactly. Most of the high-level mathematicians I know are loath to off-load their travel arrangements onto the department travel agent, even though the process is more efficient.
5NancyLebovitz11y
This, about pursuing varied movement, might offer intrinsic motivation to a few.
4Qiaochu_Yuan11y
Maybe, but this isn't their comparative advantage. They could spend some time becoming an expert on health, but it makes much more sense to have a health expert take care of the health stuff. I expect there are enough trivial inconveniences along the way that even academics with the money don't do this, and that seems very bad. I see no particular reason that the partner of an influential scientist ought to be particularly knowledgeable about health. And do academics even have personal secretaries anymore? I haven't observed any such people in my limited experience in academia so far. Dunno. This is out of my domain.
5Zaine11y
If they have an administrative position, yes.
6RolfAndreassen11y
A more straightforward approach: Give a prize to every leading scientist who reaches 70, 80, and 90 years of age. It is counter-intuitive, but it seems that monetary incentives do actually influence people's mortality. Source: I remember reading this somewhere, so it must be true.
3maia11y
Isn't there a known phenomenon where, for example, Nobel prize winners get significantly less productive after they win their prizes? Is it really true that the marginal benefit of keeping old scientists alive longer would be that great?
[-][anonymous]11y160

Isn't that more a case of reversion to the mean, with the implication that it's more a random variable than anything else?

5Qiaochu_Yuan11y
Maybe. Feynman talks about scientists getting less productive once they move to the IAS. But 10 years of a less productive von Neumann still beats 10 years of a dead one, I think. (Edit: It's less clear whether 10 years of a productive von Neumann and then 10 years of a dead von Neumann beats 20 years of a less productive von Neumann, I guess.)
2John_Maxwell11y
It's an interesting coincidence that JvN had both eidetic memory and extraordinary powers of mental computation. Given Hans Bethe: "I have sometimes wondered whether a brain like von Neumann's does not indicate a species superior to that of man", does anyone think maybe von Neumann had some kind of unusual hardware-level brain mutation that simultaneously made him super smart and super-good at remembering things? (Any interesting implications for the basis of human intelligence differences and thus the intelligence explosion?) Or was it the combination of extreme memory powers and computational powers that allowed JvN to achieve such fame in the first place? Also, how hard would it be to harvest genetic material from von Neumann's grave and create a zombie von Neumann? Edit: wait, looks like he might have had some worrisome views on nukes. Though is that just hindsight bias on my part?
1FiftyTwo11y
This seems to be effectively what universities and research groups do. Providing administrative assistance, psychological support etc. to specialist researchers. (While they don't normally provide medical care themselves they often pay for health insurance.) What would your proposed organisation do that they don't?
2Qiaochu_Yuan11y
It would be aggressively personalized, e.g. I don't think even universities and research groups will just straight up do your taxes or plan your meals.
1jooyous11y
Would important scientists still do science at the same level of quality if all their stuff was aggressively personalized? I can think of a couple of mechanisms that might kick in. They might work harder because they feel like they have to match the help they're receiving in scientific output. But they might also take the assistance as a sign that they're great and valuable and start slacking off, like ... divas? Also, from what I've seen/read, I think Japanese culture has this type of system for elders/experts in various fields. Maybe it applies to scientists?
5NancyLebovitz11y
Another risk is that what the helpers think is good for the scientist actually interferes with the scientists' work.
5jooyous11y
Like if the scientists get their best thinking done while chopping carrots or something? I was about to write about how it might feel weird to have someone else do tasks that you're perfectly capable of doing. Or maybe scientists might feel used (objectified?) that society only values them for their output if there's assistants constantly yanking away any non-science and saying, "Sir, please get back to your work!" But then I realized that this could be overcome by having the scientists decide on exactly which chores need to be done. However, that leads to the overhead of explaining to someone how you want something done, which is sometimes more annoying than just doing it yourself.

It could be anything. I know a mathematician who took advice from a very emphatic writer about not being perfectionistic about editing. This is not bad advice for commercial writers, though I don't think it necessarily applies to all of them. The problem is that being extremely picky is part of the mathematician's process for writing papers. IIRC, the result was two years without him finishing any papers.

Or there's the story about Erdős, who ran on low doses of amphetamines. A friend of his asked him to go a month without the amphetamine, and he did, but didn't get any math done during that month.

It's possible that the net effect of some sort of adviser could be good, whether for a particular scientist or for scientists in general, but it's not guaranteed.

-1Yuyuko11y
Oh, but some of them are such excellent company! Feynman was such a charming raconteur when he came to visit in 1989...
2Leonhart11y
I know I shouldn't encourage novelty roleplaying accounts; but Feynman visiting Gensokyo is now canon for me. (Now, with whom shall I ship him...)
1gwern11y
A certain kappa comes to mind.

From the same article:

A sufficiently advanced technology is indistinguishable from a rigged demonstration.

I was wondering to what extent you guys agree with the following theory:

All humans have at least two important algorithms left over from the tribal days: one which instantly evaluates the tribal status of those we come across, and another that constantly holds a tribal status value for ourselves (let's call it self-esteem). The human brain actually operates very differently at different self-esteem levels. Low-status individuals don't need to access the parts of the brain that contain the "be a tribal leader" code, so this part of the brain is closed off to everyone except those with high self-esteem. Meanwhile, those with low self-esteem are running off of an algorithm for low-status people that mostly says "Do what you're told". This is part of the reason why we can sense who is high status so easily - those who are high status are plainly executing the "do this if you're high-status" algorithms, and those who are low status aren't. This is also the reason why socially awkward people report experiencing rare "good nights" where they feel like they are completely confident and in control (their self-esteem was temporarily elevated, giving ...

Yep. As I understand it, this is part of standard PUA advice.

Your "running different code" approach is nice... especially paired up with the notion of "how the algorithm feels from the inside", seems to explain lots of things. You can read books about what that code does, but the best you can get is some low quality software emulation... meanwhile, if you're running it, you don't even pay attention to that stuff as this is what you are.

[-][anonymous]11y100

tldr: Fake it till you make it.

9A1987dM11y
Yes, IME that's very close to the truth. I think that's the “less strong version” of this comment that people were talking of. The Blueprint Decoded puts it as ‘when you [feel low-status], you don't give yourself permission to [do high-status stuff]’. (I also seem to recall phonetician John C. Wells claiming that it's not like working-class people don't know what upper-class people speak like, it's just that they don't want to speak like that because it'd sound too posh for them.)
9Unnamed11y
Related research: Mark Leary's sociometer theory and Amy Cuddy on power posing.
8Adele_L11y
I've had a similar idea that perceived self status was the primary difference between skill/comfort at public speaking. I think the theory might be a good first approximation, but that there is a lot more going on too.
7RomeoStevens11y
A possible reason rejection therapy has positive spillover effects. When, contra your expectations, people agree to all sorts of weird requests from you, it signals to you that you are high status.
6wedrifid11y
Note that the flip side is that (perception of personal) high status can make you stupid, for analogous reasons to the ones you give here.
6gwern11y
Have you considered looking into the psychology literature? http://lesswrong.com/lw/dtg/notes_on_the_psychology_of_power/
1gothgirl42066611y
Yeah, I plan on investigating to see how much support this theory has going for it sometime in the future, but obviously it's easier to sit around in your chair thinking and coming up with theories than it is to actually do research. d: The article you linked to looks like a great starting point though, thank you!
4DaFranker11y
Onwards to find a combination of electrical impulses or chemicals one can pump into the brain to keep it permanently in high-status mode!
2A1987dM11y
“Dutch courage”? :-)
4DaFranker11y
O.O Hell, that's actually a reasonable avenue of research! Clearly for some people, alcohol does something to their brain which flips that switch. Time to drag in a few hundred street drunkards for a clinical study!
5maia11y
Based on what I know about double-blind tests with alcohol... I'd guess what it does for most people is give them an excuse :) But hey, a placebo effect is still an effect.
3Manfred11y
Way simplified; people are not only complicated, but different from each other. If one stripped away the big claims and just left a correctly-sized claim about human brains, that would be better.
1lucidian11y
I think it's a grave mistake to equate self-esteem with social status. Self-esteem is an internal judgment of self-worth; social status is an external judgment of self-worth. By conflating the two, you surrender all control of your own self-worth to the vagaries of the slavering crowd. Someone can have high self-esteem without high social status, and vice versa. In fact, I might expect someone with a strong internal sense of self-worth to be less interested in seeking high social status markers (like a fancy car, important career, etc.).

When I say "a strong internal sense of self-worth", I guess I mean self-esteem that does not come from comparing oneself with others. It's the difference between saying "I'm proud of myself because I coded this piece of software that works really well" and "I'm proud of myself because I'm a better programmer than Steve is."

From what I can tell, the internal kind of self-worth comes from having values, and sticking to them. So if I value honesty, hard work, ability to cook, etc., then I can be proud of myself for being an honest hard-working person who knows how to cook, regardless of whether anyone else shares these traits. Also, I think internal self-worth comes from completing one's goals, or contributing something useful to the world, both of which explain why someone can be proud of coding a great piece of software.

(Sometimes I wonder whether virtue ethicists have more internal motivation/internal self-worth, while consequentialists have more external motivation/external self-worth.)

(It seems that people of my generation (I'm 23) have less internal self-worth than people have had in the past. If this is true, then I'm inclined to blame consumerist culture and the ubiquity of social media, but I dunno, maybe I'm just a proto-curmudgeon.)

Anyway, your theory about there being a "high self-esteem algorithm" and a "low self-esteem algorithm" seems like a reasonable enough model. And the use of these algorithms may very well
1gothgirl42066611y
Yeah, I was using the term self-esteem in a specific sense to mean "the result of some primitive algorithm in the brain that attempts to compute your tribal status". I tried to find some alternative term to call the result of this algorithm to prevent this exact confusion, but everything I could come up with was awkward. Maybe "status meter"? I agree with you in that I think there's only a moderate correlation between the result of this algorithm and a person's self-worth as it's usually understood.

I don't really agree with this, assuming that I'm right in reading you as saying "A low-status person can hack their brain into running off the high-status algorithm by developing a strong sense of self-worth." At least it's not true for me personally. To be completely honest, I think I'm very intelligent and creative, and I do spend a sizeable chunk of every day working on my major life goals, which I enjoy doing. But at the same time, I would definitely say I'm running off of a low-status algorithm in most of my interactions. And even self-esteem purely in social interactions doesn't really seem to help my "status meter". For example, when I lost my virginity, I thought that it would make talking to girls much easier in the future. But this didn't really happen at all.

Yeah, now that I think about it, this seems like the weakest link in my argument. I imagine most people fluidly switch from low status to high status algorithms on a regular basis depending on who they're interacting with. But maybe there's also a sort of larger meter somewhere in the brain that maintains a more constant level and guides long-term behavior? I don't know. Thank you for your response, though - this is definitely the most interesting response I've gotten for this comment. :)

Incidentally, if anybody is curious why I stopped doing the Politics threads, it's because it seemed like people were -looking- for political things to discuss, rather than discussing the political things they had -wanted- to discuss but couldn't. People were still creating discussion articles which were politically oriented, so it didn't even help isolate existing political discussion.

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

I have come to adore this sentence. It feels like home. Or a television character's catchphrase.

9John_Maxwell11y
That's actually discussed in Thinking, Fast and Slow... familiar things that are cognitively easy to process feel nice.

Anyone know why Jaan Tallinn is an investor in this? I don't see anything on their site about a friendliness emphasis. Is he following Shane Legg's advice here? Is that also why Good Ventures are involved, or do they just want to make a profit?

0gyokuro11y
The recent xkcd suggests that small hacks have large time-saving potential.
2John_Maxwell11y
Sample size of one here, but I'm pretty sure I looked through all 99 a year ago or something and it was time wasted.

Some people seem to have a strong moral intuition about purity that informs many of their moral decisions, and others don't. One guess for where a purity meme might come from is that it strongly enforces behaviors that prevented disease at the time the meme was created (e.g. avoiding certain foods or STDs). This hypothesis predicts that purity memes would be strongest coming from areas and historical periods where it would be particularly easy to contract diseases, especially diseases that are contagious, and especially diseases that don't cause quick death but cause infertility. Is this in fact the case?

6fubarobfusco11y
A contrary hypothesis: Strong moral intuitions about purity do not carry significant useful knowledge about disease — and indeed can lead people to be resistant to accurate information about disease prevention. Rather, these intuitions stem from practices for maintaining group identity by refusing to share food, accommodations, or sexuality with members of rival groups. These are (memetically) selected-for because groups that do not maintain group identity cease to be groups. (This is not "group selection" — it's not that the members of these groups die out; it's that they blend in with others.) Thus, we should expect purity memes to be strongest among people whose groups feel economically or politically threatened by foreigners, by different ethnic groups (including the threat of assimilation) or the like — and possibly weakest among world travelers, members of mixed-race or interfaith families, international traders, career diplomats, foreign correspondents, and others who benefit from engaging with foreigners or different ethnic groups.
1A1987dM11y
How is it contrary? It seems mostly orthogonal to me: all four quadrants of (high pathogen threat, low pathogen threat) x (high foreigner threat, low foreigner threat) seem possible to me. Probably not exactly orthogonal, but it's not immediately obvious to me what the sign of the correlation coefficient would be.
3A1987dM11y
Not exactly the same question, but see here. (Short answer: yes.)

I have some thoughts about extending "humans aren't automatically strategic" to whole societies. I am just not sure how much of that is specific for the place where I live, and how much is universal.

Seems to me that many people believe that improvements happen magically, so you don't have to use any strategy to get them, and actually using a strategy would somehow make things worse -- it wouldn't be "natural", or something. Any data can be explained away using hindsight bias: If we have an example of a strategy bringing a positive change, we can always say that the change happened "naturally" and the strategy was superfluous. On the other hand, about a positive change not happening we can always say the problem wasn't lack of strategy, but that the change simply wasn't meant to happen, so any strategy would have failed, too.

Another argument against strategic changes is that sometimes people use a strategy and screw up. Or use a strategy to achieve an evil goal. (Did you notice it is usually the evil masterminds who use strategy to reach their goals? Or neurotic losers.) Just like trying to change yourself is "unnatural", trying to change the ...

0TheOtherDave11y
Given the abstracted tone you seem to be trying to go for here, you might consider modifying the examples in your fourth paragraph to point to more widely separated points in subculture-space, so as to reduce the chance that an uncharitable reader might interpret this as a defensive reaction to how some particular subculture is often treated.

The standard problem with using the Drake Equation and similar formulas to estimate how much of the Great Filter is in front of us and how much is behind us is the lack of good estimates for most terms. However, there are other issues also. The original version of the Drake Equation presupposes independence of variables but this may not be the case. For example, it may be that the same things that lead to a star having a lot of planets also contribute to making life more likely (say for example that the more metal rich a star is the more elements that life has a chance to form from or make complicated structures with). What are the most likely dependence issues to come up in this sort of context, or do we know so little now that this question is still essentially hopeless?
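One way to see why dependence matters: if two factors in the product are positively correlated, multiplying their individual averages understates the average of their product. Here is a minimal Monte Carlo sketch of that effect; the distributions and the "metallicity" driver are entirely invented for illustration, not an actual model of the Drake terms.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical shared driver: e.g. stellar metallicity influencing both
# the number of habitable planets and the chance life arises on one.
metallicity = rng.normal(0.0, 1.0, n)

# Two toy Drake-style factors, both increasing in the shared driver.
planets = np.exp(0.5 * metallicity + rng.normal(0.0, 0.3, n))
p_life = 1 / (1 + np.exp(-(metallicity + rng.normal(0.0, 0.3, n))))

# Independence assumption: multiply the means of the separate factors.
independent_estimate = planets.mean() * p_life.mean()

# Joint estimate: average the product, which preserves the correlation.
joint_estimate = (planets * p_life).mean()

print(independent_estimate, joint_estimate)
```

With positive correlation the joint estimate exceeds the product of means (E[XY] = E[X]E[Y] + Cov(X, Y)), so an "independent" Drake calculation would be biased low in this toy case; negatively correlated terms would bias it high.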

I started typing something, then realized it was based on someone's claim in a forum discussion and I hadn't bothered trying to verify it.

It turns out that the information was exaggerated in such a way that, had I not bothered verifying, I would have updated much more strongly in favor of the efficacy of an organization of which he was a member. I got suspicious when Google turned up nothing interesting, so I checked the web site of said organization, which included a link to a press release regarding the subject.

Based on this and other things I've read, I conclude that this organization tends to have poor epistemic rationality skills overall (I haven't tested large groups of members; I'm comparing the few individual samples I've seen to organization policies and strategies), but the reports that they publish aren't as biased as I would expect if this were hopelessly pervasive.

(On the off chance that said person reads this and suspects that he is the subject, remember that I almost did the exact same thing, and I'm not affiliated with said organization in any way. Is there LW discussion on the tendency to trust most everything people say?)

9Richard_Kennaway11y
This older post is relevant.
4Document11y
I initially misread this as saying you were impressed with his persuasive skill and strongly tempted to update on the organization's effectiveness based on that.
0CAE_Jones11y
Ah, sorry. Clarity is far from my strongest skill. Any recommendations on how I might improve that would be very welcome.
0Document11y
In this case it was my fault for not reading closely; I was really commenting on the irony that the reverse of what you said was an equally if not more plausible LW comment.

Last night I finished writing http://www.gwern.net/Google%20shutdowns

I'd appreciate any comments or fixes before I go around making a Discussion post and everything.

0Douglas_Knight11y
Google kinda, sorta, lets you search the past, under "search tools." I think it filters pages by date of creation, but searches current text, so that "recent posts" type side bars pollute the results. And what is probably worse, it probably doesn't return dead pages.
0gwern11y
It doesn't return dead pages, and the date-filtering is highly error-prone, I've found: while using it in searching for launch and shutdown dates, there were many 'leaks from the future', we could call them. (Articles from 2007 lamenting the shutdown of Google Reader...)

Request for advice:

I need to decide in the next two weeks which medical school to attend. My two top candidates are both state universities. The relevant factors to consider are cost (medical school is appallingly expensive), program quality (reputation and resources), and location/convenience.

Florida International University. Cost: I have been offered a full tuition scholarship (worth about $125,000 over four years), but this does not cover $8,500/yr in "fees", and the cost of living in Miami is high. The FIU College of Medicine's estimated yearly...

1Zaine11y
Although the FIU is new, its curriculum seems to fit the old Flexner I mold. I cannot tell the state of UF's program from the site. Research options at FIU appear limited, but if you have an interest in one among those available, this concern does not hold. What do you want to pursue in a medical career? Research? Patient Care? Whatever earns the most money? To find the necessary information if the answer is:

* Research - Visit the school and investigate the status of its research department. Learn about ongoing studies, the attention ratios of the Principal Investigator to Junior Investigator to students, and the amount of freedom allowed in pursuing research interests.
* Patient Care - Ask existing students of all years what their curriculum has been, and how much time they have spent with patients. Flexner I involves two years of study, then two years of practical application; Flexner II (an informal moniker) isn't a set system as individual schools are slowly implementing and trying new and different things, but generally differs from Flexner I - for example, involving patient care as part of the first two years.
* Money - There are many avenues to approach this. Naturally the more prestige your school has the better, as that will help determine the quality of your first post; however, with enough research publications you can make your own prestige, and research will always be a value marker. Your alma mater on the other hand matters less and less as time passes and jobs accumulate.
0ITakeBets11y
I plan on a career in patient care. I will almost certainly do research in medical school, but based on past experience I don't expect to find it extremely compelling or to be extraordinarily good at it. Money concerns me if only for philanthropic purposes. The field that interests me most now (infectious disease) does not pay especially well, but I have decided that I really should seriously consider more lucrative paths that might let me donate enough to save twice as many lives in the developing world. Both schools seem to have pretty solid clinical training and early patient exposure, to hear the students tell it (though they have little basis for comparison). I don't have a strong preference between their curricula, except my worries about driving around between hospitals in Miami.
0Zaine11y
To me it then appears you have two (clear) paths in line with your preferences. Your emotional preference, what makes you happy, sounds like helping people in person (fuzzies). Your intellectual preference, goal, or ambition, could be paraphrased as, "Benefit to the highest possible positive degree the greatest number of people." Your ideal profession will meet somewhere between the optimal courses for each of these two preferences. I list these to avoid misunderstanding.

The first course is the one you're pursuing - get an MD, work with patients to be happy, and donate to efficient high-utility charities in order to live with yourself. If the difference in cost will really only come out to 30-60k $US, you will be able to live with your husband while attending UF, UF is more prestigious, would cause you less worry, and if matriculating to UF makes you happier - then by all means attend UF! I'd be quite certain about the numbers, though.

The second course isn't unique to medical professionals, but they do have special skills which can be of unique use. Go to a developing country and solve medical problems in highly replicable and efficient manners. This course probably meets your two preferences with the least amount of compromise. If you're unfamiliar with Paul Farmer, he went (still goes, maybe) to Haiti and tried to solve their medical problems - he had some success, but unfortunately the biggest problem with Haiti was governmental infrastructure, without which impact cannot be sustained. The second course would involve you using medical expertise to solve medical problems, and acquiring either additional knowledge or a partner with knowledge of how to establish infrastructure sufficient to sustain your solution. The final step involves writing Project Evaluations on your endeavours so that others can replicate them in wide and varied locales - this is how you make an impact.

Not knowing anything about your husband, the above reasoning assumes he doesn't have
0ITakeBets11y
Thanks, your advice more or less coincides with what I was planning up until Ohio State confused me again. I certainly have not ruled out international medicine and nonprofit work as some part of my career, but I don't see that any of the schools that have accepted me has a clear advantage on that front.
0Zaine11y
Perhaps one of the schools has someone on the faculty with experience in that area, and could mentor you. If I may inquire, how did Ohio State confuse you?
0ITakeBets11y
On Wednesday they awarded me a scholarship covering full in-state tuition, making them probably my least expensive option (since it's easy to establish residency for tuition purposes in Ohio after a year or two). It's an excellent program, but moving would be hard and Columbus is cold and far from both our families.
1John_Maxwell11y
Do competitive fields tend to be the highest-paying? I would have assumed that the fields where there were more people going in to them than spots available had relatively low pay due to supply and demand, and the highest pay was to be found by going in to a field that was somehow difficult, boring, or distasteful in a way that discouraged people from entering it.
0ITakeBets11y
Fair question. It seems that compensation is determined largely by what Medicare/insurance companies are willing to pay for procedures etc. I believe unfilled fellowship spots aren't really a problem in any field, but the highest-paying subspecialties attract the most applicants. For example, cardiologists are very well-compensated, and cardiology fellowships are among the most competitive.
1John_Maxwell11y
Interesting. Right now I'd be leaning towards UF if I were you, I think, since my intuition is that $30-60K isn't much debt relative to what physicians typically make. But have you thought about using instacalc.com or some other spreadsheet to actually tally up all the numbers related to fees, cost of living, expected career earnings, time value of money/discounting, etc.? Congratulations on getting admitted to medical school, btw.
2ITakeBets11y
Thank you! I had just about settled on UF when I was suddenly struck with SERIOUS FIRST WORLD PROBLEMS as Ohio State, the highest-ranked school that accepted me, offered me a scholarship covering full in-state tuition. Ohio is quite easy to establish residency in, so I'd probably only be out of pocket the difference between in-state and out-of-state tuition for the first year, but of course I'd have to move, and we'd be far from both our families. I put together a spreadsheet taking into account the cost of moving, transportation costs, estimated change in rent, tuition and fees, and potential lost wages-- and it looks like OSU could actually be the least expensive of the three, depending on whether I manage to establish residency in time to get in-state tuition my second year (I'm told this is the norm). My estimate for the difference between UF and FIU increased slightly to $40k-$70k. I am not sure what to do about estimated career earnings-- lots of variance there, and I'm having a hard time weighing it against the costs, which I can be much more confident about.
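Since the comments above wrestle with tallying tuition, moving costs, and discounting, here is a minimal sketch of the net-present-cost comparison a spreadsheet like the one described would perform. All dollar figures, the school cost streams, and the 5% discount rate are hypothetical placeholders for illustration, not the poster's actual numbers.

```python
# Hypothetical sketch: comparing school options by net present cost.
# Every figure below is an illustrative placeholder.

def npv(cashflows, rate=0.05):
    """Net present value of yearly cashflows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Four years of out-of-pocket costs (tuition + fees + moving - scholarships):
options = {
    "UF":  [45_000, 45_000, 45_000, 45_000],
    "FIU": [55_000, 55_000, 55_000, 55_000],
    # OSU: one-time moving cost plus out-of-state tuition in year 0,
    # then only fees once residency brings the in-state scholarship rate.
    "OSU": [10_000 + 35_000, 5_000, 5_000, 5_000],
}

costs = {school: npv(flows) for school, flows in options.items()}
cheapest = min(costs, key=costs.get)
```

The same structure extends to expected earnings streams; as the thread notes, the hard part is choosing plausible numbers (especially the variance in career earnings), not doing the arithmetic.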
0John_Maxwell11y
Congratulations on your first world problems! I don't have any brilliant ideas on estimating career earnings, sorry.
0Qiaochu_Yuan11y
Take a third option?
1ITakeBets11y
Do you have one in mind? Or are you just advising against medical school, and if so, why?
4Qiaochu_Yuan11y
I'm suggesting that you spend some time writing down what your third options are. Seems like a good thing to do in general. I don't know what your third options are or how they compare to medical school, so I can't say anything about that.
4ITakeBets11y
Ok, I agree that's probably good advice in general. I've tried to avoid premature closure throughout the process of making this career change, but I'll explicitly list some third options when I journal tonight. The bulk of my probability mass is in these two schools, though, so I am especially interested in advice that would help me choose between them.

Lately there seems to be an abundance of anecdotal and research evidence in favor of refraining from masturbation and quitting porn. I am not sure that the evidence is conclusive enough for me to believe the validity of the claims. The touted benefits are impressive, while the potential cons seem minimal. I would be interested in some counterarguments and, if not too personal, I'd like to know the thoughts of those who have quit masturbation or porn.

I quit porn three weeks ago and attempted to quit masturbation but failed. Subjectively I notice that I'm paying more attention to the women around me (and also having better orgasms when I do masturbate). My main reason for doing this was not so much that I found the research convincing as that the fact that people were even thinking about porn in this particular way helped me reorient my attitude towards porn from "it's harmless" to "it's a superstimulus, it may be causing a hedonic treadmill, and I should be wary of it in the same way that I'm now wary of sugar." (There's also a second reason which is personal.)

I like sixes_and_sevens' hypothesis. Here's another one: a smallish number of people really do have a serious porn addiction and really do benefit substantially from quitting cold turkey, but they're atypical. (I don't think I fall into this category, but I still think this is an interesting experiment to run.)

General comment: I think many people on LW have an implicit standard for adopting potential self-improvements that is way too high. When you're asking for conclusive scientific evidence, you're asking for something in the neighborhood of a 90% ... (read more)

I think many people on LW have an implicit standard for adopting potential self-improvements that is way too high.

People on LW have a habit of treating posts as if LW were a peer-reviewed journal rather than a place to play with ideas.

On a slightly related note, vibrators like the Hitachi Magic Wand are probably a superstimulus for women analogous to porn for men. (of course, anyone can enjoy either type, but that is less common)

Also I agree with your general comment about self improvements, especially since it is hard to find techniques/habits that work for everyone.

5A1987dM11y
Yes, but make sure to count all the costs, incl. opportunity costs, in there.
7Qiaochu_Yuan11y
Agreed. But the opportunity cost of quitting porn, for example, is negative: it's actually saving me time.
4Kaj_Sotala11y
I thought that the opposite was true, in that LW regulars tended to be eager to try any suggested self-improvement idea that anybody had spent more than a few sentences offering anecdotal support for. Though that might just be overgeneralizing from my own habits.
2Qiaochu_Yuan11y
Hmm. My impression is that people here are very willing to try anti-akrasia ideas but not very willing to try other kinds of ideas. I could be mistaken though.
3Eugine_Nier11y
This is also my impression. People are willing to discuss anti-akrasia since Eliezer talked about it, but otherwise people have an unfortunate alief that any advice older than a couple of decades is superstition.

Hypothesis: arbitrary long-term acts of self-control improve personal well-being, regardless of the benefits of the specific act.

6Alejandro111y
See also: Lent.
8MrMind11y
I cannot reach the site from where I am now, but try looking at The Last Psychiatrist blog; it has an article right about that. Its main point is that there's an underlying problem that causes both porn addiction and difficulties with sexual relationships, so that they're not directly related. I have to say that my experience agrees with that: I don't have any particular problem with my sexuality, and quitting porn for a couple of months did not have any noticeable positive or negative effect.
4gwern11y
I don't either. The anecdotal evidence is the usual crap that you'll see for anything, and the research they cite is equivocal or only distantly related or worse (someone linked a blog post arguing for this on LW in the past and I pointed out that most of the points were awful and one study actually showed the opposite of what they thought it showed, although I can't seem to refind this comment right now).
3Viliam_Bur11y
On skeptics.stackexchange.com, the only answer on this topic is that masturbation is completely harmless.

There's a big difference between the physical act of masturbation, which is probably harmless and good for you in moderate amounts, and the mental act of watching porn, which seems to be what people are advocating refraining from.

Also, r/nofap is weirdly cult-like from what I've seen and probably not a good resource. For example, this is the highest upvoted post that's not a funny picture, and it seems to be making very, very exaggerated claims about the benefits of not jacking off: "If you actually stop jerking off, and I mean STOP - eliminate it as a possibilty from your life (as I and many others have) - your sex starved brain and testicles will literally lead you out into the world and between the legs of a female. It just HAPPENS. Try it, you numbskull. You'll see that I speak the truth."

6Viliam_Bur11y
Oh. I didn't notice the difference, because I automatically assumed those two acts to be connected. So, would that mean that masturbation without watching porn is healthy and harmless, but masturbation with watching porn is harmful? Sounds like an easy setup for a scientific experiment.
3NancyLebovitz11y
But what if you're imagining porn?
9Viliam_Bur11y
Uhhh... perhaps the best solution would be to masturbate while solving problems of algebra, just to make sure to avoid the sin of superstimulus. (Unless algebraic equations count as superstimulus too, in which case I am doomed completely.) This whole topic feels extremely suspicious to me. We have two crowds shouting their messages ("masturbation is completely safe and healthy, no bad side effects ever", "porn is a dopamine addiction to superstimulus and will destroy your mind"), both of them claim to have science on their side, and imagining the world where both are correct does not make much sense. To be honest, I suspect that both crowds are exaggerating and filtering the evidence. I also suspect that the actual reasons which created these crowds are something like this -- "Watching porn and masturbation is something that low-status males do, because high-status males get real sex. Let's criticize the low-status thing. Oh wait, women masturbate too; and we can't criticize that, because criticizing women would be sexist! Also, religion criticized masturbation, so we should actually promote it, just to show how open-minded we are. But porn is safe to criticize, because that's mostly a male thing. Therefore masturbation is perfectly okay, especially for a female, but porn is bad, and masturbation with porn is also bad. Other kinds of superstimuli, such as romantic stories for women, don't associate with low status, therefore we should ignore them in our debate about the dangers of superstimuli. Let's focus on criticizing the low-status things."
9NancyLebovitz11y
Romance novels are low status. They just aren't as low status as porn.
8gothgirl42066611y
I really don't understand how imagining "porn is a superstimulus because it allows you to instantly watch amazing sex that conforms to your personal taste. and therefore makes real sex seem less enjoyable" and "masturbation is not physically unhealthy, nor will it make real sex seem less enjoyable, and not walking around with blue balls all the time will make you a little happier, and 'practicing' for sex occasionally will make the act easier" leads to a world that doesn't make sense. I think it makes much more sense than your conspiracy theory against low-status males. And romantic stories for woman seem to obviously not be a superstimulus in the same way porn might be? (For one, outside the realm of porn, TV is fairly addictive and literature isn't.) There are diagnosed porn addicts whose addiction is ruining their lives, but I've never heard of any romantic novel addicts.
7Viliam_Bur11y
My reasoning is that if porn is seriously harmful and masturbation is absolutely harmless, there should be some aspect present in porn, but absent from masturbation and everyday life, which causes the harm. I have trouble pointing out precisely what that aspect would be. Too much conforming to my personal taste? That's already true for masturbation. Unlike with real sex, I can decide when, how often, for how long, etc. But I am supposed to believe that none of this is a superstimulus, and that it cannot make real sex less enjoyable even a bit. I am also supposed to believe that the similarities between masturbation and sex will help with practising and make the act easier, but that the differences are absolutely inconsequential. Seeing too many sexy ladies that I can't have sex with, some of whom could be even more attractive than my partner? Well, I see sexy ladies when I walk down the street. In the summer I will see even more; on the beach, still more. (I am not sure whether a nudist beach is already beyond the limits or not.) But I am supposed to believe that as long as I don't see their nipples or something, it is completely safe; yet if I see a nipple, my brain will release waves of dopamine and my mind will be ruined. (If I understand the definition of porn correctly, a picture of a naked sexy lady is already porn, even if she is not doing anything with anyone, am I right? And even limiting oneself to that kind of porn would already be harmful.) All of that together? So if I see a sexy lady on the beach, and then go home and masturbate thinking about her, that's completely harmless. However, if I take a picture of her, and then at home I look at the picture, especially if the picture was taken at the nudist beach, that is harmful; the mere looking is harmful, even if I don't touch myself. Sorry for the exaggerations, but this is how those theories feel to me when taken together. I can imagine making convincing arguments for each of them se
6gothgirl42066611y
Honestly, dude, you seem to be sort of engaging in black-and-white thinking that I wouldn't expect from a LW reader. Yes, a noncentral example of porn use such as "looking at a candid picture of a nude woman and not touching your dick" is almost definitely harmless. A much more central example of porn use, however, is a guy who has been jacking off to porn four times a week since he was about thirteen, and has in that time seen probably hundreds of porn videos, of which he has selected a few that appeal very specifically to his particular tastes, which he watches regularly. There's obviously no boundary where as soon as you do something labeled "watching porn" your brain will "release waves of dopamine and ruin your mind". But it doesn't seem hard to imagine that maybe that guy would be healthier if he changed his habits and started jacking off to his imagination (which he would probably end up doing much less frequently, I imagine), and "don't jack off to anything but your imagination" is a much, much more effective rule to precommit to than "stop watching porn if you get the feeling that you might be falling for a superstimulus", or whatever.
8Viliam_Bur11y
Ironically, I imagined myself as making fun of other people's black-and-white thinking. (Masturbation completely healthy and harmless: in the skeptics discussion I linked. Porn: superstimulus ruining one's mind and life.) I tried to find out how exactly the world would look for people who believe both of these things; mostly because nobody here tried to contradict either of them. What would be the logical consequences of these beliefs -- because people are often not aware of the logical consequences of the beliefs they already have. To me, both these beliefs feel like exaggerations, and they also feel contradictory, although technically they are not speaking about exactly the same thing. One kind of superstimulus is perfectly safe, another kind of superstimulus is addictive -- is this an inconsistent approach to superstimuli, or a claim that these superstimuli are of a different nature? I am thankful for two contributors willing to bite the bullet and describe what the world could look like if both beliefs were true. TheOtherDave said that actions controlled by one's own mind (masturbation) could have a smaller effect than actions not controlled by one's own mind (watching a porn movie), just as it is difficult to tickle oneself. Qiaochu_Yuan said that some actions have a natural limit where a human must stop (masturbation), while other actions have no such limit and can be prolonged indefinitely (watching porn), just as you can't eat the whole day, but you can play a computer game the whole day. -- Both of these answers make sense and I did not realize that. And that's essentially all I wanted from this topic. (Unless someone can give me a pointer to a scientific study concerned with differences between masturbation without porn and masturbation with porn, in terms of addiction and behavioral change.)
5Qiaochu_Yuan11y
You can continuously watch porn in the same way that you can continuously play World of Warcraft. You can't continuously masturbate in the same way that you can't continuously eat pizza. "Porn" is too vague. Are you talking about a quick 5-minute session or a marathon lasting several hours? If you've never done the latter, consider that some people might. The effects of the two are likely to be quite different, especially if the latter is a frequent occurrence. Also, it's not at all popular among my friend groups to slander porn. That's seen as sex-negative, which is one reason I never got around to thinking about porn as potentially harmful until quite recently.
5NancyLebovitz11y
It may be that masturbation has satiation much more than looking at pictures does.
5Jiro11y
Generally, when people claim something is harmless, they don't mean that it's "absolutely harmless". Playing videogames is harmful if you do it to the exclusion of eating, sleeping, and excreting, but one would not normally say that videogames are harmful based on them being harmful under such conditions. It is entirely possible to claim that porn is harmful, and that masturbation under similar circumstances (such as masturbating to mental images of people) is also harmful, while still consistently insisting that masturbation is harmless.
3A1987dM11y
I guess that according to such people the problem is not porn per se, but the addiction to porn. Looking at ladies on the beach and going home and masturbating once isn't problematic, but if you do that for 10% of your waking time for years... And ‘don't watch porn’ makes for a better Schelling point than ‘don't watch more than half an hour of porn a week’, for someone who's trying to quit.
3TheOtherDave11y
While I agree with your ultimate conclusion, it's not that implausible that synchronously controlled self-stimulation (which IME most masturbation is, though I suppose it depends on what you're into) is less stimulating than asynchronously controlled self-stimulation (e.g., programming a pattern of changing frequencies on a vibrator, or downloading a bunch of porn and queuing a slideshow on my desktop, or visiting a series of previously selected websites with changing content), for many of the same reasons that I can't tickle myself effectively with my fingers but can easily be tickled by inanimate objects. If that turns out to be true, I would expect a not-very-rigorous analysis to conclude "masturbation is less stimulating than porn", since asynchronously controlled masturbation is relatively rare, as is synchronously controlled porn.
7OrphanWilde11y
Literature isn't addictive? I think I'm going to have to disagree with you there. (And TV isn't addictive for me, personally, at -all-.) Additionally, a Google search on "romance novel addiction" suggests there are such addicts.
6bogus11y
Really? I can imagine a world where plenty of things that might be considered addictive are quite safe and healthy, as long as you do them in moderation - and what counts as "moderation" may well be different among different people. E.g. some people might be highly sensitive to addiction, so that their only alternative is quitting the habit entirely.
0CAE_Jones11y
When you say "eliminate it as a possibility from your life", I get quite confused as to how this is managed. Take on as many roommates as possible and keep bathrooms on strict timers to minimize opportunities to do it in private? I've heard of people using some sort of cage to make doing it absurdly difficult and/or painful, but the one set of anecdotes I came across didn't make it sound particularly effective. It just sounds like you're saying it's within most people's abilities to make masturbation practically impossible, which I find a much more difficult claim to believe than the assertion about the results.
4gothgirl42066611y
First of all, I can't tell if you realize that it's not me saying it, it's a quote that I selected for its absurdity. But more importantly, I think he just means that the thought of jacking off no longer occurs to him or seems like something he has any reason to do, just like the idea of going to the store and buying cigarettes doesn't really seem like a possibility to non-smokers. I don't think he's talking about wearing a chastity belt or anything.
1CAE_Jones11y
Oh! I read that with a screen reader with punctuation turned off, so completely failed to notice that the last chunk of it was in quotes! Though I probably should have noticed something was up, if I'd compared it more carefully to the previous sentence, which makes two incredibly obvious posts I got completely wrong yesterday. :( Thanks for clarifying!
0A1987dM11y
Meh. I just use picoeconomics for that.
0hg0011y
This page has some links to studies and informed-seeming medical opinions on the effects of porn. [comment retraction test; looks like there's no way to reverse retraction]
0A1987dM11y
I've never watched porn on a regular basis for about a decade, so I won't comment about that. As for masturbation, IME there's an optimum: too much of it (more than a couple of times a week for me -- YMMV) seems to cause apathy and increase my need for sleep, but too little (less than once a week) makes it harder for me to focussedly think about anything other than women, and to fall asleep; and after ten days or so I can feel physical discomfort in my testicles (which takes hours to go away even after I eventually masturbate).
2Prismattic11y
Pretty sure there's quite a bit of variation in the optimum. I hit the "can't concentrate on anything else" point at between 48 and 72 hours, and I don't experience either apathy or greater sleep need.
0A1987dM11y
Yeah, the optimum used to be shorter for me, too. It's like I get habituated to [whatever happens when I don't masturbate for a while] so that I need more to get the same positive effects, the way I do with (say) caffeine.

People who are currently in jobs you like, how did you get them?

3fubarobfusco11y
My partner (who actually had a résumé posted online, whereas I did not) got calls from two recruiters for the same company; and redirected one of them to me. We wanted to relocate to a warmer climate; we both interviewed and got offers. In other words, I had sufficient skill ... but also I got lucky big-time. (A harder question is whether I actually like my job. I've been doing it for 7+ years, but I'm also actively looking for alternatives.)
0Viliam_Bur11y
Imagine that after next 7 years of looking for alternatives, it will still seem that your current job is the best choice for you. Did this sentence make you feel happy or sad?
2ModusPonies11y
Comically large amounts of networking. The connection that landed me a programming job was my mom's dance instructor's husband.
[-][anonymous]11y80

Hello,

I am a young person who recently discovered Less Wrong, HP:MOR, Yudkowsky, and all of that. My whole life I've been taught reason and science but I'd never encountered people so dedicated to rationality.

I quite like much of what I've found. I'm delighted to have been exposed to this new way of thinking, but I'm not entirely sure how much to embrace it. I don't love everything I've read although some of it is indeed brilliant. I've always been taught to be skeptical, but as I discovered this site my elders warned me to be skeptical of skepticism as w... (read more)

I have been vocally anti-atheist here and elsewhere, though I was brought up as a "kitchen atheist" ("Obviously there is no God, the idea is just silly. But watch for that black cat crossing the road, it's bad luck"). My current view is Laplacian agnosticism ("I had no need of that hypothesis"). Going through the simulation arguments further convinced me that atheism is privileging one number (zero) out of infinitely many possible choices. It's not quite as silly as picking any particular anthropomorphization of the matrix lords, be it a talking bush, a man on a stick, a dude with a hammer, a universal spirit, or what have you, but still an unnecessarily strong belief.

If you are interested in anti-atheist arguments based on moral realism made by a current LWer, consider Unequally Yoked. It's as close to "intelligent, thoughtful, rational criticism" as I can think of.

There is an occasional thread here about how Mormonism or Islam is the one true religion, but the arguments for either are rarely rational.

-1[anonymous]11y
That's a really good way of looking at things, thanks. From now on I'm an "anti-atheist" if nothing else...and I'll take a look at that blog. Could you bring yourself to believe in one particular anthropomorphization, if you had good reason to (a vision? or something lesser? how much lesser?)

Could you bring yourself to believe in one particular anthropomorphization, if you had good reason to (a vision? or something lesser? how much lesser?)

I find it unlikely, as I would probably attribute it to a brain glitch. I highly recommend looking at this rational approach to hypnosis by another LW contributor. It made me painfully aware how buggy the wetware our minds run on is, and how easy it is to make it fail if you know what you are doing. Thus my prior when seeing something apparently supernatural is to attribute it to known bugs, not to anything external.

-1[anonymous]11y
The brain glitch is always available as a backup explanation, and they certainly do happen (especially in schizophrenics etc.) But if I had an angel come down to talk to me, I would probably believe it.

How would you tell the difference? Also see this classic by another LWer.

Personally, I think this one is more relevant. The biggest problem with the argument from visions and miracles, barring some much more complicated discussions of neurology than are really necessary, is that it proves too much, namely multiple contradictory religions.

3shminux11y
It's a good post, but overly logical and technically involved for a non-LWer. Even if you agree with the logic, I can hardly imagine a religious person alieving that their favorite doctrine proves too much.
2[anonymous]11y
It's a very interesting post. You're right that we can't accept all visions, because they will contradict each other, but in fact I think that many don't. It's entirely plausible in my mind that God really did appear to Mohammed as well as Joseph Smith, for instance, and they don't have to invalidate each other. But of course if you take every single claim that's ever been made, it becomes ridiculous. Does it prove too much, then, to say that some visions are real and some are mental glitches? I'm not suggesting any way of actually telling the difference.
5Desrtopa11y
Well, it's certainly not a very parsimonious explanation. This conversation has branched in a lot of places, so I'm not sure where that comment is right now, but as someone else has already pointed out, what about the explanation that most lightning bolts are merely electromagnetic events, but some are thrown by Thor? Proposing a second mechanism which accounts for some cases of a phenomenon, when the first mechanism accounts for others, is more complex (and thus in the absence of evidence less likely to be correct) than the supposition that the first mechanism accounts for all cases of the phenomenon. If there's no way to tell them apart, then observations of miracles and visions don't count as evidence favoring the explanation of visions-plus-brain-glitches over the explanation of brain glitches alone. It's possible, but that doesn't mean we have any reason to suppose it's true. And when we have no reason to suppose something is true, it generally isn't.
8TheOtherDave11y
FWIW, I've had the experience of a Presence manifesting itself to talk to me. The most likely explanation of that experience is a brain glitch. I'm not sure why I ought to consider that a "backup" explanation.
0[anonymous]11y
Right, obviously it's a problem. There are lots of people who think they've been manifested to, and some of them are schizophrenic, and some of them are not, and it's a whole lot easier to just assume they're all deluded (even if not lying). But even Richard Dawkins has admitted that he could believe in God if he had no other choice. (I have a source if you want.) Certainly, if you're completely determined not to believe no matter what—if you would refuse God even if He appeared to you himself—then you never will. But if there is absolutely nothing that would convince you, then you're giving it a chance of 0. Since you are rationalists, you can't have it actually be 0. So what is that 0.0001 that would convince you?
9TheOtherDave11y
There's a big difference between "no matter what" and "if He appeared to you himself," especially if by the latter you mean appearing to my senses. I mean, the immediate anecdotal evidence of my senses is far from being the most convincing form of evidence in my world; there are many things I'm confident exist without having directly perceived them, and some things I've directly perceived I'm confident don't exist. For example, a being possessing the powers attributed to YHWH in the Old Testament, or to Jesus in the New Testament, could simply grant me faith directly -- that is, directly raising my confidence in that being's existence. If YHWH or Jesus (or some other powerful entity) appeared to me that way, I would believe in them. I'm assuming you're not counting that as convincing me, though I'm not sure why not. Actually, that isn't true. It might well be that I assign a positive probability to X, but that I still can't rationally reach a state of >50% confidence in X, because the kind of evidence that would motivate such a confidence-shift simply isn't available to me. I am a limited mortal being with bounded cognition, not all truths are available to me just because they're true. But it may be that with respect to the specific belief you're asking about, the situation isn't even that bad. I don't know, because I'm not really sure what specific belief you're asking about. What is it, exactly, that you want to know how to convince me of? That is... are you asking what would convince me of the existence of YHWH, Creator of the Universe, the God of my fathers and my forefathers, who lifted them up from bondage in Egypt with a mighty hand and an outstretched arm, and through his prophet Moses led them to Sinai where he bequeathed to them his Law? Or what would convince me of the existence of Jesus Christ, the only begotten Son of God, who was born a man and died for our sins, that those who believe in Him would not die but have eternal life? Or what would con

With respect to those in particular, I can't think of any experience off-hand which would raise my confidence in any of them high enough to be worth considering, though that's not to say that such experiences don't exist or aren't possible... I just don't know what they are.

Huh. That's interesting. For at least the first two I can think of a few that would convince me, and for the third I suspect that my not being easily convinced stems more from my lack of knowledge about the religion in question. In the most obvious case for YHVH: if everyone everywhere started hearing a loud shofar blowing, and then the dead rose, and then an extremely educated fellow claiming to be Elijah showed up and started answering every halachic question in ways that resolve all the apparent problems, I think I'd be paying close attention to the hypothesis.

Similar remarks apply for Jesus. They do seem to depend strongly on making much more blatant interventions in the world than the deities generally seem to (outside their holy texts).

Technically the shofar blowing thing should not be enough sensory evidence to convince you, given the prior improbability of this being the God - probability of alien teenagers, etcetera - but since you weren't expecting that to happen and other people were, good rationalist procedure would be to listen very carefully to what they had to say about how your priors might've been mistaken. It could still be alien teenagers, but you really ought to give somebody a chance to explain to you how it's not. On the other hand, we can't execute this sort of super-update until we actually see the evidence, so meanwhile the prior probability remains astronomically low.

9Desrtopa11y
In this context I think it makes sense to ask "showed up where?" but if the answer were "everywhere on earth at once," I'd call that pretty damn compelling.
3TheOtherDave11y
Not to mention crowded.
4TheOtherDave11y
Yeah, you're right, "to be worth considering" is hyperbole. On balance I'd still lean towards "powerful entity whom I have no reason to believe created the universe, probably didn't lift my forefathers up from bondage in Egypt, might have bequeathed them his Law, and for reasons of its own is adopting the trappings of YHWH" but I would, as you say, be paying close attention to alternative hypotheses. Fixed.
-1[anonymous]11y
You're right, I'm assuming that God doesn't just tweak anyone's mind to force them to believe, because the God of the Abrahamic religions won't ever do that—our ultimate agency to believe or not is very important to Him. What would be the point of seven billion mindless minions? (OK, it might be fun for a while, but I bet sentient children would be more interesting over the course of, say, eternity.)
1TheOtherDave11y
As I said at the time, it hadn't been clear when I wrote the comment that you meant, specifically, the God of the Abrahamic religions when you talked about God. I've since read your comments elsewhere about Mormonism, which made it clearer that there's a specific denomination's traditional beliefs about the universe you're looking to defend, and not just beliefs in the existence of a God more generally. And, sure, given that you're looking for compelling arguments that defend your pre-existing beliefs, including specific claims about God's values as well as God's existence, history, powers, personality, relationships to particular human beings, and so forth, then it makes sense to reject ideas that seem inconsistent with those epistemic pre-commitments. That's quite a given, though.
-3[anonymous]11y
If you do assume that God can (and does) just reach in and tweak our minds directly, then being "convinced" takes on a sort of strange meaning. Unless we're assuming that you remain in normal control of your own mind, the concepts of "choice," "opinion," and "me" sort of start to disappear. I'm trying to talk about a deity in general, but you're right, it often turns into the God we're all familiar with. A radically different deity could uproot every part of the way we think about things, even logic and reason itself. So in order to stay within our own universe, I think it's OK to assume that any God only intervenes to the extent that we usually hear about, like Old Testament miracles.
3TheOtherDave11y
Wait... you endorse rejecting the lived experience of millions of people whose conception of deity is radically different from yours, on the grounds that to do otherwise could uproot logic, reason, and every part of the way we think about things? Wow. Um... I genuinely don't mean to be offensive, but I don't know a polite way to say this: if I understood that correctly, I just lost all interest in discussing this subject with you. You seemed to be arguing a while back that our precommitments to "the way we think about things" were not sufficient grounds to reject uncomfortable or difficult ideas, which is a position I can respect, though I think it's subtly but importantly false. But now you just seem to be saying that we should not respect such precommitments when they interfere with accepting some beliefs, such as one popular conception of deity, while considering them sufficient grounds to reject others, such as different popular conceptions of deity. Which seems to bring us all the way back around to the idea that an "atheist" is merely someone who treats my God the way I treat everyone else's God, which is boring. Have I misunderstood you?
-3[anonymous]11y
Probably you have, unfortunately. Give me a few minutes to figure it out...this is getting confusing.
0TheOtherDave11y
OK. No worries; no hurries... I'll consider this branch paused pending re-evaluation. Take your time.
-3[anonymous]11y
So it seems like what we were actually talking about here was how thoroughly God could convince a human of His existence, and you suggested he could just raise your faith level directly. Here's the problem I have with that: I don't know about Odin, but the YHWH we were raised with doesn't (could, but doesn't) ever do that. I wouldn't really call it faith if you have no choice in the matter. But I recognize that free agency is a very important tenet of my religion and important to my understanding of the universe given that my religion is correct. (I still don't quite understand free choice, which I'll have to figure out sometime in the next few years, but that's my own issue.)

Thus, a radically different deity is at odds with my view of the universe. This probably means that I ought to go looking for radically different deities which will challenge my universe, but for now I don't know of any (except maybe simulation hypotheses, which I like a lot). But for the purposes of this discussion—which, remember, was only about how spectacular a manifestation it would take to make you believe—I said it would be easier to stick to a God that doesn't intervene to the point of directly tampering with our neurons.

You had a problem with this. OK, sorry—let's also think about a fundamentally different God. I think that an effectively all-powerful being could easily just reach in and rearrange our circuits such that we know it exists. Sure it could happen. As I think I told someone, I don't see why—having seven billion mindless minions would get old after a while—but I have no right to go questioning the motives of a deity, especially one that's radically different from the one I'm told I'm modeled after.

I'm sorry, I never meant to dismiss the possibility of radically different religions. You're right, that would be awfully silly coming from me.

Now then. This sounds very interesting, what do you mean?
3TheOtherDave11y
I recommend you prioritize clarifying your confusions surrounding "free choice" higher than you seem to be doing.

In particular, I observe that our circuits have demonstrably been arranged such that we find certain propositions, sources of value, and courses of action (call them C1) significantly (and in some cases overwhelmingly) more compelling than other propositions, sources of value, and courses of action (C2). For example (and trivially), C1 includes "I have a physical body" and C2 includes "I don't have a physical body". If we were designed by a deity, it follows that this deity in fact designed us to be predisposed to accept C1 and not accept C2. A concept of free agency that allows for stacking the deck so overwhelmingly in support of C1 over C2, but does not allow for including in C1 "YHWH as portrayed in the Book of Mormon, other texts included by reference in the Book of Mormon, and subsequent revelations granted to the line of Mormon Prophets by YHWH", seems like an important concept to clarify, if only because it sounds so very contrived on the face of it.

Well, for example, consider the proposition (Pj) that YHWH as conceived of and worshiped by 20th-century Orthodox Jews of my family's tradition exists. As a child, I was taught Pj and believed it (which incidentally entailed other things, for example, such as Jesus Christ not being the Messiah). As a teenager I re-evaluated the evidence I had for and against Pj and concluded that my confidence in NOT(Pj) was higher than my confidence in Pj. Had someone said to me at that time "Dave, I realize that your evaluation of the evidence presented by your experience of the world leads you to high confidence in certain propositions which you consider logically inconsistent with Pj, but I caution you not to become so thoroughly precommitted to the methods by which you perform those evaluations that you cannot seriously consider alternative ways of evaluating evidence," that would intuitively feel like a sen
-2[anonymous]11y
Interesting. I'll keep thinking about it. But just to clarify, what exactly was it I said that was subtly but importantly wrong? This is what EY says about "uncomfortable or difficult ideas": "When you're doubting one of your most cherished beliefs, close your eyes, empty your mind, grit your teeth, and deliberately think about whatever hurts the most. Don't rehearse standard objections whose standard counters would make you feel better. Ask yourself what smart people who disagree would say to your first reply, and your second reply. Whenever you catch yourself flinching away from an objection you fleetingly thought of, drag it out into the forefront of your mind."
3TheOtherDave11y
Like I said, I thought you were arguing a while back that our precommitments to "the way we think about things" were not sufficient grounds to reject uncomfortable or difficult ideas, which is an idea I respect (for reasons similar to those articulated in the post you quote) but consider subtly but importantly wrong (for reasons similar to those I articulate in the comment you reply to). I'll note, also, that an epistemic methodology (a way of thinking about things) isn't the same thing as a belief.
7JoshuaZ11y
And if it were a demon? A ghost? A fairy? A Greek deity? If these are different, why are they different? What about an angel that's from another religion?

The optimal situation for you is that you've heard intelligent, thoughtful, rational criticism but your position remains strong.

The optimal situation could also be hearing intelligent, thoughtful, rational criticism, learning from it, and ending up with a new 'strong position' incorporating the new information. (See: lightness).

I sometimes see refutations of pro-religious arguments on this site, but no refutations of good arguments.

What good arguments do you think LW hasn't talked about?

My point in posting this is simply to ask you—what, in your opinion, are the most legitimate criticisms of your own way of thinking?

Religion holds an important social and cultural role that the various attempts at rationalist ritual or culture haven't fully succeeded at filling yet.

7[anonymous]11y
The 2012 survey showed something around 10% non-atheist, non-agnostic. From most plausible to least plausible:

* It's possible to formulate something like an argument that religious practice is good for neurotypical humans, in terms of increasing life expectancy, reducing stress, and so on.
* Monocultures tend to do better than populations with mixed cultural heritage, and one could argue that some religions do very well at creating monocultures where none previously existed, e.g., the Mormons, or perhaps the Catholic Church circa 1800 in the States.
* I've heard some reports that religious affiliation is good for one's dating pool.
0[anonymous]11y
See, but these are only arguments that religion is useful. Rationalists on this site say that religion is most definitely false, even if it's useful; are there any rational thinkers out there who actually think that religion could realistically be true? I think that's a much harder question than whether or not it's good for us.
1[anonymous]11y
Yes.
-4[anonymous]11y
This is great, thanks. I know there must be people out there, but I'm not entirely convinced most atheists ever bother to actually consider a real possibility of God.
8[anonymous]11y
I no longer have any idea what evidence would convince you otherwise.
-2[anonymous]11y
Rationalists who take religion seriously, for instance.

Take seriously in what sense?

For instance, I spent about six years seriously studying up on religions and theology, because I figured that if there were any sort of supreme being concerned with the actions of humankind, that would be one of the most important facts I could possibly know. So in that sense, I take religion very seriously. But in the sense of believing that any religion has a non-negligible chance of accurately describing reality, I don't take it seriously at all, because I feel that the weight of evidence is overwhelmingly against that being the case.

What sense of "taking religion seriously" are you looking for examples of?

-1[anonymous]11y
That's what I mean—a non-negligible chance. If your estimation of the likelihood of God is negligible, then it may as well be zero. I don't think that there is an overwhelming weight of evidence toward either case, and I don't think this is something that science can resolve.

If your estimation of the likelihood of God is negligible, then it may as well be zero.

This doesn't follow. For example, if you recite to me a 17 million digit number, my estimate that it is prime is roughly 1 in 40 million by the prime number theorem. But, if I then find out that the number was in fact 2^57,885,161 − 1, my estimate for it being prime goes up by a lot. So one can assign very small probabilities to things and still update strongly on evidence.
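A minimal sketch of the numbers behind that example, assuming the publicly reported digit count of the Mersenne prime 2^57,885,161 − 1 and the prime number theorem's 1/ln(N) density estimate:

```python
from math import log

# Digit count of 2**57_885_161 - 1: floor(57_885_161 * log10(2)) + 1
digits = 17_425_170

# Prime number theorem: the density of primes near N is roughly 1/ln(N),
# so a random number with `digits` decimal digits is prime with probability
prior = 1 / (digits * log(10))
print(f"prior: about 1 in {round(1 / prior):,}")  # on the order of 1 in 40 million

# Learning that the number is specifically 2**57_885_161 - 1, a number that
# has been verified prime, moves that tiny prior to near-certainty: a large
# Bayesian update from a very small starting probability.
```

The update is strong precisely because the evidence ("this particular number was verified prime") is vastly more likely under the prime hypothesis than under the composite one, so a tiny prior is no obstacle.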

6Desrtopa11y
Why not?
2Intrism11y
So, you're saying that in your view no atheist could possibly take the question of the truth of religion seriously? Or, alternately, that one could be an atheist but still give a large probability of God's existence? Both of these seem a bit bizarre...
3[anonymous]11y
See my first comment in this thread. There's a 10% minority that takes religion seriously. Presumably some of them consider themselves rationalists, or else they wouldn't bother responding to the survey.
7gwern11y
You may find this helpful: http://prosblogion.ektopos.com/archives/2012/02/results-of-the-.html
-2[anonymous]11y
This is interesting. It shouldn't be surprising coming from philosophers, but it can be instructional anyway. There are as many atheists who have never heard a decent defense of religion as there are religious fundamentalists who have never bothered to think rationally.

There are as many atheists who have never heard a decent defense of religion as there are religious fundamentalists who have never bothered to think rationally.

This seems improbable, considering that there are vastly more religious people than atheists.

-3[anonymous]11y
Props for being technical. You know what I meant.

Even in the non-technical sense, he's still making a relevant counterpoint, because it's much, much harder for atheists to go without exposure to religious culture and arguments than for a religious person to go without exposure to atheist arguments or culture (insofar as such a thing can be said to exist.)

2[anonymous]11y
I don't just mean being exposed to religious culture and arguments, I mean good arguments. I know, practically everyone here was raised religious and given really bad reasons to believe. But I think those may become a straw dummy—what I'm skeptical of is how many people here have heard a religious argument that actually made them think, one that has a chance in a real debate.
[anonymous]11y150

one that has a chance in a real debate.

good arguments don't in general have a chance in a real debate, because debates are not about reasoning. But that's a nitpick.

I've seen a lot of religious people claiming to have access to strong arguments for theism, but have never seen one myself.

As JoshuaZ asks, you must have a strong argument or you wouldn't think this line of discussion was worth anything. What is it?

I'm going to second JoshuaZ here. There's a lot of disagreement among theists about what the best arguments for theism are. I'd rather not try to represent any particular argument as the best one available for theism, because I can't think of anything that theists would universally agree on as a good argument, and I don't endorse any of the arguments myself.

I would say that most atheists are at least exposed to arguments that apologists of some standing, such as C.S. Lewis or William Lane Craig, actually use.

I mean good arguments.

So why not present what you think these good arguments are?

0Zaine11y
Acausal blackmail, once I thought deeply about why it might be scary. Took about an hour to refute it (to my satisfaction) - whether it would have a chance in a 'real debate': debate length, forum, allotted quiet thinking time and other confounds make me uncertain of your intended meaning.
5Bugmaster11y
I'm much closer to "below average" than to the "top" as far as LW users go, but I'll give it a shot anyway.

I assume that by "way of thinking" you mean "atheism", specifically (if not, what did you mean?). I don't know how you judge which criticisms are "legitimate", so I can't answer the question directly. Instead, I can say that the most persuasive arguments against atheism that I'd personally seen come in the form of studies demonstrating the efficacy of prayer. If prayer does work consistently with the claims of some religion, this is a good indication that at least some claims made by the religion are true.

Note, though, that I said "most persuasive"; another way to put it would be "least unpersuasive". Unfortunately, all such studies that I know of have either found no correlation between prayer and the desired effect whatsoever, or were constructed so poorly that their results are meaningless. Still, at least they tried.

In general, it is more difficult to argue against atheism (of the weak kind) than against theism, since (weak) atheism is simply the null hypothesis. This means that theists must provide positive evidence for the existence of their god(s) in order to convince an atheist, and this is very difficult to do when one's god is undetectable, or works in mysterious ways, or is absent, etc., as most gods tend to be.
-1[anonymous]11y
Many people would disagree that atheism is the null hypothesis. "All things testify of Christ," as some say, and in those circles people honestly believe they've been personally contacted by God. (I'm talking about Mormons, whose God, from what I've heard, is not remotely undetectable.) Have most atheists honestly put thought into what if there actually was a God? Many won't even accept that there is a possibility, and I think this is just as dangerous as blind faith.

Have most atheists honestly put thought into what if there actually was a God?

Don't know. Most probably have something better to do. I have thought about what would happen if there was a God. If it turned out that the god of the religion I was brought up in was real then I would be destined to burn in hell for eternity. If version 1 of the same god (Yahweh) existed I'd probably also burn in hell for eternity, but I'm a bit less certain about that because the first half of my Bible talked more about punishing people while alive (well, at the start of the stoning they are alive at least) than the threat of torment after death. If Allah is real... well, I'm guessing there is going to be more eternal pain involved, since that is just another fork of the same counterfactual omnipotent psychopath. Maybe I'd have more luck with the religions from ancient India---so long as I can convince the gods that LessWrong karma counts.

So yes, I've given some thought to what happens if God exists: I'd be screwed and God would still be a total dick of no moral worth.

Many won't even accept that there is a possibility, and I think this is just as dangerous as blind faith.

Assigning probability 0 or 1 ... (read more)

-1[anonymous]11y
So, with no evidence either way, would you honestly rate the probability of the existence of God as 0.0001%?
4wedrifid11y
That probability is off by a factor of 100 from the one I mentioned. (And with 'no evidence either way' the probability assigned would be far, far lower than that. It takes rather a lot of evidence to even find your God in hypothesis space.)
0JoshuaZ11y
In which direction?
1wedrifid11y
I mentioned 0, 1 and 0.0001. Ibidem asked about 0.0001%. That's 100 times lower.
0JoshuaZ11y
Ah, sorry. I misread your statement as talking about a prior rather than with the evidence at hand and didn't notice the percentage mark. Your edited comment is more clear.
-3[anonymous]11y
You're right, I'm sorry. It was 0.0001. That's still pretty small, though. Is that really what you think it is? Don't think of my God, then. Any deity at all. Do we want to be Bayesian about it? Of course we do. Let's imagine two universes. One formed spontaneously, one was created. Which is more likely to occur? Personally I think that the created one seems more likely. Apparently you think that the spontaneity is more believable. But as for the probability that any given universe is created rather than accidental, 0.0001 seems unrealistically low. And if that's not the number you actually believe—it was just an example—what is?

Do we want to be Bayesian about it? Of course we do. Let's imagine two universes. One formed spontaneously, one was created. Which is more likely to occur?

It isn't obvious that this is at all meaningful, and gets quickly into deep issues of anthropics and observer effects. But aside from that, there's some intuition here that you seem to be using that may not be shared. Moreover, it also has the weird issue that most forms of theism have a deity that is omnipotent and so should exist over all universes.

Note also that the difference isn't just spontaneity v. created. What does it mean for a universe to be created? And what does it mean to call that creating aspect a deity? One of the major problems with first cause arguments and similar notions is that even when one buys into them it is extremely difficult to jump from there to theism. Relevant SMBC.

0[anonymous]11y
Certainly this is a tough issue, and words get confusing really quickly. What intuition am I not sharing? Sorry if by "universe" I meant scenario or existence or something that contains God when there is one. What I mean by "deity" and "created" is that either there is a conscious, intelligent mind (I think we all agree what that means) organizing our world/universe/reality, or there isn't. And of course I'm not trying to sell you on my particular religion. I'm just trying to point out that I think there's not any more inherent reason to believe there is no deity than to believe there is one.

What I mean by "deity" and "created" is that either there is a conscious, intelligent mind (I think we all agree what that means) organizing our world/universe/reality, or there isn't.

Ok. So in this context, why do you think that one universe is more likely than the other? It may help to state where "conscious" and "intelligent" and "mind" come into this argument.

And of course I'm not trying to sell you on my particular religion.

On the contrary, that shouldn't be an "of course". If you sincerely believe and think you have the evidence for a particular religion, you should present it. If you don't have that evidence, then you should adjust your beliefs.

Even if one thinks one is in a constructed universe, it in no way follows that the constructor is divine or has any other aspects one normally associates with a deity. For example, this universe could be the equivalent of a project for a 12-dimensional grad student in a wildly different universe (ok, that might be a bit much - it might just be by an 11-dimensional bright undergrad).

I'm just trying to point out that I think there's not any more inherent reas

... (read more)
0[anonymous]11y
How about this, from Mormon user calcsam: Seems legit to me.
-1[anonymous]11y
I'd actually consider that a deity in the sense of a conscious, intelligent being who created the universe intentionally. As opposed to it happening by cosmic chance. (That is, no conscious creator.)
2JoshuaZ11y
Would you assign that being any of the traits normally connected to being a deity? For example, if the 11-dimensional undergrad says not to eat shellfish, or to wear special undergarments, would you listen?
-1[anonymous]11y
Yes, I would listen if I was confident that was where it was coming from. This 11-dimensional undergrad is much more powerful and almost certainly smarter than me, and knowingly rebelling would not be a good idea. If this undergrad just has a really sick sense of humor, then, well, we're all screwed in any case.
2JoshuaZ11y
And if the 11-dimensional undergrad says you should torture a baby?
-4[anonymous]11y
Clearly, then I need to make awfully sure it's actually God and not a hallucination. I would probably not do it because in that case I know that the undergrad does have a sick sense of humor and I shouldn't listen to him because we're all screwed anyway. Now, if you're going to bring up Abraham and Isaac or something like that, remember that in this case Abraham was pretty darn sure it was actually God talking.
6JoshuaZ11y
So this sort of response indicates that you are distinguishing between "God" and the 11-dimensional undergrad as distinct ideas. In that case, a generic creator argument isn't very strong evidence since there are a lot of options for entities that created the universe that aren't God.
-3[anonymous]11y
This is confusing because we're simultaneously talking about a deity in general and my God, the one we're all familiar with. Of course there are lots of options other than my specific God; the 11-dimensional undergrad is one of those. I'm not using a generic creator argument to convince you of my God, I'm using the generic creator argument to suggest that you take into account the possibility of a generic creator, whether or not it's my God. I'm keeping my God mostly out of this—I think an atheist ought to be able to argue my position while keeping his/her own conclusions.
3Bugmaster11y
As JoshuaZ says, there's no "of course" about it. If some particular religion is right and I am wrong, then I absolutely want to know about it! So if you have some evidence to present, please do so.
-3[anonymous]11y
I think that my religion is right and you are misguided. I really do, for reasons of my own. But I don't have any "evidence" to share with you, especially if you are committed to explaining it away as you may not be but many people here are. Remember that my original question was just to see where this community stood. I don't have all that many grand answers myself.

I suppose I could actually say that if you honestly absolutely want to know and are willing to open your mind, then you should try reading this book—I'm serious, but I'm aware how silly that would sound in such a context as this. Really, I don't want to become that guy.

I'm young, and I myself am trying to find good, rational arguments in favor of God. I'm trying to reconcile rationality and religion in my mind, and if I can't find anyone online, I'll figure it out myself and write a blog post about it in twenty years. But what it seems I've found is that no, most of the people on this site (based on my representative sample of about a dozen, I know) have never been presented with solid arguments in favor of religion. Maybe I'll manage to find some or write them myself, and maybe I'll decide that the population of Less Wrong is as closed-minded as I feared. In any case, thank you for being more open than certain others.

But I don't have any "evidence" to share with you, especially if you are committed to explaining it away as you may not be but many people here are.

So this is a problem. In general, there are types of claims that don't easily have shared evidence (e.g. last night I had a dream that was really cool, but I forgot it almost as soon as I woke up, I love my girlfriend, when I was about 6 years old I got the idea of aliens who could only see invisible things but not visible things, etc.) But most claims, especially claims about what we expect of reality around us should depend on evidence that can be shared.

I'm young, and I myself am trying to find good, rational arguments in favor of God.

So this is already a serious mistake. One shouldn't try to find rational arguments in favor of one thing or another. One should find the best evidence for and against a claim, and then judge the claim based on that.

have never been presented with solid arguments in favor of religion. Maybe I'll manage to find some or write them myself, and maybe I'll decide that the population of Less Wrong is as closed-minded as I feared.

You may want to seriously consider that the arguments you a... (read more)

2Kawoomba11y
Yea!
0[anonymous]11y
Note that my expressed intention in this post was not to start a religious debate, though I have enjoyed that too. I have considered that the arguments I'm looking for don't exist; what I've found is that at least you guys don't have any, which means that from your position this case is entirely one-sided. So generally, your belief that religion is inherently ridiculous from a rationalist standpoint has never actually been challenged at all. Definitely it's been interesting. Thanks.
3khafra11y
If you really want rationalist (more properly, post-rationalist) arguments in favor of God, I recommend looking through Will Newsome's comments from a few years ago; also through his twitter accounts @willnewsome and @willdoingthings. If you follow my advice, though, may God have mercy on your soul; because Will Newsome will have none on your psychological health.
-2[anonymous]11y
Thanks for the reference; someone else mentioned him and I've enjoyed the blog it led me to, but I didn't think to look through his comments.
8Intrism11y
Ah, no, haven't you read the How to Actually Change Your Mind sequence? Or at least the Against Rationalization subsequence and The Bottom Line? You can't just decide "I want to prove the existence of God" and then write a rational argument. You can't start with the bottom line. Really, read the sequence, or at least the subsequence I pointed out. I wasn't under the impression that the Book of Mormon was substantially more convincing than any other religious holy book. I have, however, heard that the Mormon church does exceptionally well at building a community. If you'd like to talk about that, I'd be extremely interested. How sure are you that more solid arguments exist? We don't know about them. You apparently don't know about them. If you've got any that you're hiding, remember that if God actually exists we would really like to know about it; we don't want to explain anything away that isn't wrong.
-1[anonymous]11y
Yes, I have read the sequence. I think that not being one-sided sometimes requires a conscious effort, and is a worthwhile cause. Of course you won't read the Book of Mormon. I wouldn't expect you to. But if you want "evidence" which has firmly convinced millions of people—here it is. I personally have found it more powerful than the Bible or Qur'an. You're right, I don't have any solid arguments in favor of religion. My original question of this post was actually just to ask if you had any—and I've gotten an answer. No, you believe there are none. I've shown you one source that convinces a lot of people; consider yourself to know about it. I would recommend reading it, too, if you're really interested in finding the truth.
3Desrtopa11y
Have you read the Quran in the original Arabic? It's pretty famously considered to lose a lot in translation. I haven't, of course, but the only ex-muslim I've spoken to about it agrees that even in the absence of his religious belief, it's a much more powerful and poetic work in Arabic.
1[anonymous]11y
Working on it :) I can sometimes actually understand entire verses but it is in fact a goal of mine. I'd think it must lose a lot in translation.
3Richard_Kennaway11y
Can you expand on that? What is this perception of "power" you get in varying degrees from such books, and what is the relation between that sensation and deciding whether anything in those books is true?

I've read the Bible and the Qur'an, and while I haven't read the Book of Mormon, I have a copy (souvenir of a visit to Salt Lake City). I'll have a look at it if you like, but I'm not expecting much, because of the sort of thing that books like these are. Neither the Bible nor the Qur'an convince me that any of the events recounted in them ever happened, or that any of the supernatural entities they talk about ever existed, or that their various moral prescriptions should be followed simply because they appear there. How could they? A large part of the Bible is purported history, and to do history right you can't rely on a single collection of old and multiply-translated documents which don't amount to a primary source for much beyond their own existence, especially when archaeology (so I understand) doesn't turn up all that much to substantiate it. And things like the Genesis mythology are just mythology. The world was not created in six days. Proverbs, Wisdom, the "whatsoever things..." passage, and so on, fine: but I read them in the same spirit as reading the rationality quote threads here. Where there be any virtue, indeed. The Qur'an consists primarily of injunctions to believe and imprecations against unbelievers. I'm not going to swallow that just because of its aggressive manner.

So, that is my approach to religious documents. This "power" that leads many people to convert to a religion, that gives successful missionaries thousands of converts in a single day: I have to admit that I have no idea what experience people are talking about. Why would reading a book or tract open my eyes to the truth? Especially if I have reason to think that the authors were not engaged in any sort of rational inquiry? That is, BTW, also my approach to non-religious documents.
4Desrtopa11y
What's strange about converting from one idea to another by reading a book? A book can contain a lot of information. Sometimes it doesn't even take very much to change one's mind. Suppose a person believes that the continents can't be shifting, because there's no room for them to move around on a solid sphere. Then they read about subduction zones and mid-ocean ridges, and see a diagram of plate movement around the world, and think "Oh, I guess it can happen that way, how silly of me not to have thought of that." I haven't found any religious text convincing, because they tend to be heavy on constructing a thematic message and providing social motivation to believe, light on evidence, but for a lot of people that's a normal way to become convinced of things (indeed, I recently finished reading a book where the author discussed how, among the tribe he studied, convincing people of a proposition was almost entirely a matter of how powerful a claim you were prepared to make and what authority you could muster, rather than what evidence you could present or how probable your claim was.)
7TheOtherDave11y
I suspect this was also true of the tribe I went to high-school with.
-4[anonymous]11y
I know how most atheists feel about the Bible. Really, I do. But if you don't understand what's so powerful about a book, and you want to know, then you really should give it a try—I might say that the last chapter of Moroni especially addresses this. (I promise I'm not trying to convert you. I don't remotely expect you to have a spiritual experience because of this one chapter.) Yes, it's easy to compare religion and atheism to each other as well as professional sports and a lot of other human behaviors. I'm all for free thought and not being persuaded by powerful words alone. However, just as I try to be able to enjoy ridiculous sports games, I'm glad to understand why people believe what they do.
3Richard_Kennaway11y
Well, I've now read the last chapter of Moroni, which is the last book of the Book of Mormon. The prophet takes his leave of his people, promises that God, the Son, and the Holy Ghost will reveal the truth of these things to those who sincerely pray, enjoins them to practice faith, hope, and charity and avoid despair, and promises to see them in the hereafter. I don't feel any urge to read this as other than fiction.
-7[anonymous]11y
1BerryPick611y
I grew up on the Bible. I studied the Bible for over a decade. I have read the Old Testament in Hebrew. It's the most boring thing I've ever laid eyes on.
3[anonymous]11y
I'll agree with that, some parts of it are incredibly boring. (Though some parts could make an awesome action flick.)
3Desrtopa11y
I've always marveled at people's assertions that, even if they don't believe the Bible is the word of God, they still respect it as a great work of literature. I suspect that they really do believe it; humans can invest a whole lot of positive associations with things simply through expectation and social conditioning. But my opinion of it as a literary work is low enough that I have a hard time coming up with any sort of comparison which doesn't make it sound like I'm making a deliberate effort to mock religious people.
8Bugmaster11y
I was honest when I said that I'd love to see some convincing evidence for the existence of any god. If you have some, then by all means, please present it. However, if I look at your evidence and find that it is insufficient to convince me, this does not necessarily mean that I'm closed-minded (though I still could be, of course). It could also mean that your reasoning is flawed, or that your observations can be more parsimoniously explained by a cause other than a god. A big part of being rational is learning to work around your own biases. Consider this: if you can't find any solid arguments for the existence of your particular version of God... is it possible that there simply aren't any ?
3[anonymous]11y
Yes, it's possible that there aren't any. That makes your beliefs much, much simpler. But I think that it's much safer and healthier to assume that you just haven't been exposed to any yet. I can't call you closed-minded for not having been exposed, and I'm sure that if some good arguments did pop up you at least would be willing to hear them. I'm sorry that I don't myself have any; I'm going to keep looking for a few years, if you don't mind.
5drethelin11y
I do mind. If you look for a few years for "rational" arguments for Mormonism you will be wasting your life duplicating the effort of thousands of people before you. Please don't. Even if you remain Mormon, there are far better things you can do than theology.
1[anonymous]11y
What should I spend my next few years of rationalism doing then? It seems that according to you, my options are:

a) Leave my religion in favor of rationalism. (Feel free to tell me this, but if my parents find out about it they'll be worried and start telling me you're a satanic cult. I can handle it.)

b) Leave rationalism in favor of religion. (Not likely. I could leave Less Wrong if it's not open-minded enough, but I won't renounce rational thinking.)

c) Learn to live with the conflict in my mind.

Suggestions?
4drethelin11y
In descending order of my preference: a, c, then b. I think c is the path chosen by most people who are reasonable but want to remain religious. C is much more feasible if you can happily devote your time to causes other than religion/rationality. Math, science, writing, art: I think all of these are better for you and society than theology.
-3[anonymous]11y
C seems likely as a long-term solution, because I don't see a or b as very realistic right now. And even if I don't make it a focused pursuit, I'll still be on the lookout for option d. (I'm not seriously interested in theology, don't worry. I'm quite into math and such things.)
2Vladimir_Nesov11y
These are not "options", but possible outcomes. You shouldn't decide to work on reaching a particular conclusion, that would filter the arguments you encounter. Ignore these whole "religion" and "rationality" abstractions, work on figuring out more specific questions that you can understand reliably.
2shminux11y
That's not either/or. Plenty of participants here are quietly religious (I don't recall what the last survey said), yet they like the site for what it has to offer. It may well happen some day that some of the sequence posts will click in a way that makes you decide to distance yourself from your fellow saints. Or it might not. If you find some discussion topics which interest you more, then just enjoy those. As I mentioned originally, pure logical discourse is rarely the way to change deep-seated opinions and preferences. Those evolve as your subconscious mind integrates new ideas and experiences.
-2[anonymous]11y
Yes, that's what I think I'll do. But many people here seem to be telling me that's impossible without some sort of cognitive dissonance. I don't think so.
1shminux11y
"People here" are not perfectly rational and are prone to other-optimizing. Including yours truly. Even the fearless leader has a few gaping holes in his rationality, and he's done pretty well. I don't know which of his and others' ideas speak to you the most, but apparently some do, so why not enjoy them. If anything, the spirit of altruism and care for others, so prominent on this forum, seems to fit well with Mormon practice, as far as I know.
-8[anonymous]11y
2TheOtherDave11y
My recommendation is that you commit to/remain committed to basing your confidence in propositions on evaluations of evidence for and against those propositions. If that leads you to conclude that LessWrong is a bad place to spend time, don't spend time here. If that leads you to conclude that your religious instruction has included some falsehoods, stop believing those falsehoods. If it leads you to conclude that your religious instruction was on the whole reliable and accurate, continue believing it. If it leads you to conclude that LessWrong is a good place to spend time, keep spending time here.
2Bugmaster11y
At what point do I stop looking, though ? For example, a few days ago I lost my favorite flashlight (true story). I searched my entire apartment for about an hour, but finally gave up; my guess is that I left it somewhere while I was hiking. I am pretty sure that the flashlight is not, in fact, inside my apartment... but should I keep looking until I'd turned over every atom ?
0[anonymous]11y
You stop looking when you decide it's no longer helpful, obviously. You've stopped looking, and I'm not blaming you for that. I am still looking.
1Bugmaster11y
Fair enough; I wish you luck in your search.
5Bugmaster11y
As for the Book of Mormon... try to think of it this way. Imagine that, tomorrow, you meet aliens from a faraway star system. The aliens look like giant jellyfish, and are in fact aquatic; needless to say, they grew up in a culture radically different from ours. While this alien species does possess science and technology (or else they wouldn't make it all the way to Earth!), they have no concept of "religion". They do, however, have a concept of fiction (as well as non-fiction, of course, or else they wouldn't have developed science).

The aliens have studied our radio transmissions, translated our language, and downloaded a copy of the entire Web; this was easy for them since their computers are much more powerful than ours. So, the aliens have access to all of our literature, movies, and other media; but they have a tough time making sense of some of it. For example, they are pretty sure that the Oracle SQL Manual is non-fiction (they pirated a copy of Oracle, and it worked). They are also pretty sure that Little Red Riding Hood is fiction (they checked, and they're pretty sure that wolves can't talk). But what about a film like Lawrence of Arabia? Is that fiction? The aliens aren't sure.

One of the aliens comes to you, waving a copy of The Book of Mormon (or whichever scripture you believe in) in its tentacles (but in a friendly kind of way). It asks you to clarify: is this book fiction, or non-fiction? If it contains both fictional and non-fictional passages, which are which? Right now, the alien is leaning toward "fiction" (it checked, and snakes can't talk), but, with us humans, one can never be sure. What do you tell the alien?
0[anonymous]11y
a) I would tell them it's non-fiction. Are Yudkowsky's posts fiction or non-fiction? What about the ones where he tells clearly made-up instructional stories?

b) No need to bash the Book of Mormon. I'm fully aware how you people feel about it. But— you did in fact ask.
6Bugmaster11y
It was not my intent to bash the Book of Mormon specifically; I just used it as a convenient stand-in for "whichever holy scripture you believe in". Speaking of which:

The alien spreads its tentacles in confusion, then pulls out a stack of books from the storage compartment of its exo-suit. "What about all these other ones?", it asks. You recognize the Koran, the Bhagavad Gita, Enuma Elish, the King James Bible, and the Nordic Eddas; you can tell by the way the alien's suit is bulging that it's got a bunch more books in there.

The alien says (or rather, its translation software says for it), "We can usually tell the difference between fiction and non-fiction. For example, your fellow human Yudkowsky wrote a lot of non-fictional articles about things like ethics and epistemology, but he also wrote fictional stories such as Three Worlds Collide. In that, he is similar to [unpronounceable], the author on our own world who wrote about imaginary worlds in order to raise awareness of his ideas concerning [untranslateable] and [untranslateable], which is now the basis of our FTL drive. Sort of like your own Aesop, in fact. But these books," -- the alien waves some of its tentacles at the huge stack -- "are confusing our software. Their structure and content contain many elements that are usually found only in fiction; for example, talking animals, magical powers, birds bigger than mountains, some sort of humanoid beings that are said to live in the skies or at the top of tall mountains or perhaps in orbit, shapeshifters, and so on. We checked, and none of those things exist in real life. But then, we talked to other humans such as yourself, and they told us that some of these books are true in a literal sense. Oddly enough, each human seems to think that one particular book is true, and all the others are fictional or allegorical, but groups of humans passionately disagree about which book is true, as well as about the meaning of individual passages. Thus, we [unpronounceable]
0[anonymous]11y
Funny, I could swear someone already asked me that, and I gave them an answer. I'll see if I can find the specific thread...
5Prismattic11y
You are privileging the hypothesis of (presumably one specific strain of) monotheism. That is not actually a rational approach. The kind of question a rationalist would ask is not "does God exist?" but "what should I think about cosmology?" or "what should I think about ethics?" First you examine the universe around you, and then you come up with hypotheses to see how well they match that. If you don't start from the incorrectly narrow hypothesis space of [your strain of monotheism, secular cosmology according to the best guesses of early 21st century science], you end up with a much lower probability for your religion being true, even if science turns out to be mistaken about the particulars of the cosmology. Put another way: What probability do you assign to Norse mythology being correct? And how well would you respond if someone told you you were being closed-minded because you'd never heard a solid argument for Thor?
-1[anonymous]11y
I'm sorry if you feel that I've called you closed-minded, no personal offense was intended. But it's a bit worrisome when a community as a whole has only ever heard one viewpoint.
2ArisKatsaris11y
The universe looks very undesigned -- the fine-tuned constants and the like only allow conscious observers and so can be discounted on the basis of the anthropic principle (in a set of near-infinite universes, even undesigned ones, conscious observers would only inhabit universes with constants that would allow their existence -- there's no observer who'd observe constants that didn't permit their existence). So pretty much all the evidence seems to speak of a lack of any conscious mind directing or designing the universe, neither malicious nor benevolent.
2[anonymous]11y
I know many, many people who think that the universe looks designed. I can refer you to Ivy League scientists if you want.
4ArisKatsaris11y
There are 7 billion people in the world. One can find "many, many" people to believe all sorts of things, especially if one's going to places devoted to gathering such people together. But for things that really are created by conscious minds, there's rarely a need to argue about them. When the remnants of Mycenae were discovered nobody (AFAIK) had to argue whether they were a natural geological formation or if someone built them. Nobody had to debate whether the Easter Island statues were designed or not.

The universe is either undesigned and undirected, or it's very cleverly designed so as to look undesigned and undirected. And frankly, if the latter is the case, it'd be beyond our ability to manage to outwit such clever designers; in that hypothetical case to believe it was designed would be to coincidentally reach the right conclusion by making all the wrong turns just because a prankster decided to switch all the roadsigns around.

There are many, many Ivy League scientists. Again beware confirmation bias, the selection of evidence towards a predetermined conclusion. Do you have statistics for the percentage of Ivy League scientists that say "the universe looks designed" vs the ones that say "the universe doesn't look designed"? That'd be more useful.
2[anonymous]11y
Aaaand unfortunately we're getting into personal opinion. It's easy enough to find statistics about belief among top scientists, though.
6ArisKatsaris11y
As an addendum to my above comment -- if you personally feel that the universe looks designed, can you tell me how it would look in the counterfactual where you were observing a blatantly UNdesigned universe? Here, for example, are elements of a hypothetical blatantly designed world:

Continents in the shape of animals or flowers.
Mountains that are huge statues.
Laws of conservation that don't easily reduce to math (e.g. conservation of energy, momentum, etc.) but rather to human concepts (conservation of hope, conservation of dramatic irony).
Clouds that reshape themselves to amuse and entertain the people watching them.
0Intrism11y
The intuition you're not sharing is that presence is inherently less likely than absence. I'm not entirely sure how to convey that.
3BerryPick611y
What evidence makes you think this?
0[anonymous]11y
I don't have any evidence. I know, downvote me now. But I suspect some sort of Bayesian analysis might support this, because if there is a deity, it is likely to create universes, whereas if there is no deity, universes have to form spontaneously, which requires a lot of things to fall into place perfectly.
6BerryPick611y
Okay, so what makes you think this is true? I'm wondering how on earth we would even figure out how to answer this question, let alone be sure of the answer. What has to fall into place for this to occur? Exactly how unlikely is it?
-2[anonymous]11y
Look, let's just admit that this line of reasoning is entirely speculative anyway...
2BerryPick611y
Um, why cut off the conversation at this point rather than your original one, in that case?
-5[anonymous]11y
2Jack11y
What would be your prior probability for God existing before updating on your own existence?
0[anonymous]11y
I have absolutely no idea. Good question. What would be yours?
9Jack11y
It's not a well-defined enough hypothesis to assign a number to: but the main thing is that it's going to be very low. In particular, it is going to be lower than a reasonable prior for a universe coming into existence without a creator.

The reason existence seems like evidence of a creator, to us, is that we're used to attributing functioning complexity to an agent-like designer. This is the famous Watchmaker analogy that I am sure you are familiar with. But everything we know about agents designing things tells us that the agents doing the designing are always far more complex than the objects they've created. The most complicated manufactured items in the world require armies of designers and factory workers and they're usually based on centuries of previous design work. Even then, there are probably no manufactured objects in the world that are more complex than human beings. So if the universe were designed, the designer is almost certainly far more complex than the universe. And as I'm sure you know, complex hypotheses get low initial priors. In other words: a spontaneous Watchmaker is far more unlikely than a spontaneous watch.

Now: an apologist might argue that God is different. That God is in fact simple. Actually, they have argued this, and such attempts constitute what I would call the best arguments for the existence of God. But there are two problems with these attempts. First, the way they argue that God is simple is based on imprecise, anthropocentric vocabulary that hides complexity. An "omnipotent, omnipresent, omniscient and omnibenevolent creator" sounds pretty simple. But if you actually break down each component into what it would actually have to be computationally it would be incredibly complex. The only way it's simple is with hand-waving magic. Second, a simple agent is totally contrary to our actual experience with agents and their designs. But that experience is the only thing leading us to conclude that existence is evidence for a designer.
0Viliam_Bur11y
I agree that the "omnibenevolent" part would be incredibly complex (FAI-complete). But "omnipotent", "omnipresent" and "omniscient" seem much easier. For example, it could be a computer which simulates this world -- it has all the data, all the data are on its hard disk, and it could change any of these data.
2Jack11y
I actually think this illustrates my point quite nicely: the lower limit for the complexity of God (the God you describe) is by definition slightly more complicated than the world itself (the universe is included in your description!).
0JoshuaZ11y
There's quite a bit of evidence against. Absence of expected evidence is evidence of absence.
-1[anonymous]11y
There's also quite a bit of evidence for, if you bother to listen to sincere believers. Which I do.
6Intrism11y
The problem is that "quite a bit" is far, far too little. Though religious people often make claims of religious experience, these claims tend to be quite flimsy and better explained by myriad other mechanisms, including random chance, mental illness, and confirmation bias. Scientists have studied these claims, and thus far well-constructed studies have found them to be baseless.
5JoshuaZ11y
You may be forgetting here that a lot of people here (including myself) grew up in pretty religious circumstances. I'm familiar with all sorts of claims, ranging from teleological arguments, to ontological arguments, to claims of revelation, to claims of mass tradition, etc. etc. So what do you think is "quite a bit of evidence" in this sort of context? Is there anything remotely resembling the Old Testament miracles for example that happens now?
0[anonymous]11y
Yes. They don't casually share them with every skeptic who asks, because miracles are personal, but there is an amazing number of modern miracle stories (among Mormons if not others.) And not just lucky coincidences with easy explanations—real miracles that leave people quite convinced that God is there. And don't be too hasty to dismiss millions of personal experiences as mental illness.
5TheOtherDave11y
I suspect that you and JoshuaZ are unpacking the phrase "Old Testament miracles" differently. Specifically, I suspect they are thinking of events on the order of dividing the Red Sea to allow refugees to pass and then drowning their pursuers behind them. Such events, when they occur, are not personal experiences that must be shared, but rather world-shaking events that by their nature are shared.

First of all, Joshua didn't bring up mental illness here. But since you do: how hasty is "too" hasty? To say that differently: in a community of a billion people, roughly how many hallucinations ought I expect that community to experience in a year?
3JoshuaZ11y
Curiously, nearly identical claims are made by other religions also. For example, you see similar statements in the chassidic branches of Judaism. But it isn't at all clear why in this sort of context miracles should be at all "personal," and even then, it doesn't really work. The scale of claimed miracles is tiny compared to those of the Bible. One has things like the splitting of the Red Sea, the collapse of the walls of Jericho, the sun standing still for Joshua, the fires on Mount Carmel, etc. That's the scale of classical miracles, and even the most extreme claims of personal miracles don't match up to that.

They aren't all mental illness. Some of them are seeing coincidences as signs when they aren't, and remembering things happening in a more extreme way than they actually did. Eyewitnesses are extremely unreliable.

And moreover, should I then take all the claims by devout members of other faiths also as evidence? If so, this seems like a deity that is oddly willing to confuse people. What's the simplest explanation?

I would venture a guess that atheists who haven't put thought into the possibility of there being a god are significantly in the minority. Although there are some who dismiss the notion as an impossibility, or such a severe improbability as to be functionally the same thing, in my experience this is usually a conclusion rather than a premise, and it's not necessarily an indictment of a belief system that a conclusion be strongly held.

Some Christians say that "all things testify of Christ." Similarly, Avicenna was charged with heresy for espousing a philosophy which failed to affirm the self-evidence of Muslim doctrine. But cultures have not been known to adopt Christianity, Islam, or any other particular religion which has been developed elsewhere, independent of contact with carriers of that religion.

If cultures around the world adopted the same religion, independently of each other, that would be a very strong argument in favor of that religion, but this does not appear to occur.

1[anonymous]11y
OK, that works. But what evidence do we have that unambiguously determines that there is no deity? I'd love to hear it. Not just evidence against one particular religion. Active evidence that there is no God, which, rationally taken into account, gives a chance of ~0 that some deity exists.
3Intrism11y
What evidence of no deity could you possibly expect to see? If there were no God, I wouldn't expect there to be any evidence of the fact. In fact, if I were to find the words "There is no God, stop looking" engraved on an atom, my conclusion would not be "There is no God," but rather (ignoring the possibility of hallucination) "There is a God or some entity of similar power, and he's a really terrible liar." Eliezer covers this sort of thing in his sequence entry You're Entitled to Arguments But Not That Particular Proof. If you really want to make this argument, describe a piece of evidence that you would affirmatively expect to see if there were no God.
-3[anonymous]11y
Right, I don't see how there could be any evidence to convince a person to the point of a 0.0001 chance of God. And so when all of these people say that they've concluded that the chance of God is negligible, I think that they're subject to a strong cognitive bias worsened by the fact that they're supposed to be immune to those.
7Prismattic11y
Two things that your perspective appears to be missing here:

1) Lots of people here were raised in religious families; they didn't start out privileging atheism. (Or they aren't atheists per se; I'm agnostic between atheism and deism; it's just the anthropomorphic interventionist deity I reject.)

2) You aren't the first believer to come here and present the case you are trying to make. See, for example, the rather epic conversation with Aspiringknitter here. You aren't even the first Mormon to make the case here. Calcsam has been quite explicit about it.

Note that both of those examples are people who've accumulated quite a bit of karma on LessWrong. People give them a fair hearing. They just don't agree that their arguments are compelling.
-2[anonymous]11y
Thank you for pointing out perceived fundamental flaws. It's so much more helpful than disputing technical details.

1) I know that. However, I would guess that most people here have fully privileged atheism since the time they started considering themselves rationalists, and this is a big difference.

2) I was aware of that too; however, thanks for the specific links. I certainly got on here loudly proclaiming that I was religious; however, my original stated purpose was not to start an argument. That said, I really was asking for it, and when people argued, I argued back. Where I live it's so hard to find people willing to have an intellectual debate about this sort of thing. So if I did something "taboo," I apologize. But the reaction I've gotten suggests that people are interested in what I've said, and so my thoughts were worth something at least. I suppose that when this thread resolves itself I'll make a grand post on the welcome page just like AspiringKnitter did.
4Prismattic11y
Let me see if I can explain my objection to (1) a different way. Rationalists do not privilege atheism. They privilege parsimony. This is basically a tautology. The only way to subscribe to both rationality and theistic religion is compartmentalization. Saying you want to be rational and a theist is equivalent to saying you want to make a special exception to the principles you follow in every other situation when the subject of God comes up. That's going to take a particular kind of strong argument.
-3[anonymous]11y
You're telling me that it's essentially impossible to be a theist and fully rational. You're saying that not only do rationalists privilege atheism, but in fact they have to follow it by definition, unless they manage to deceive themselves. I disagree with your objection and I believe that it is possible to reconcile rationality and religion.
1Prismattic11y
That is not the case. Observing something for which one can provide no natural explanation is going to cause a rationalist to increase their probability estimate for the supernatural. It's not going to increase it to near certainty, because the mysteriousness of the universe is a fact about the limits of our own understanding, not about the universe, so it's still possible that something we can't explain has natural causes we don't yet have the ability to measure or explain. But it will cause the estimate to rise. And if inexplicable things keep happening, their estimate will keep rising. The question, though, is whether there is anything that could ever cause you to lower your estimate of the probability that your religion is correct. If the answer is no, then you're not being rational right off the bat, and your quest is doomed.
-3[anonymous]11y
What do you mean by compartmentalization, then, if it's not a bad thing? Sounds to me like it's sacrificing internal consistency. That's true. I actively go looking for things that might challenge my faith, and come out stronger because of it. That's partly why I'm here.
3drethelin11y
compartmentalization IS a bad thing if you care about internal consistency and absolute truth. It's a great thing if you want to hold multiple useful beliefs that contradict each other. You might be happier and more productive, as I'm sure many are, believing that we should expect the world to work based on evidence except insofar as it conflicts with your religion, where it should work on faith.
0Eugine_Nier11y
Also, premature decompartmentalizing can be dangerous. There are many sets of (at least mostly) true ideas where it's a lot harder to reconcile them than to understand either individually.
1Intrism11y
The problem is that you're not being consistent in your handling of unfalsifiable theories. A lot of what's been brought to the table are Russell's Teapot-type problems and other gods, but I think I can find one that's a bit more directly comparable. I'll present a theory that's entirely unfalsifiable, and has a fair amount of evidence supporting it. This theory is that your friends, family, and everyone you know are government agents sent to trick you for some unclear reason. It's a theory that would touch every aspect of your life, unlike a Russell's Teapot. There's no way to falsify this theory, yet I assume you're assigning it a negligible probability, likely .0001 or even less. To remain consistent with your position on religion, you must either accept that there's a significant chance you're trapped in some kind of evil simulation run by shadowy G-Men, or accept that the impossibility of counterevidence isn't actually a good argument in favor of something. (Which still wouldn't mean that you'd have to turn atheist - as you've mentioned, there is some evidence for religion, even if the rest of us think it's really terrible evidence.)
-1[anonymous]11y
First of all, in an intellectual debate, you don't go around telling someone that they're cornered. That ought to raise all sorts of red flags as to your logic, but in fact I'm perfectly happy to accept both of those propositions. 1. I would quite agree that there's a chance worth considering that I'm the center of a government conspiracy. (It's got a name.) I don't have any idea how that chance actually ranks in my mind, and any figure I did give would be a Potemkin (a complete guess). But it's entirely possible. 2. However, the fact that it isn't an argument in favor of religion surely doesn't mean that it's an argument in favor of atheism. Jeez. And thank you for admitting that there is at least a tiny bit of evidence for religion. It would be really silly not to.
1Intrism11y
No, my understanding is that it's a fairly typical tactic. Yes, I was indeed thinking of the Truman Show Delusion. My point, though, is that it shouldn't be any less credible than religion to you, meaning that you should be acting on that theory to a similar degree to religion. Counterevidence for atheism is not impossible at all, as people have been saying up and down the thread. If the skies were to open up, and angels were to pour down out of the breach as the voice of God boomed over the landscape... that would most certainly be counterevidence for atheism. (Not conclusive counterevidence, mind. I might be insane, or it could be the work of hyperintelligent alien teenagers. But it would be more than enough evidence for me to convert.) And, in less dramatic terms, a simple well-designed and peer-reviewed study demonstrating the efficacy of prayer would be extremely helpful. There are even those miracles you've been talking about, although (again) most of us consider it poor evidence.
-5[anonymous]11y
1Desrtopa11y
Well, as I linked previously, absence of evidence is evidence of absence. If God were a proposition which did not have low probability in the absence of evidence, then it would be unique in that respect. I'm prepared to argue in favor of the propositions that we do not have evidence favoring God over no God, and that we have no reason to believe that god has uniquely high probability in absence of evidence. Would that satisfy you?
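The linked "absence of evidence is evidence of absence" result is just conditional probability. A minimal numeric sketch (all numbers are illustrative, not anyone's actual credences):

```python
# Bayes' theorem: P(H | observation) from a prior and two likelihoods.
# If evidence E is likelier when H is true (P(E|H) = 0.8) than when it
# is false (P(E|not H) = 0.2), then *failing* to observe E means
# conditioning on not-E, whose likelihoods are 0.2 under H and 0.8
# under not-H, so belief in H must drop below the prior.
def posterior(prior, likelihood_h, likelihood_not_h):
    """Return P(H | observation) via Bayes' theorem."""
    num = prior * likelihood_h
    return num / (num + (1 - prior) * likelihood_not_h)

print(round(posterior(0.5, 0.2, 0.8), 3))  # 0.2, down from the 0.5 prior
```

The symmetry is the point: any observation whose presence would have raised P(H) necessarily lowers P(H) by its absence.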
-1[anonymous]11y
This "in the absence of evidence" theme is popping up all over but doesn't seem to be getting anywhere new or useful. I'm going to let it be. And I'm not, at the moment, interested in a full-blown argument about the nature of the evidence for and against God. I believe there is evidence of God; you believe there is none, which is practically as good as evidence that there is no God. We can talk over each other about that for hours with no one the wiser. I shouldn't be surprised that any debate about this boils down to the evidence—but the nature of the evidence (remember, we've been over this) means that it's really impossible to firmly establish one side or the other.
3Desrtopa11y
Why is that? If god were really communicating and otherwise acting upon people, as you suggest, there's no reason to suppose this should be indistinguishable from brain glitches, misunderstandings, and exaggerations. I think that the world looks much more like we should anticipate if these things are going on in the absence of any real god than we should expect it to look like if there were a real god. You could ask why I think that. A difference of anticipation is a meaningful disagreement to follow up on. You might want to check out this post. The idea that we can't acquire evidence that would promote the probability of religious claims is certainly not one we can take for granted.
-3[anonymous]11y
No thanks, not today at least. I think we just disagree here.
-4Eugine_Nier11y
The same is true of science.
0drethelin11y
if you define "science" as carrying on in the tradition of Bacon, sure. But that didn't stop the Greeks from making the Antikythera device long before he existed. Astronomy has been independently discovered by druids, Mesoamerican cultures, the Far East, and countless others where "independent" is more vague. If you consider "science" as a process of invention as well as research and discovery, there are also tons of examples in e.g. http://en.wikipedia.org/wiki/History_of_science_and_technology_in_China#Magnetism_and_metallurgy and so on of inventions that were achieved in vastly different places seemingly independently at different times. Movable type is still movable type whether invented in China or by Gutenberg. On the other hand, Loki is not Coyote.
2Eugine_Nier11y
A lot of actual pagans may disagree with you. True, while there are some differences between the cults involved, there are also differences between Babylonian and Chinese mathematics. (As for your example of Greek science, much of it is on the same causal path that led to Bacon.)

Have most atheists honestly put thought into what if there actually was a God?

Many people here grew up in religious settings. Eliezer, for example, comes from an Orthodox Jewish family. So yes, a fair number have given thought to this.

people honestly believe they've been personally contacted by God.

Curiously many different people believe that they've been contacted by God, but they disagree radically on what this contact means. Moreover, when they claim to have been contacted by God but have something that doesn't fit a standard paradigm, or when they claim to have been contacted by something other than God, we frequently diagnose them as schizophrenic. What's the simplest explanation for what is going on here?

-1[anonymous]11y
Simple explanations are good, but not necessarily correct. It's awfully easy to say they're all nutcases, but it's still easy and a bit more fair to say that they're mostly nutcases but maybe some of them are correct. Maybe. I think it's best to give it a chance at least.

It's awfully easy to say they're all nutcases, but it's still easy and a bit more fair to say that they're mostly nutcases but maybe some of them are correct. Maybe. I think it's best to give it a chance at least.

Openmindedness in these respects has always seemed to me highly selective -- how openminded are you to the concept that most thunderbolts may be mere electromagnetic phenomena but maybe some thunderbolts are thrown down by Thor? Do you give that possibility a chance? Should we?

Or is it only the words that current society treats seriously e.g. "God" and "Jesus", that we should keep an open mind about, and not the names that past societies treated seriously?

-1[anonymous]11y
If billions of people think so, then yes, we should. It's not just that our society treats Jesus seriously, it's that millions of people have overwhelming personal evidence of Him. And most of them are not rationalists, but they're not mentally insane either.
1TheOtherDave11y
Is the number of people really all that relevant? I mean, there are over a billion people in the world who identify as believers of Islam, many of whom report personal experiences which they consider overwhelming evidence that there is no God but Allah, and Mahomet is His Prophet. But I don't accept that there is no God but Allah. (And, I'm guessing, neither do you, so it seems likely that we agree that the beliefs of a billion people are at least sometimes not sufficient evidence to compel confidence in an assertion.) Going the other way, there was a time when only a million people reported personal evidence of Jesus Christ as Lord. There was a time when only a hundred thousand people had. There was a time when only a thousand people had. Etc. And yet, if Jesus Christ really is Lord, a rationalist wants to believe that even in 13 A.D., when very few people claim to. And if he is not, a rationalist wants to believe that even in 2013 A.D. when billions of people claim to. I conclude that the number of people just isn't that relevant.
0[anonymous]11y
I think that if in 13 A.D. you had asked a rationalist whether some random Nazarene kid was our savior, "almost certainly not" would have been the correct response given the evidence. But twenty years later, after a whole lot of strong evidence came out, that rationalist would have adjusted his probabilities significantly. The number of people who were brought up in something doesn't matter, but given that there are millions if not billions of personal witnesses, I think God is a proposition to which we ought to give a fair chance.
0TheOtherDave11y
And by "God" here you specifically mean God as presented in the Church of Jesus Christ of Latter-Day Saints' traditional understanding of the Book of Mormon, and our collective traditional understandings of the New Testament insofar as they don't contradict each other or that understanding of the Book of Mormon, and our traditional understandings of the Old Testament insofar as they don't contradict each other or any of the above. Yes? But you don't mean God as presented in, for example, the Sufis' traditional understanding of the Koran, and our collective traditional understandings of the New Testament insofar as they don't contradict each other or that understanding of the Koran, and our traditional understandings of the Old Testament insofar as they don't contradict each other or any of the above. Yes? Is this because there are insufficient numbers of personal witnesses to the latter to justify such a fair chance?
-3[anonymous]11y
I mean deity or God in general. Because although they don't agree on the details, these billions of people agree that there is some sort of conscious higher Power. And they don't have to contradict each other in that.
0TheOtherDave11y
Well... hm. Is there sufficient evidence, on your account, to conclude (or at least take very seriously the hypothesis) that Thomas Monson communicates directly with a conscious higher Power in a way that you do not? Is there sufficient evidence, on your account, to conclude (or at least take very seriously the hypothesis) that Sun Myung Moon communicated directly with a conscious higher Power in a way that you do not?
-3[anonymous]11y
I think it's too difficult to take this reasoning into specific cases. That is, with the general reasoning I've been talking about, I'm going to conclude that I think it's best to take the general possibility of deity seriously. Given that, and given my upbringing and personal experience and everything else, I think that it's best to take Thomas Monson very seriously. I hardly know anything about Sun Myung Moon so I can't say anything about him. I can't possibly ask you to do that second part, but I think that the possibility of deity in general is a cause I will fight for. (edit: clarified)
4TheOtherDave11y
I see. So on your account, if I've understood it, I have sufficient evidence to justify a high confidence in a conscious higher Power consistent with the accounts of all believers in Abrahamic religions, though not necessarily identical to that described in any of those accounts, and the fact that I lack such confidence is merely because I haven't properly evaluated the evidence available to me. Yes? Just to avoid confusion, I'm going to label that evidence -- the evidence I have access to on this account -- E1. Going further: on your account, you have more evidence than E1, given your upbringing and personal experience and everything else, and your evidence (which I'll label E2) is sufficient to further justify a high confidence in additional claims, such as Thomas Monson's exceptional ability to communicate with that Power. Yes? And since you lack personal experiences relating to Sun Myung Moon that justify a high confidence in similar claims about him, you lack that confidence, but you don't rule it out either... someone else might have evidence E3 that justifies a high confidence in Sun Myung Moon's exceptional ability to communicate with that Power, and you don't claim otherwise, you simply don't know one way or the other. . Yes? OK, so far so good. Now, moving forward, it's worth remembering that personal experience of an event V is not our only, or even our primary, source of evidence with which to calculate our confidence in V. As I said early on in our exchange, there are many events I'm confident occurred which I've never experienced observing, and some events which I've experienced observing which I'm confident never occurred, and I expect this is true of most people. So, how is that possible? Well, for example, because other people's accounts of an event are evidence that the event occurred, as you suggest with your emphasis on the mystical experiences of millions (or billions) of people as part of E1. Not necessarily compelling evidence, because
-3[anonymous]11y
Yes, actually, that's spot on. Good job and thank you for helping me to figure out my own reasoning. Please continue...
3TheOtherDave11y
OK, good. So, summarizing your account as I understand it and continuing from there:

* Consider five propositions G1-G5 roughly articulable as follows:
  * G1: "there exists a conscious higher Power consistent with the accounts A1 of all believers in Abrahamic religions, though not necessarily identical to that described in any particular account in A1"
  * G2: "there exists a conscious higher Power consistent with the accounts A2 of Thomas Monson, where A2 is a subset of A1; any account Antm which is logically inconsistent with A2 is false."
  * G3: "there exists a conscious higher Power consistent with the accounts A3 of Sun Myung Moon, where A3 may or may not be a subset of A1; any account Ansmm which is logically inconsistent with A3 is false."
  * G4: "there exists a conscious higher Power consistent with the accounts A4 of all believers in any existing religion, Abrahamic or otherwise, though not necessarily identical to that described in any particular account in A4"
  * G5: "there exists a conscious higher Power consistent with the accounts A5 of some particular religious tradition R, where A5 is logically inconsistent with A1 and A2."
* 2: On your account there exists evidence, E1, such that a SREoE would, upon evaluating E1, arrive at high confidence in G1. Further, I have access to E1, so if I were an SREoE I would be confident in G1, and if I lack confidence in G1 I am not an SREoE.
* 3: On your account there exists evidence E2 that similarly justifies high confidence in G2, and you have access to E2, though I lack such access.
* 4: If there are two agents X and Y, such that X has confidence that Y is an SREoE and that Y has arrived at high confidence of a proposition based on some evidence, X should also have high confidence in that proposition even without access to that evidence.

Yes? (I'm not trying to pull a fast one here; if the above is significantly mis-stating any of what you meant to agree to, pull the brake cord now.) And you approached
-2[anonymous]11y
I don't think I can claim that your rejection of E1 means you are not a SREoE—this community is by far more SR in EE, the way we're talking about it at least, than those who believe G1. I'm not going to go around calling anyone irrational as long as their conclusions do come from a proper evaluation of the evidence. I can't really claim E2 is that much stronger than E1—many people have access to E2 but don't believe G2. What I'm trying to figure out is if this community thinks that any SREoE must necessarily reject G1 (based largely on the inconsistency of E1). I'm not claiming that a SREoE must accept G1 upon being exposed to E1. But assuming I did claim that I was a SREoE and you all weren't...no, I don't know. Because being a SREoE equates almost completely in my mind with being a rationalist in the ideal sense that this community strives for. That doesn't mean everyone here is a SREoE, but most of them appear to be doing their best. I'm curious, though, where else could this logic lead?
5TheOtherDave11y
I get that you're trying to be polite and all, and that's nice of you. Politeness is important, and the social constraints of politeness are a big reason I steered this discussion away from emotionally loaded terms like "rational," "irrational," "God," "faith," etc.in the first place; it's a lot easier to discuss what confidence a SREoE resides in G1 given E1 without getting offended or apologetic or defensive than to discuss whether belief in God is rational or irrational, because the latter formulation carries so much additional cultural and psychological weight. But politeness aside, I don't see how what you're saying can possibly be the case given what you've already agreed to. If E1 entails high confidence in G1, then an SREoE given E1 concludes that G1 is much more likely than NOT(G1), and an agent that does not conclude this is not an SREoE. That's just what it means for evidence to entail a given level of confidence in a conclusion, be it a low level or a high level. Which means that if you're right that I have evidence that entails reasonably high confidence in the existence of God, then my vanishingly low confidence in the existence of God means I'm not being rational on the subject. Maybe that's rude to say, but rude or not that's just what it means for me to have evidence that entails reasonably high confidence in the existence of God. And I get that you're looking for the same kind of politeness in return... that we can believe or not believe whatever we want, but as long as we don't insist it's irrational to conclude from available evidence that God exists, we can all get along. And in general, we're willing to be polite in that way... most of us have stuff in our lives we don't choose to be SREoEs about, and going around harassing each other about it is a silly way to spend our time. There are theists of various stripes on LW, but we don't spend much time arguing about it. But if you insist on framing the discussion in terms of epistemic rationa
-5[anonymous]11y
7Bugmaster11y
I agree. As soon as a theist can demonstrate some evidence for his deity's existence... well, I may not convert on the spot, given the plethora of simpler explanations (human hoaxers, super-powered alien teenagers, stuff like that), but at least I'd take his religion much more seriously. This is why I mentioned the prayer studies in my original comment. Unfortunately, so far, no one managed to provide this level of evidence. For example, a Mormon friend of mine claimed that their Prophet can see the future. I told him that if the Prophet could predict the next 1000 rolls of a fair six-sided die, he could launch a hitherto unprecedented wave of atheist conversions to Mormonism. I know that I personally would probably hop on board (once alien teenagers and whatnot were taken out of the equation somehow). That's all it would take -- roll a die 1000 times, save a million souls in one fell swoop. I'm still waiting for the Prophet to get back to me...
-4[anonymous]11y
This one is a classic Sunday School answer. The God I was raised with doesn't do that sort of thing very often because it defeats the purpose of faith, and knowledge of God is not the one simple requirement for many versions of heaven. It is necessary, they say, to learn to believe on your own. Those who are convinced by a manifestation alone will not remain faithful very long. There's always another explanation. So yes, you're right, God (assuming Mormonism is true for a moment, as your friend does) could do that, but it wouldn't do the world much good in the end.
6JoshuaZ11y
The primary problem with this sort of thing is that apparently God was willing to do full-scale massive miracles in ancient times. So why the change?
2Bugmaster11y
Right, but hopefully this explains one of the reasons why I'm still an atheist. From my perspective, gods are no more real than 18th-level Wizards or Orcs or unicorns; I don't say this to be insulting, but merely to bring things into perspective. There's nothing special in my mind that separates a god (of any kind) from any other type of a fictional character, and, so far, theists have not supplied me with any reason to think otherwise. In general, any god who a priori precludes any possibility of evidence for its existence is a very hard (in fact, nearly impossible) sell for me. If I were magically transported from our current world, where such a god exists, into a parallel world where the god does not exist, how would I tell the difference? And if I can't tell the difference, why should I care?
3Desrtopa11y
Well, if in one world, your disbelief results in you going to hell and being tormented eternally, I think that would be pretty relevant. Although I suppose you could say in that case you can tell the difference, but not until it's too late.
1Bugmaster11y
Indeed. I have only one of me available, so I can't afford to waste this single resource on figuring things out by irrevocably dying.
7JoshuaZ11y
Right, simpler explanations start with a higher probability of being correct. And if two explanations for the same data exist, you should assign a higher chance to the one that is simpler. Why should one give "it a chance" and what does that mean? Note also that "nutcase" is an overly strong conclusion. Human reasoning and senses are deeply flawed, and it is very easy for them to develop problems. That doesn't require nutcases. For example, I personally get sleep paralysis. When that occurs, I get to encounter all sorts of terrible things: demons, ghosts, aliens, the Borg, and occasionally strange tentacled things that would make Lovecraft's monsters look tame. None of those things exist; I have a minor sensory problem. The point of using something like schizophrenia as an example is that it is one of the most well-known explanations for the more extreme experiences or belief sets. But the general hypothesis that's relevant here isn't "nutcase" so much as "brain had a sensory or reasoning error, as they are wont to do."
0Bugmaster11y
In this case, "there are no gods" is still the null hypothesis, but (from the perspective of those people) it has been falsified by overwhelming evidence. Some kind of overwhelming evidence coming directly from a deity would convince me, as well; but, so far, I haven't seen any (which is why I haven't mentioned it in my post, above). I can't speak for other atheists, but I personally think that it is entirely possible that certain gods exist. For example, I see no reason why the Trimurti (Brahma/Vishnu/Shiva) could not exist in some way. Of course, the probability of their existence is so vanishingly small that it's not worth thinking about, but still, it's possible.
-2[anonymous]11y
I appreciate that you try to keep the possibility open, but I think it's kind of silly to say that there is a possibility, just a vanishingly small one. Mathematically, there's no sense in saying that an infinitesimal is actually any greater than 0 except for technical reasons—so perhaps you technically believe that the Trimurti could exist, but for all intents and purposes the probability is 0.
1drethelin11y
If you're ruling out infinitesimals then yes, I don't think there's any chance the gods worshipped by humans exist.
1[anonymous]11y
A chance of 0 or effectively 0 is not conducive to a rational analysis of the situation. And I don't think there's enough evidence out there for a probability that small.
9Bugmaster11y
Why not? What probability would you put on the proposition that the following things exist?

* Tolkien-style Elves
* Keebler Elves
* Vishnu, the Preserver
* Warhammer-style Orcs
* Thor, the Thunderer
* Chernobog/Bielobog, the Slavic gods of fortune (bad/good respectively)
* Unicorns

I honestly do believe that all of these things could, potentially, exist.
-1[anonymous]11y
If I really thought about it, I would have to say that there's quite a good chance that somewhere through all the universes there's some creature resembling a Keebler elf.
3Bugmaster11y
All right, so does this mean that living your life as though Keebler Elves did not exist at all would be irrational ? After all, there's a small probability that they do exist...
-2[anonymous]11y
I never called anyone irrational for not believing in elves. I only said that a perfectly rational person would keep the possibility open. Please stop exaggerating my arguments (and those of, for instance, the Book of Mormon) in order to make them easier to dismiss. It's an elementary logical fallacy which I'm finding quite a lot of here.
4Bugmaster11y
You kinda did: In my own personal assessment, the probability of Keebler Elves existing is about the same as the probability of any major deities existing -- which is why I don't spend a lot of time worrying about it. My assessment is not dogmatic, though; if I met a Keebler Elf in person, or saw some reputable photographic evidence of one, or something like that, then I'd adjust the probability upward.
0Prismattic11y
I'd assign a higher probability to Keebler Elves than to an interventionist deity. Keebler Elves don't have issues with theodicy.
0Bugmaster11y
I think it depends on the deity; for example, Thor doesn't have issues with theodicy, either. But, IMO, at this point we're pretty much down to discussing which epsilon is smaller; and in practice, the difference is negligible.
2drethelin11y
What probability do you actually think I should assign? More or less than to me winning the lottery if I buy a ticket? Is winning the lottery an infinitesimally small chance or should I actually consider it?
5JoshuaZ11y
Do you mean to ask this about specifically the religion issue or things in general? Keep in mind, that while policy debates should not be one sided, that's because reality is complicated and doesn't make any effort to make things easy for us. But, hypotheses don't function that way- the correct hypotheses really should look extremely one-sided, because they reflect what a correct description of reality is. So the best arguments for an incorrect hypothesis are by nature going to be weak. But if I were to put on my contrarian arguer hat for a few minutes and give my own personal response, I'd say that first cause arguments are possibly the strongest argument for some sort of deity.
-1[anonymous]11y
It's a good point. Of course, hundreds of years ago, the argument was also pretty one-sided, but that doesn't mean anyone was correct. I also don't think that the argument really is one-sided today; I just think that the two sides manage to ignore each other quite thoroughly. I'm not expecting this site to house a debate on the possibility of God's existence. Clearly this site is for atheists. I'm asking, is that actually necessary? I suppose you're saying that yes, it is impossible for rationality and religion to coexist, and that's why there are very few theistic rationalists. I'm still not convinced of that. First cause arguments are a strange existential puzzle, depending on the nature of your God. Any thought system that portrays God as a sort of person will run into the same problem of how God came into existence.
9JoshuaZ11y
A rationalist should strive to have a given belief if and only if that belief is true. I want to be a theist if and only if theism is correct. Note also that getting the right answers to these sorts of questions matters far more than some would estimate. If Jack Chick is correct, then most people here (and most of the world) are going to burn in hell unless they are saved. And this sort of remark applies to a great many religious positions (less so for some Muslims, most Jews, and some Christians, but the basic point is true for a great many faiths). In the other direction, if there isn't any protective, intervening deity, then we need to take serious threats to humanity's existence, like epidemics, asteroids, gamma ray bursts, nuclear war, bad AI, nanotech, etc. a lot more seriously, because no one is going to pick up the pieces if we mess up. To a large extent, most LWians see the basics of these questions as well-established. Theism isn't the only thing we take that attitude about. You also won't see almost any discussion here of continental philosophy, for example.
-1[anonymous]11y
So is LW for people who think highly rationally, or for atheists who think highly rationally? Are those necessarily the same? If not, where are the rational theists? You're assuming that "no God" is the null hypothesis. Is there a good, rational reason for this? One could just as easily argue that you should be an atheist if and only if it's clear that atheism is correct. Without any empirical evidence either way, is it more likely that there is some sort of Deity or that there isn't?
[-][anonymous]11y130

You're assuming that "no God" is the null hypothesis. Is there a good, rational reason for this? One could just as easily argue that you should be an atheist if and only if it's clear that atheism is correct. Without any empirical evidence either way, is it more likely that there is some sort of Deity or that there isn't?

IMO there's no such thing as a null hypothesis; epistemology doesn't work like that. The more coherent approach is bayesian inference, where we have a prior distribution and update that distribution on seeing evidence in a particular way.

If there were no empirical evidence either way, I'd lean towards there being an anthropomorphic god (I say this as a descriptive statement about the human prior, not normative).

The trouble is that once you start actually looking at evidence, nearly all anthropomorphic gods get eliminated very quickly, and in fact the whole anthropomorphism thing starts to look really questionable. The universe simply doesn't look like it's been touched by intelligence, and where it does, we can see that it was either us, or a stupid natural process that happens to optimize quite strongly (evolution).

So while "some sort of god"... (read more)

7TheOtherDave11y
Neither, really. It's for people who are interested in epistemic and instrumental rationality. There are a number of such folks here who identify as theists, though the majority don't. Can you clarify what you mean by "some sort of Deity"? It's difficult to have a coherent conversation about evidence for X without a shared understanding of what X is.
6Desrtopa11y
In general, it's not rational to posit that anything exists without evidence. Out of the set of all things that could be posited, most do not exist. "Evidence" need not be direct observation. If you have a model which has shown good predictive power, which predicts a phenomenon you haven't observed yet, the model provides evidence for that phenomenon. But in general, people here would agree that if there isn't any evidence for a proposition, it probably isn't true. ETA: see also Absence of evidence is evidence of absence.
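The linked result has a compact quantitative form: if observing some evidence E would raise the probability of a hypothesis H, then failing to observe E must lower it. A toy check, with illustrative numbers:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) by Bayes' rule, given a prior and both likelihoods."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.5
p_h_given_e = posterior(prior, 0.8, 0.2)
# Not observing E is itself an observation, with likelihoods 1 - P(E | ...).
p_h_given_not_e = posterior(prior, 1 - 0.8, 1 - 0.2)

assert p_h_given_e > prior > p_h_given_not_e  # evidence cuts both ways
```

Since P(H) is a weighted average of P(H|E) and P(H|not-E), the two posteriors cannot both sit on the same side of the prior: whenever E is evidence for H, its absence is evidence against H.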
-1[anonymous]11y
Certainly. But why is "God" the proposition, and not "no God?"

Because nearly all things that could exist, don't. When you're in a state where you have no evidence for an entity's existence, then odds are that it doesn't exist.

Suppose that instead of asking about God, we ask "does the planet Hoth, as portrayed in the Star Wars movies, exist?" Absent any evidence that there really is such a planet, the answer is "almost certainly not."

If we reverse this, and ask "Does the planet Hoth, as portrayed in the Star Wars movies, not exist?" the answer is "almost certainly."

It doesn't matter how you specify the question, the informational content of the default answer stays the same.

0[anonymous]11y
I don't think that the Hoth argument applies here, because what we're looking for is not just some teapot in some random corner of the universe—it's a God actively involved in our universe. In other words, if God does exist, He's a very big part of our existence, unlike your teapot or Hoth.
6Desrtopa11y
That's a salient difference if his involvement is providing us with evidence, but not if it isn't. Suppose we posit that gravitational attraction is caused by invisible gravity elves, which pull masses towards each other. They'd be inextricably tied up in every part of our existence. But absent any evidence favoring the hypothesis, why should we suspect they're causing the phenomenon we observe as gravity? In order for it to make sense for us to suspect gravity elves, we need evidence to favor gravity elves over everything else that could be causing gravity.
-2[anonymous]11y
I suppose it's fair to say that if our universe was created by a clockmaker God who didn't interfere with our world, then it wouldn't matter to us whether or not He existed. But since there's a lot of reason to think that God does interact with us humans (like, transcripts of His conversations with them), then it does matter.
1Desrtopa11y
Well, I'm willing to discuss the evidence for and against that proposition. Naturally, I would not be an atheist if I thought the weight of evidence was in favor of an interventionist god existing.
-2[anonymous]11y
Naturally. But there have been a lot of debates about which way the evidence points, and none of them seem to have convinced anyone.
0Desrtopa11y
Some of them have certainly convinced people. I've convinced a number of people myself, and I've known plenty of other people who were convinced by debates with other people (or even more often, by observing debates between other people, since it's easier to change your mind when you're not locked in an adversarial debate mindset. This is why it's important not to fall into the trap of thinking of your debate partner as an opponent.) A lot of religious debates are not productive, people tend to go into them very attached to their conclusions, but they're by no means uniformly fruitless.
-2[anonymous]11y
I like debates a lot, and I've very much enjoyed whatever you call this here. But I'm not interested in a full-blown debate here and now, especially since there are about five of you.
-6[anonymous]11y
5JoshuaZ11y
Not really. Bayesian reasoning doesn't have any notion of a null hypothesis. I could just as well have said "I want to be an atheist if and only if atheism is correct". One can talk about the prior probability of a given hypothesis, and that's a distinct issue which quickly gets very messy. In particular, it is extremely difficult to both a) establish what priors should look like and b) not get confused about whether one is taking for granted very basic evidence about the world around us (e.g. its existence). One argument, popular at least here, is that from an Occam's razor standpoint, most deity hypotheses are complicated and only appear simple due to psychological and linguistic issues. I'm not sure how much I buy that sort of argument. But again, it is worth emphasizing that one doesn't need control of the priors except at a very rough level. It may help if you read more on the difference between Bayesian and frequentist approaches. The general approach of LW is primarily Bayesian, whereas notions like a "null hypothesis" are essentially frequentist.
-1[anonymous]11y
You're right that prior probability gets very, very messy. It's a bit too abstract to actually be helpful to us. So, then, all we can do is look at the evidence we do have. You're saying that the argument is one-sided; there is no evidence in favor of theism, at least no good evidence. I agree that there is a lot of bad evidence, and I'm still looking for good evidence. You've said you don't know of any. Thank you. That's what I wanted to know. In general I don't think it's healthy to believe the opposing viewpoint literally has no case.

In general I don't think it's healthy to believe the opposing viewpoint literally has no case.

Do you think that young earth creationists have no substantial case? What about 9/11 truthers? Belief in astrology? Belief that cancer is a fungus (no, I'm not making that one up)? What about anything you'll find here?

The problem is that some hypotheses are wrong, and will be wrong. There are always going to be a lot more wrong hypotheses than right ones. And in many of these cases, there are known cognitive biases which lead to the hypothesis type in question. It may help to again think about the difference between policy issues (shouldn't be one-sided), and factual questions (which, once one understands most details, should be).

4khafra11y
You cannot escape the necessity of dealing with priors, however messy they are. The available evidence supports an infinite number of hypotheses. How do you decide which ones to consider? That is your prior, and however messy it may be, you have to live with it.
4Desrtopa11y
How legitimate does "most legitimate" have to be? If I thought there were any criticisms sufficiently legitimate to seriously reconsider my viewpoints, I would have changed them already. To the extent that my religious beliefs are different than they were, say, fifteen years ago, it's because I spent a long time seeking out arguments, and if I found any persuasive, I modified my beliefs accordingly. But I reached a point where I stopped finding novel arguments for theism long before I stopped looking, so if there are any arguments for theism that I would find compelling, they see extremely little circulation. The arguments for "theism" which I see the least reason to reject are ones which don't account for anything resembling what we conventionally recognize as theism, let alone religion, so I'm not sure those would count according to the criteria you have in mind.
0[anonymous]11y
I'd be happy to hear what you've got. I can't just ask you to share all of your life-changing experiences, obviously. Having looked for new arguments and not found any good ones is a great position, I think, because then you can be pretty sure you're right. I don't know if I could ever convince myself there are no new arguments, though.
3Desrtopa11y
I'm certainly not convinced that there are no new arguments, but if there were any good arguments, I would expect them to have more currency. If you want to explain what good arguments you think there are, I'd certainly be willing to listen. I don't want to foist all the work here onto you, but honestly, having you just cover what you think are the good arguments would be simpler than me covering all the arguments I can think of, none of which I actually endorse, without knowing which if any you ascribe to.
-1[anonymous]11y
I'm sorry, I can't help you with that. I'm sure that you've done much more research on this than I have. I'm looking for decent arguments because I don't believe all these people who say there aren't any.
0Desrtopa11y
Well, what do you mean by decent? Things I accept as having a significant weight of evidence, or things I can understand how people would see them as convincing, even if I see reasons to reject them myself? In the latter sense, it makes sense to assume that there must be good arguments, because if there weren't arguments that people found convincing, then so much of the world would most likely not be convinced. But in the former sense, it doesn't make sense to assume that there must be good arguments in general, because for practical purposes it means you'd be assuming the conclusion that a god is real, and it makes even less sense to assume that I specifically would have any, because if I did, I wouldn't disbelieve in the proposition that there is a god. One of the things that those of us who're seriously trying to be rational share is that we try to conduct ourselves so that when the weight of evidence favors a particular conclusion, we don't just say "well, that's a good point, and I acknowledge it," we adopt that conclusion. Our positions should represent, not defy, the evidence available to us.
-3[anonymous]11y
This is largely a problem of the nature of each side's evidence. Most of the evidence in favor of God is quickly dismissed by those who think they're more rational than the rest of humanity, and the biggest piece of evidence I'm being given against God is that there is no evidence for Him (at least none that you guys accept). Absence of evidence is at best a passive, weak argument (which common wisdom would generally reject). And no, I'm not assuming that God is real, I'm simply assuming that there's a non-negligible chance of it. Is that too much to ask?
2TheOtherDave11y
And the same question arises that has been raised several times: how ought I address the evidence from which many Orthodox Jews conclude that Moses was the last true Prophet of YHWH? From which many Muslims conclude that Mahomet was the last true Prophet of YHWH? From which many Christians conclude that Jesus was the last true Prophet of YHWH? From which millions of followers of non-Abrahamic religions conclude that YHWH is not the most important God out there in the first place? Is it not reasonable to address the evidence from which Mormons conclude that Lehi, or Kumenohni, or Smith, or Monson, were/are Prophets of YHWH the same way, regardless of what tradition I was raised in? If skepticism about one religion's claims is not justified, then it seems to follow naturally that skepticism about the others' claims is not justified either.
-1[anonymous]11y
It's important to note that in fact, most Muslims and many Christians (I don't know Judaism as well) believe that Moses, Mohammed, and Jesus were all true prophets. They differ in a few details, but the general message is the same. I think it is definitely reasonable to address all of this evidence. One of Thomas Monson's predecessors expressly stated that he believed God truly did appear to Mohammed. I never said I was necessarily skeptical of claims by Jews or Muslims. Some of them must have been brain glitches, just as some claims by Mormons probably are too. But I have no problem accepting that Jews, Muslims, and Christians (maybe even atheists) can all receive divine revelation. As I said before, it's impractical to try to stretch this logic to argue in favor of any one religion. I'm talking about the existence of God in general.
2TheOtherDave11y
FWIW, the form of Judaism I was raised in entails the assertion that Jesus Christ was not the Messiah, so is logically incompatible with most forms of Christianity. That aside, though, I'm content to restrict our discussion to non-sectarian claims; thanks for clarifying that. I've tried to formalize this a little more in a different thread; probably best to let this thread drop here.
-1[anonymous]11y
You're right, silly me, I honestly should have remembered that. Judaism seems less...open...in that way. But I still think that details of the nature of God aside, the general message of each of these religions, namely "la ilaha ila allah," is the same. ("There is no God but God," that is. It's much more elegant in Arabic.) This whole mess is certainly in need of some threads being dropped or relocated. Good idea—where is it?
0TheOtherDave11y
I refer to this thread.
-2[anonymous]11y
Oh yes, it's wonderful, thank you.
2Desrtopa11y
Well, if we're mistaken in dismissing the evidence theists raise in support of the existence of gods, then of course, with the weight of evidence in favor of it, it's reasonable to assign a non-negligible probability to it. The important question here is whether the people dismissing the purported evidence in favor are actually correct. Suppose we're discussing the question of how old the earth is. One camp claims the weight of evidence favors the world being about 4.5 billion years old, another claims the weight of evidence favors it being less than 12,000 years old. Each camp has arguments they raise in favor of this point, and the other camp has reasons for rejecting the other camp's claims. At least one of these camps must be wrong about the weight of evidence favoring their position. There's nothing wrong with rejecting purported evidence which doesn't support what its advocates claim it supports. Scientists do this amongst each other all the time, picking apart whether the evidence of their experiments really supports the authors' conclusions or not. You have to do that sort of thing effectively to get science done. As far as I've seen, you haven't yet asked why we reject what you consider to be evidence in favor of an interventionist deity. Why not do that? Either we're right in rejecting it or we're not. You can try to find out which.
-5[anonymous]11y
3TheOtherDave11y
That's a complicated question in general, because "our own way of thinking" is not a unitary thing. We spend a lot of time disagreeing with each other, and we talk about a lot of different things. But if you specifically mean atheism in its "it is best to reason and behave as though there are no gods, because the alternative hypotheses don't have enough evidence to justify their consideration" formulation, I think the most legitimate objection is that it may turn out to be true that, for some religious traditions -- maybe even for most religious traditions -- being socially and psychologically invested in that tradition gets me more of what I want than not being invested in it, even if the traditions themselves include epistemically unjustifiable states (such as the belief that an entity exists that both created the universe and prefers that I not eat pork) or false claims about the world (as they most likely do, especially if this turns out to be true for religious traditions that disagree with one another about those claims). I don't know if that's true, but it's plausible, and if it is true it's important. (Not least because it demonstrates that those of us who are committed to a non-religious tradition need to do more work at improving the pragmatic value of our social structures.)
3[anonymous]11y
As for atheism, I don't mean those that think religion is good for us and we ought to believe it whether or not it's true. I meant rational thinkers who actually believe God realistically could exist. It's definitely interesting to think about trying to convince yourself to believe in God, or just act that way, but is it possible to actually believe with a straight face?

Well, you asked for the most legitimate criticisms of rejecting religious faith.

Religious faith is not a rational epistemology; we don't arrive at faith by analyzing evidence in an unbiased way.

I can make a pragmatic argument for embracing faith anyway, because rational epistemology isn't the only important thing in the world nor necessarily the most important (although it's what this community is about).

But if you further constrain the request to seeking legitimate arguments for treating religious faith (either in general, or that of one particular denomination) as a rational epistemology, then I can't help you. Analyzing observed evidence in an unbiased way simply doesn't support faith in YHWH as worshiped by 20th-century Jews (which is the religious faith I rejected in my youth), and I know of no legitimate epistemological criticism that would conclude that it does, nor of any other denomination that doesn't have the same difficulty.

Now, if you want to broaden your search to include not only counterarguments against rejecting religious faith of specific denominations, but also counterarguments against rejecting some more amorphous proto-religious belief like "there exis... (read more)

1[anonymous]11y
Thank you for answering my question. If I read it right you're saying "No, it's not possible to reconcile religion and rationality, or at least I can't refer you to any sane person who tried."
6TheOtherDave11y
If I understand what you're using "religion" and "rationality" to mean, then I would agree with the first part. (In particular, I understand you to be referring exclusively to epistemic rationality.) As for the second part, there are no doubt millions of sane people who tried. Hell, I've tried it myself. The difficulty is not in finding one, but rather in finding one who provides you with what you're looking for.
2Qiaochu_Yuan11y
What do you mean by "your own way of thinking" here? I can think of the following possible interpretations:

  * The way I personally think about things

  * The way this community thinks about things

  * Atheism and skepticism in general
-2[anonymous]11y
Any of these, really. It takes incredible strength to recognize flaws in your entire way of thinking, but if anyone can do it, the Rationalists ought to be able to. What I'd really love is a link to someone smart saying "This is why I think the Less Wrong people are all misled, and here are good reasons why." But that's probably too much to expect, even around here.

Okay. This may not be the kind of thing you had in mind, but the way I personally think about things:

  • is probably not focused enough on emotions. I'm not very good at dealing with emotions, either myself or other people's, and I imagine that someone who was better would have very different thoughts about how to deal with people both on the small scale (e.g. interpersonal relationships) and on the large scale (e.g. politics).

  • may overestimate the value of individuals (e.g. in their capacity to affect the world) relative to organizations.

The way this community thinks about things:

  • is biased too strongly in directions that Eliezer finds interesting, which I suppose is somewhat unavoidable but unfortunate in a few respects. For example, Eliezer doesn't seem to think that computational complexity is relevant to friendly AI and I think this is a strong claim.

  • is biased towards epistemic rationality when I think it should be more focused on instrumental rationality. This is a corollary of the first bullet point: most of the Sequences are about epistemic rationality.

  • is biased towards what I'll call "cool ideas," e.g. cryonics or the many-worlds interpretation of quan

... (read more)
4Kawoomba11y
Could you elaborate?
2Qiaochu_Yuan11y
On why Eliezer doesn't seem to think that or why I think that this is a strong claim? We had a brief discussion about this here.
2Nornagest11y
That usually gets you a culture of inconsequential criticism, where you can be as loudly contrarian as you want as long as you don't challenge any of the central shibboleths. This is basically what Eliezer was describing in "Real Weak Points", but it shows up in a lot of places; many branches of the modern social sciences work that way, for example. It gets particularly toxic when you mix it up with a cult of personality and the criticism starts being all about how you or others are failing to live up to the Great Founder's sacrosanct ideals. I'm starting to think it might not be possible to advocate for a coherent culture that's open to changing identity-level facts about itself; you can do it by throwing out self-consistency, but that's a cure that's arguably worse than the proverbial disease. I don't think strength of will is what's missing, though, if anything is.
-3[anonymous]11y
Yes. And that's what I'm unrealistically looking for—not just disagreement, but fundamental disagreement. And by fundamental I don't mean the nature of the Singularity, as central as that is to some. I mean things like "rational thought is better than irrational thought" or "religion is not consistent with rational thought." Even if they're not spoken, they're important and they're there, which means they ought to be up for debate. I mean "ought to" in the sense that the very best, most intellectually open society imaginable would have already debated these and come to a clear conclusion, but would be willing to debate them again at any time if there was reason to do so.
0TheOtherDave11y
What, on your view, constitutes a reason to debate issues about which a community has come to a conclusion? Relatedly, on your view, can the question of whether a reason to debate an issue actually exists or not ever actually be settled? That is, shouldn't the very best, most intellectually open society imaginable on your account continue to debate everything, no matter how settled it seems, because just because none of its members can currently think of a reason to do so is insufficient grounds not to?
-3[anonymous]11y
I think it's safe to end a debate when it's clear to outside observers (these are important) that it's not going anywhere new. An optimal society listens to outsiders as well.
1TheOtherDave11y
OK. Thanks for answering my question.
1[anonymous]11y
These are good, thank you. About epistemic vs. instrumental rationality, though: I had never heard those terms but it seems like a pretty simple difference of what rationality is to be used for. The way I understand it, Less Wrong is quite instrumentally focused. There are many posts as well as sequences (and all of HPMOR) about how to apply rationality to your everyday life, in addition to those dealing only with technical probabilities (like Pascal's Mugging—not realistic). Personally I'm more interested in the epistemic side of things and not a fan of assurances that these sequences will substantially improve your relationships or anything like that. But that's just me.
5wedrifid11y
There are people here who say that kind of thing all the time... whether they are smart and the reasons are actually good is somewhat less certain.
1[anonymous]11y
Right, that's the problem. There are plenty of sites saying why LW is a cult, just as there are plenty of ignorant religion-bashers. I've found many intelligent atheists, and I'm sure that there are rational intellectuals out there who disagree with LW. But where are they?

I've found many intelligent atheists, and I'm sure that there are rational intellectuals out there who disagree with LW. But where are they?

If you mean rational intellectuals who are theists and disagree with LW I cannot help you. Finding those who disagree with LW on core issues is less difficult. Robin Hanson for example. For an intelligent individual well informed of LW culture who advocates theism you could perhaps consider Will Newsome. Although he has, shall we say, 'become more eccentric than he once was' so I'm not sure if that'll satisfy your interest.

1[anonymous]11y
Thanks, I'll look them up.
8Intrism11y
As far as I know, most criticism of LW focuses on its taking certain strange problems seriously, not on atheism. LW has an unusual focus on Pascal-like problems, on artificial intelligence, on acausal trade, on cryonics and death in general, and on Newcomb's Problem. Many of these focuses result in beliefs that other rationalist communities consider "strange." There is also some criticism of Eliezer's position on quantum mechanics, but I'm not familiar enough with that issue to comment on it.
1metatroll11y
I have no sense of what's important, and respond to stimuli that I should just ignore.

Together with Vallinder, I'm working on a paper on wild animal suffering. We decided to poll some experts on animal perception about their views on the likelihood that various types of animals can suffer. It now occurs to me that it might be interesting to compare their responses with those of the LW community. So, if you'd like to participate, click on one of the links below. The survey consists of only five questions and completing it shouldn't take more than a minute.

  • Click here if your year of birth is an even number

  • Click here if your year of bir

... (read more)
5fubarobfusco11y
"Foos can suffer" could mean "all foos can suffer", "the prototypical foo can suffer", or "there exists a foo that can suffer". You might clarify whether "mammals" is meant to include humans and other primates.
1Pablo11y
Thanks. In the cover email we sent to the researchers, we did make it clear that the survey was about suffering in non-human animals, so the statement about mammals should be read as excluding members of our species (but not other primates). As for the alternative interpretations of 'x can suffer', we thought the natural interpretation was 'At least some species in this group can suffer', but I agree that we could have phrased the sentence less ambiguously.
2Pablo11y
Thanks to everyone who participated. The survey is now closed, and the results are here. There is one tab for LessWrong respondents and one tab for expert respondents.
2Prismattic11y
Nitpicking: The set of mammals includes humans.
0[anonymous]11y
Can you clarify this a bit with examples of what you had in mind?

I have a question about linking sequence posts in comment bodies! I used to think it was a nice, helpful thing to do, such as citing your sources and including a convenient reference. But then it struck me that it might come off as patronizing to people that are really familiar with the sequences. Oops. Any pointers for striking a good balance?

Linking old posts helps all of the new readers who are following the conversation; this is probably more important than any effects on the person you're directly responding to.

Always err on the side of littering your comment with extra links. IME, that's more practical and helpful, and I've never personally felt irked when reading posts or comments with lots of links to basic Sequence material.

In most cases, I've found that it actually helps remember the key points by seeing the page again, and helps most arguments flow more smoothly.

There is no balance. It's always better to provide the links.

9Kawoomba11y
If you got a reference in mind, linking it will always be more helpful than not.
2TimS11y
The only failure mode to avoid is implicitly or explicitly stating "Because you haven't read X, your input is not worth considering." There was a time when that was a common failure mode on LW ("Go read the Sequences, then we'll talk"). Less so now.
0TheOtherDave11y
I generally take a moment to think about how relevant the Sequence post is. Most of the time, I conclude that <10% of the post is actually relevant to my point, so I don't bother linking, as it seems like it enormously diffuses what I'm trying to express. (I don't link nominally relevant wikipedia articles for similar reasons.)

Anyone here have experience hiring people on sites like Mechanical Turk, oDesk, TaskRabbit, or Fiverr? What kind of stuff did you hire them to do, and how good were they at doing it? It seems like these services could be potentially quite valuable so I'd like to get an idea of what it's possible to do with them.

MIRI has hired an artist, an LW programmer, and probably some others on oDesk. One person I heard about pays people $1/hr on oDesk to just sit with him on Skype all day and keep him on task.

0Tenoke11y
That's pretty useful. I would definitely be interested to hear what works best and how such arrangements affect productivity.
4niceguyanon11y
I have used Fiverr to hire a professional voice actor to read short messages. For small scripting jobs or Photoshop work, I have always found reddit's r/forhire subreddit useful.
2Benquo9y
I've hired TaskRabbits for the following tasks, with the following levels of success:

  • Drive me from DC to Baltimore and back the next day - perfect & cheap

  • Assemble a Superintelligence owl costume and deliver it to me on the same day, with just a picture and a suggestion for the method - perfect

  • Pick up laundry from my back porch, have it washed, dried, folded, and return it in boxes - perfect

  • Make me an Anki flashcard deck for some faces and names from a business's Our Team page - perfect

  • Data entry - good, though slow

  • Find me a good haircut place and style - meh

  • Find Toastmasters clubs nearby, schedule times for me to sit in on a meeting - okay, did most of it, but the calendar invitations they sent me were in the wrong time zone so the times were off.

  • Find me a Rolfer - tried, but people didn't return their calls. However, I had immediate success when I made calls myself, so I have to wonder how hard they tried.

  • Assemble furniture, put privacy window film on windows - furniture ok, windows no

  • Pack and mail a bunch of books - nope. Took books, brought them back. Cost me time.
0Matt_Simpson11y
Experimental economists use mechanical turk sometimes. At least, were encourage to use it in the experimental economics class I just took.

As a stereotypical twenty-something recent graduate I am lacking in any particular career direction. I've been considering taking various psychometric or career aptitude tests, but have found it difficult to find unbiased reports on their usefulness. Does anyone have any experience or evidence on the subject?

5RomeoStevens11y
imagine your ideal workplace, try to quantify what makes it ideal, and then work backwards. Or just try to make the most money you can since you're young and probably have a high stress tolerance given the lack of stressors elsewhere (children, marriage, housing, health, etc.)

I have looked through this thread, bravely started by ibidem, and I have noticed what seems like a failure mode by all sides. A religious person does not just believe in God, s/he alieves in God, too, and logical arguments are rarely the best way to get through to the relevant alieving circuit in the brain. Oh, they work eventually, given enough persistence and cooperation, but only indirectly. If the alief remains unacknowledged, we tend to come up with logical counterarguments which are not "true rejections". As long as the alief is there, the... (read more)

3Intrism11y
If I were talking to a religious person elsewhere, that would make sense. But, this is LessWrong, and the respectful way to have this discussion here is to depend upon logic and rationalism. Anything else, and in my opinion we'd be talking down to him.
8shminux11y
Sorry, we don't live in a should-universe, either. If your goal is to influence a religious person's perception of his/her faith, you do what it takes to get through, not complain that the other party is not playing by some real or imaginary rules. But hey, feel free to keep talking about logic, rationalism and respect. That's what two-boxers do.
1rocurley11y
Two-boxers don't only do wrong things, and it's not obvious this is actually related to two-boxing.
0shminux11y
Two-boxers live in a should-universe, given how they insist on following "logic" over evidence.
-1[anonymous]11y
Interesting. I'd never heard of alief but it's a good way of explaining things. This is partly why I said (somewhere) that I don't think science will ever be able to fully prove this issue one way or the other—religion or lack thereof is necessarily a matter of alief as well as belief, and it's impossible in practice to look at this issue entirely rationally. (I'm sure it's much too late now to claim I never intended to start a debate about religion. Now that there are about fifteen people all arguing against me I don't think I can keep it up, but I sure was asking for it.)
7bartimaeus11y
Remember, your post has (at the time of this comment at least) a score of 4. Subjects that are "taboo" on LessWrong are taboo because people tend to discuss them badly. You asked some legitimate questions, and some people provided you with good responses. If you're willing to consider changing your mind, the next step would be to read the sequences. A lot of what you mention is answered there, such as:

* Absence of evidence is evidence of absence
* The Fallacy of Grey (specifically, when you mention that because we don't know the whole truth, we can't objectively evaluate evidence)
* 0 and 1 are not probabilities. This one actually supports what you were saying, where you were entirely right that you can't assign a probability of 0 to the existence of God. But you still don't know if this probability is 0.9, 0.1, 0.01 or 0.0000001. See http://lesswrong.com/lw/ml/but_theres_still_a_chance_right/
-1[anonymous]11y
I've read several of the sequences, and I'm fairly familiar with this community's way of thinking. Everyone is referring me to Absence of Evidence; I think that it's a weak argument in the first place, but it also seems to be the only one a lot of people have.

Everyone is referring me to Absence of Evidence; I think that it's a weak argument in the first place

Do you think it's a weak argument in general, or just a weak argument with respect to religion in particular?

If the former, it would certainly help if you could explain that. If the latter, do you think that religion is a special case with respect to need for evidence, or are you simply arguing that there is evidence available to us? And if the last one, why not discuss that evidence?

0[anonymous]11y
I think it's weak when it's essentially the only argument a person has against religion.

Hardly anyone treats it as the only argument against religion, but for many people here it is a fully sufficient argument. You just need to apply the principle of parsimony (Occam's razor) correctly.

Now a very weak way of applying it is as follows "In the absence of evidence of a deity, a hypothesis of no god is simpler/more parsimonious than the hypothesis that there is a god. So there is no god". If that's what you think we're arguing, I can understand why you think it weak.

However, a much stronger formulation looks like this. "If there were a deity, we would reasonably expect the world to look very different from the way we find it. True, it is possible to hypothesize a deity who intervenes - and fails to intervene - in exactly the right way to create the world that we see, including the various religious beliefs within it. But such a hypothetical being involves so many ad hoc auxiliary hypotheses and wild excuses that it is highly unparsimonious. So we should not believe in such a being".

Here are some examples of the ad hoc hypotheses and excuses needed:

  1. A god creates complex living beings, but chooses to create them in precisely the one way (evolution by

... (read more)
0Desrtopa11y
I'll point out here that even in America, many theists accept evolution, but most believe in guided evolution, where the deity set the process in motion and then directed the course of evolution to the desired result. This doesn't offer predictions that deviate nearly as much from our observations as the predictions of creationism, but our observations still contain a suspicious number of evolutionary dead ends, do-overs, and failures to use the best available evolutionary mechanisms (why couldn't our evolutionary guide have given us eyes more like squid eyes?)
2shminux11y
I have looked through a bunch of your recent replies, and they exhibit a number of standard cognitive biases worth addressing before you can profitably carry on any religion-related discussion. Or any rational discussion, for that matter. Learning about the biases and learning to identify them in yourself is an important part of instrumental rationality. After you are at a reasonable discourse level, and have critically examined your epistemology, as most regulars here have done and still do on occasion, you might or might not choose to be a Mormon for religious and/or possibly social reasons. Or you may decide to not open that particular Pandora's box, who knows. But you are not there yet. Your religion-related arguments are of the level of a physics newbie arguing against relativity with the race-car-on-a-train idea, or Draco arguing for blood purity in HPMOR. You cannot even understand the arguments presented to you, and so you reject them out of hand.
1[anonymous]11y
Um, I'd be happy to hear all about them. Like, specific biases and examples. It's not much help for me just to be told I'm completely clueless. Keep in mind that I never intended to challenge atheism. I'm not trying to convert anybody, because I know how that would appear. Obviously I have to disagree. I've heard many arguments here that educated me and expanded my understanding, and a few people have said that they agree with points I have made. But if you insist on fixating upon my newness—what specifically would you recommend I read to improve? I've read most of the sequences, and I've been keeping up with general discussion for a few weeks now.
1Desrtopa11y
That doesn't really answer my question. Also, keep in mind that you've deliberately been keeping the discussion away from any actual religion, and focused simply on the question of theism. I think nearly everyone here would have more arguments against all existing religions.
2bartimaeus11y
Absence of Evidence is directly tied to having a probabilistic model of reality. There might be an inferential gap when people refer you to it, because on its own the argument doesn't seem strong. But it's a direct consequence of Bayesian reasoning, which IS a strong argument. (Just to clarify: I didn't mean to accuse you of ignorance, and I sympathize with having everyone spam you with links to the same material, which must be aggravating.)
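The tie between "absence of evidence" and the probabilistic model can be made concrete with a toy calculation (the numbers below are my own, purely illustrative): if a hypothesis H makes an observation E more likely, Bayes' theorem forces the failure to observe E to lower the probability of H.

```python
# Toy Bayes calculation (illustrative numbers only): if H predicts E,
# then *not* seeing E must count against H.

def posterior_given_not_e(p_h, p_e_given_h, p_e_given_not_h):
    """P(H | not-E) by Bayes' theorem."""
    p_not_e_given_h = 1 - p_e_given_h
    p_not_e_given_not_h = 1 - p_e_given_not_h
    numerator = p_not_e_given_h * p_h
    return numerator / (numerator + p_not_e_given_not_h * (1 - p_h))

# Prior 50%; H says E appears with probability 0.8, not-H says 0.2.
p = posterior_given_not_e(0.5, 0.8, 0.2)
# Failing to observe E drops P(H) from 0.5 to 0.2.
```

How strong the resulting evidence of absence is depends entirely on how strongly H predicted E; if H barely predicts E, the update from not seeing E is correspondingly tiny.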
-1[anonymous]11y
It's certainly an important point, but I think that atheists tend to overuse it. I can't begin to criticize Bayesian reasoning, especially not here.
3fubarobfusco11y
Bayesian probabilistic reasoning is the unique (up to isomorphism) generalization of Aristotelian (two-valued) logic to reasoning about uncertainty. You can't throw it out without inconsistency.
-2[anonymous]11y
I never tried to. I know exactly how Bayes' Theorem is mathematically derived and I won't try to contest that.

EDIT: I am closing analysis on this poll now. Thanks to the 104 respondents.

This is a poll on a minor historical point which came up on #lesswrong where we wondered how obscure some useless trivia was; please do not look up anything mentioned here - knowing the answers does not make you a better person, I'm just curious - and if you were reading that part of the chat, likewise please do not answer.

  1. Do you know what a "holystone" is and is used for?

    [pollid:462]

  2. In this passage:

    "Tu Mu relates a stratagem of Chu-ko Liang, who in 149 BC, w

... (read more)
3Morendil11y
Unsure rather than "yes", but: xrrcvat gur qhfg qbja?
1gwern11y
Lrf.
2ygert11y
FYI, this is a good example of a case where rot13ing doesn't help at all. The instant I glanced at gwern's comment I got what was being said, simply from length considerations. In this case it's more or less OK, as it's not a major spoiler point and one would need to unrot13 Morendil's comment in order to actually get what you were saying "Lrf" about, but had gwern written the comment unrot13ed, I would have gotten exactly the same information from glancing at it. (But maybe other people would not automatically infer the message from, say, the length? For me, it was something perfectly natural that my brain did automatically, but who knows, that might just be my brain. I am curious: do other people's brains also automatically react like that in situations like this?)
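A small sketch of why length leaks through rot13 (my own illustration; Python's standard `codecs` module happens to ship a `rot_13` transform): the cipher substitutes letters one-for-one, so the ciphertext always has exactly the plaintext's length, and a three-letter reply is a strong hint all by itself.

```python
import codecs

def rot13(text: str) -> str:
    """Rotate each letter 13 places; applying it twice returns the input."""
    return codecs.encode(text, "rot_13")

assert rot13("Yes") == "Lrf"          # the three-letter reply above
assert rot13("Lrf") == "Yes"          # rot13 is its own inverse
assert len(rot13("some spoiler")) == len("some spoiler")  # length preserved
```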
0Zaine11y
Yes, and as you might have intentionally hinted, there are ways of expressing the same sentiment with fewer letters - or the opposite with more.
2Nornagest11y
Regarding question 1: Fhecevfrq gurer jrer fb srj crbcyr gung xarj jung n "ubylfgbar" jnf. V thrff gurer nera'g znal Cngevpx B'Oevra ernqref urer.
0gwern11y
V nz n yvggyr fhecevfrq gbb, ohg vg qbrf znxr n yvggyr frafr - jura jnf gur ynfg gvzr lbh fnj nal anhgvpny nyyhfvbaf be qvfphffvbaf bs, fnl, Ubengvb Ubeaoybjre?
2iconreforged11y
Vs V'z guvaxvat pbeerpgyl, lneqf hfrq gb or ragveryl qveg, fhpu gung vs lbh fcevaxyrq gur lneq jvgu jngre, lbh pbhyq nibvq evfvat qhfg.
0gwern11y
Fhpu jnf gur cbvag bs gurve raqrnibhe, lbh ner pbeerpg.
2Emile11y
I also answered "unsure", and thought it was gb xrrc gur qhfg qbja (ybbxf yvxr V'z gur guveq bar va gung pnfr).
0gwern11y
Vagrerfgvat ubj srj crbcyr xabj vg, vfa'g vg, jura nf sne nf V pna gryy vg'f n cresrpgyl beqvanel cneg bs yvsr va znal pbhagevrf naq unf orra sbe zvyyraavn? Ohg ba gur bgure unaq, vg ybbxf yvxr nyzbfg rirel bar vf trggvat gur 'fnaqrq sybbe' dhrfgvba evtug, juvpu fgevxrf zr nf jrveq orpnhfr lbh jbhyq guvax gung crbcyr jbhyq vasre gung vg ersref gb cnvagvat be pbafgehpgvba be fbzrguvat. V'ir fgnegrq gb jbaqre vs V fperjrq hc gur cbyy ol chggvat gur bgure dhrfgvbaf svefg... V guvax V znl arrq gb qb nabgure cbyy, creuncf ba tjrea.arg, jurer V punatr gur beqre bs gur dhrfgvbaf be znlor nfx bayl gur fnaq dhrfgvba be hfr n qvssrerag dhbgr... Uz.
2Eneasz11y
#2 V nafjrerq hafher orpnhfr gur dhrfgvba jnf nzovthbhf. V nffhzr gurl'er fcevaxyvat jngre gb xrrc gur qhfg qbja - wnavgbevny jbex. Jnf gur dhrfgvba nobhg guvf yvgrenyyl, be nfxvat nobhg gur fvtavsvpnapr bs fubjvat n srj zra qbvat wnavgbevny jbex gb gur rarzl? Orpnhfr V qba'g xabj jung gur fvtavsvpnapr bs gung fbeg bs jbex fcrpvsvpnyyl vf.
0gwern11y
Vg jnf yvgreny. Nf V fnvq, uvfgbevpny gevivn.
2insufferablejake11y
For #2 Fcevaxyvat jngre ba gur tebhaq gb xrrc vg sebz envfvat qhfg?
0gwern11y
Whfg fb.
2insufferablejake11y
Vs V nz ubarfg, gura, V zhfg nqzvg gung gur cenpgvpr vf pbzzba va fbhgu Vaqvn, va gur fznyy gbja naq ivyyntrf. Pbzr penpx bs qnja lbh'yy svaq jbzra fjrrcvat naq jngrevat gur ragenaprf gb gur gurve ubzrf :) Ner lbh jevgvat na rffnl nobhg fbhgu Vaqvn? Gur fnaqrq sybbef naq gur juvgrjnfurq jnyyf ner nyfb erzvaqref bs gur fnzr guvat.
0gwern11y
Lrf, gung jbhyqa'g fhecevfr zr ng nyy. Zl rffnl vfa'g nobhg fbhgu Vaqvn ohg npghnyyl zber nobhg Ratynaq naq Arj Ratynaq (gung'f jurer gur Tbbtyr Obbxf uvgf pbzr sebz sbe "fnaqrq sybbe", fb gung'f jurer gur rffnl tbrf), naq gurer gbb fnaqrq sybbef ner nffbpvngrq jvgu juvgrjnfurq jnyyf. Ner lbh sebz fbhgu Vaqvn naq pna qvfphff guvf, be qb lbh xabj bs nal hfrshy fbheprf? Na Vaqvna rknzcyr gb tb jvgu gur Puvarfr rknzcyr jbhyq or avpr.
0insufferablejake11y
V'z fbeel V cbfgrq zber naq gura qryrgrq vg, V ernyvmrq gung guvf jnf n choyvp sbehz naq V nz cnenabvq nobhg cevinpl. Cyrnfr rznvy zr ng zl yj unaqyr ng tznvy, V'yy or unccl gb nafjre nal dhrfgvbaf lbh unir.
0gwern11y
V qvqa'g svaq nalguvat rvgure, ohg V qvq qvfpbire fbzrguvat nyzbfg nf tbbq: nccneragyl vg'f fgvyy n yvggyr avpur evghny guvat va Wncna pnyyrq 'hpuvzvmh', naq gurer'f dhvgr n srj cubgbf bs vg bayvar: * uggcf://frpher.syvpxe.pbz/frnepu/?j=nyy&d=hpuvzvmh&z=grkg * uggc://jjj.jbeyq-vafvtugf.pbz/hpuvzvmh-fcevaxyr-jngre-ba-gur-ebnq/ * uggc://jjj.qnaalpubb.pbz/cbfg/ra/1015/Hpuvzvmh.ugzy Ab qvpr ba fnaqrq sybbef gubhtu.
2ArisKatsaris11y
Likewise replied with "unsure" in the 2nd question but my guess was fcevaxyvat jvgu fbzrguvat yvxr fnyg fb gung ab jrrqf jvyy tebj.
0gwern11y
Ab. Naljnl, fnyg jbhyq or sne gbb rkcrafvir & fpnepr va na rneyl Puvarfr pbagrkg gb jnfgr ba gung jura fvzcyr sbbg genssvp naq iruvpyrf jbhyq xvyy nal cynagf gurer.
0Lumifer11y
For question 2, V oryvrir gurl ner jngrevat gur qhfgl tebhaq gb xrrc gur qhfg qbja. Bgurejvfr fjrrcvat whfg envfrf pybhqf bs qhfg vagb gur nve naq vf abg nyy gung hfrshy. For question 3, zl rkcrpgngvba vf gung fbzr uneq sybbe (cbffvoyl pynl be qveg) vf pbirerq jvgu n yvtug ynlre bs fnaq -- fvzvyne gb ubj fnjqhfg jnf hfrq ba sybbef bs chof naq gnireaf. Gung'f abg na bcgvba va gur cbyy, gubhtu :-)
0Qiaochu_Yuan11y
Regarding question 3, V guvax gur pbeerpg nafjre jnf gbb rnfl gb thrff; va cnegvphyne, V nz cerggl fher V thrffrq vg pbeerpgyl jvgubhg rire univat urneq gung grez orsber (ol nanybtl jvgu fnaqrq jbbq).
0gwern11y
V nterr. Gur evtug nafjre vf bofpher naq fubhyq or ng n fvzvyne % nf gur bgure dhrfgvbaf, ohg vg'f jnl uvture; gb zr, guvf fnlf V sbezhyngrq gur dhrfgvba jebat. V'ir orra zrnavat gb eha n frpbaq cbyy guebhtu tjrea.arg gb trg n qvssrerag nhqvrapr bs erfcbaqragf, ohg V unira'g orra noyr gb guvax bs ubj gb nfx gur fnaql-sybbe dhrfgvba pbeerpgyl.

I got a decent smartphone (SGS3) a few days ago and am looking for some good apps for LessWrong-related activities. I am particularly interested in recommendations for lifelogging apps but would look into any other type of recommendations. Also I've rooted the phone.

0mstevens11y
I personally don't get on with Anki but there are many many positive reports.
0Qiaochu_Yuan11y
You mean like "get Chrome so you can browse LW on your phone" or like "get Sleep Cycle and, even if you don't trust its measure of how good your sleep is, you can at least log when you go to sleep and wake up every day"?

Would learning Latin confer status benefits?

I've recently gotten the idea in my head of taking a twelve-week course in introductory Latin, mostly for nerdy linguistic reasons. It occurs to me that learning an idiosyncratic dead language is archetypal signalling behaviour, and this fits in with my observations. The only people I know with any substantial knowledge of the language either come from privileged backgrounds and private education, or studied Classics at university (which also seems to correlate with a privileged background).

A lot of the bonding... (read more)

Would learning Latin confer status benefits?

Some, usually. But there is (almost) no chance that, if status is your goal, learning Latin is a sane approach to gaining it. Learn something social.

8Qiaochu_Yuan11y
Taboo "status." Who do you want to impress?
4A1987dM11y
It probably depends on where you are, how old you are, and what your social circle is like.

A monthly "Irrational Quotes" thread might be nice. My first pick would be:

Basically, Godel’s theorems prove the Doctrine of Original Sin, the need for the sacrament of penance, and that there is a future eternity.

Samuel Nigro, "Why Evolutionary Theories are Unbelievable."

5A1987dM11y
Previous threads: Anti-rationality quotes and Arational quotes. There have also been A sense of logic and A Kick in the Rationals, though these were not restricted to quotes.

Suppose I have several different points to make in response to a given comment. Do I write all of them in a single comment, or do I write each of them in a separate comment? There doesn't seem to be a universally accepted norm about this -- the former seems to be more common, but there's at least one regular here who customarily does the latter, and I can't remember anyone complaining about that.

Advantages of writing separate comments:

  • I can retract each of them individually, in case I change my mind about one of them but still stand by the others (as her
... (read more)
1TimS11y
As a more serious response, I personally try to make one response, unless the commenter is still actively part of the discussion and the discussion has clearly split into two topics. In practice, that tends to weigh very strongly against splitting. One major disadvantage of splitting an active conversation is that interesting points may go into only one branch, and end up missed in the other branch. Especially if one's main method of browsing is clicking the recent comments.
0drethelin11y
case by case seems fine.
-2TimS11y
I'm just enjoying that this post is upvoted for asking a question, but the upvoter did not make any suggestion for the answer. My sense of humor is apparently quite degenerate.
3A1987dM11y
Maybe the upvoter wants my comment to be more visible because they are also interested in other people's opinion on this, but didn't have anything to add to what I said themselves.
0TimS11y
I think you took my comment more seriously than I intended. Anyway, I don't sort by karma because I find it confusing to follow conversations when comments aren't listed in the order made. But I'm not trained by Reddit (or wherever the sort-by-karma norms are coming from).

Michael Chwe, a game theorist at UCLA, just wrote a book on Jane Austen. It combines game theory and social signaling, so it looks like it'll be on the LW interest spectrum:

Austen’s clueless people focus on numbers, visual detail, decontextualized literal meaning, and social status. These traits are commonly shared by people on the autistic spectrum; thus Austen suggests an explanation for cluelessness based on individual personality traits. Another of Austen’s explanations for cluelessness is that not having to take another person’s perspective is a mar

... (read more)

To whoever implemented this:

Replies to downvoted comments are discouraged. Pay 5 Karma points to proceed anyway?

You win, sir or madam.

I had a small thought the other day. Average utilitarianism appeals to me most of the various utilitarianisms I have seen, but has the obvious drawback of allowing utility to be raised simply by destroying beings with less than average utility.

My thought was that maybe this could be solved by making the individual utility functions permanent in some sense, i.e. killing someone with low utility would still cause average utility to decrease if they would have wanted to live. This seems to match my intuitions on morality better than any other utilitarianism ... (read more)
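The drawback is easy to see with made-up numbers (my illustration, not part of the original comment): removing a being whose utility is below the mean raises the average even as total utility falls.

```python
def average(utilities):
    """Average utilitarian score of a population."""
    return sum(utilities) / len(utilities)

population = [10, 8, 2]   # mean is about 6.67; the last being is below average
after_cull = [10, 8]      # removing it raises the mean to 9.0
# ...yet total utility has dropped from 20 to 18, and a being who wanted
# to live is gone - the intuition the proposal above tries to capture.
```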

5Luke_A_Somers11y
You don't evaluate the level of contemporary preference at each future time. You evaluate the current preferences, which are evaluated over the future history of the universe. The people to be slain will likely object to this plan based on these current preferences.
5Nornagest11y
That's even less tractable a problem than summing over the utility functions of all existing agents, but that's not necessarily a game-changer. There are some other odd features of this idea, though:

* It only seems to work with preference utilitarianism; pleasure/pain utilitarianism would still treat the painless death of an agent with neutral expected utility as neutral. Fair enough; preference utilitarianism seems less broken than conventional utilitarianism anyway.
* Contingent on using preference utilitarianism, certain ways of doing the summing lead to odd features regarding changing cultural values: if future preferences are unbounded in time, a big enough stack of dead ancestors with strong enough preferences could render arbitrary social changes unethical. This could be avoided by summing only over potential lifespan, time-discounting in some way, or using some kind of nonstandard aggregation function that takes new information into account.
* Let's say we're now at a point in time t0. We can plan for t0 using only the preferences of existing or previous agents; all very intuitive so far. But let's say we consider a time t1 further in the future. New agents will have been introduced between t0 and t1, and there's no obvious way to take their preferences into account; every option gives us potential inconsistencies between optimal actions planned at t0 and optimal actions taken at time t1. The least bad option seems to be doing a probability-weighted average over agents extant in all possible futures, but (besides being just ridiculously intractable) that seems to introduce some weird acausal effects that I'm not sure I want to deal with.

Taking the average at least avoids some of the crazier possible consequences, like the utilitarian "go forth and multiply" that I'm sure you've thought of already.
0Adele_L11y
Yeah this only makes sense for preference utilitarianism, I should have mentioned that. It is strange to be sure. I wonder what the aggregated preferences of humanity would look like. I wouldn't be too surprised if it ended up being really similar to the aggregated preferences of current humans. Also, adding some sort of EV to this would probably make any issue here go away. But in any case, it seems to be an open problem how to choose the starting set of utility functions in a moral way. Once things were running, it might work pretty well, especially once death is solved. Why not just plan for whatever the current set of utility functions is? In the context of a FAI, it probably wouldn't want the aggregate utility function to change anyway. But again, deciding which functions to aggregate seems to be unsolved.
0latanius11y
Aren't utility functions kind of... invariant to scaling and addition of a constant value? That is, you can say that "I would like A more than B" but not "having A makes me happier than you would be having it". Nor "I'm neither happy nor unhappy, so me not existing wouldn't change anything". It's just not defined. Actually, the only place different people's utility functions can be added up is in a single person's mind, that is, "I value seeing X and Y both feeling well twice as much as just X being in such a state". So "destroying beings with less than average utility" would appeal to those who tend to average utilities instead of summing them. And, of course, it also depends on what they think of those utility functions. (That is, do we count the utility function of the person before or after giving them antidepressants?) Of course, the additional problem is that no one sums up utility functions the same way, but there seems to be just enough correlation between individual results that we can start debates over the "right way of summing utility functions".
1Nornagest11y
It's hard to do utilitarian ethics without commensurate utility functions, and so utilitarian ethical calculations, in the comparatively rare cases where they're implemented with actual numbers, often use a notion of cardinal utility. (The Wikipedia article's kind of a mess, unfortunately.) As far as I can tell this has nothing to do with cardinal numbers in mathematics, but it does provide for commensurate utility scales; in this case, you'd probably be mapping preference orderings over possible world-states onto the reals in some way. There do seem to be some interesting things you could do with pure preference orderings, analogous to decision criteria for ranked-choice voting in politics. As far as I know, though, they haven't received much attention in the ethics world.
0Thrasymachus11y
There are probably two stronger objections to average util along the lines you mention.

1) Instead of talking about killing someone with net positive utility, consider bringing someone into existence who has positive utility, but below the world average. It seems intuitive to say that would be good (especially if the absolute levels were really high), yet avutil rules it out. To make it more implausible, say the average is dragged up by blissfully happy aliens outside of our lightcone.

2) Consider a world where there are lives that are really bad, and better off not lived at all. Should you add more lives that are marginally less really bad than those lives that currently exist? Again, intuition says no, but negutil says yes - indeed, you should add as many of these lives as you can, as each subsequent not-quite-as-awful life raises average utility by progressively smaller fractions.
0Pablo11y
I think you meant 'avutil'.

I'd like some comments on the landing page of a website I am working on, Experi-org. It is to do with experimenting with organisations.

I mainly want feedback on tone and clarity of purpose. I'll work on cleaning it up more (getting a friend who is a proofreader to give it the once over) once I have those nailed down.

3NancyLebovitz11y
You might be interested in Trust: The Social Virtues and The Creation of Prosperity. More generally, I was a little surprised at the pure experimental approach that didn't have a look at the degree of corruption in different real-world societies. I recommend "From major events like the Enron scandal to low level inefficiency in government, corruption has a massive effect on our day to day lives." As for the next sentence, I'm not sure whether I don't understand you or don't agree with you. Admittedly, there will be more crime when there are weak barriers to crime, but I also believe that people who want to get away with something will, if they have the power, try to shape organizations which will let them get away with what they want. Something to contemplate: Man creates huge Ponzi scheme in EVE Online just to prove he can do it. When it's over, he considers returning the money, which he has no use for, but he just can't make himself do it.
0whpearson11y
Thanks. I'll have a look at the book. I did mention looking at various subjects in the What>Explore section, one of which will be looking at current real-world societies. I focus on experimentation for a few different reasons:

1) Experimentation is hard. You can't do it on your own, you need other people, so the most focus goes on it. Otherwise people might just read books and make observations, which leads to the second point.
2) Experiments are a teaching tool. People have to learn that a different way can be better for them, and the best way is to try it out for themselves.
3) There are lots of different societal norms and structures we haven't tried, so there might be opportunities to escape our current local optima.

Thanks! I'll change that. I should probably put a qualifying "Most" in front of "people". I was writing it when I was trying to avoid weasel words. But there is the question of why those you think of as "evil" get power. Who gets power is also somewhat a societal question.

There is an article on impending AI and its socioeconomic consequences in the current issue of Mother Jones.

Karl Smith's reaction sounds rather Hansonian, except he doesn't try to make it sound less dystopian.

Does anyone remember a post (possibly a comment) with a huge stack of links about animal research not transferring to humans?

9gwern11y
It was indeed me. You can find it somewhere, but I copied it over to http://www.gwern.net/DNB%20FAQ#fn95
0NancyLebovitz11y
I've started reading the links. I was interested because I'd seen anti-animal experimentation people say that animal experimentation is unnecessary because we can use computer models. I concluded that these people were nitwits, and assumed that their primary argument must be wrong. Is there a name for that logical fallacy/bias? I'm surprised that a lot of the uselessness seems to come from bad experimental design. I'd assumed the major problem would be that there are significant, non-obvious differences between humans and animals.
0TheOtherDave11y
Well, yes, in that you aren't hallucinating. No, in that I can't find it either on about 3 minutes of googling. I vaguely recall gwern being involved, but may be confabulating.
0NancyLebovitz11y
I was betting on either gwern or lukeprog.

Hi, my name is Jason, this is my first post. I have recently been reading about 2 subjects here, Calibration and Solomonoff Induction; reading them together has given me the following question:

How well-calibrated would Solomonoff Induction be if it could actually be calculated?

That is to say, if one generated priors on a whole bunch of questions based on information complexity measured in bits - if you took all the hypotheses that were measured at 10% likely - would 10% of those actually turn out to be correct?

I don't immediately see why Solomonoff Inductio... (read more)

6Viliam_Bur11y
Solomonoff Induction could be well-calibrated across mathematically possible universes. If a hypothesis has a probability 10%, you should expect it to be true in 10% of the universes. The important thing is that Solomonoff priors are just a starting point in our reasoning. Then we update on evidence, which is at least as important as having reasonable priors. If it does not seem well calibrated, that is because you can't get good calibration without using evidence.

Imagine that at this moment you are teleported to another universe with completely different laws of physics... do you expect any other method to work better than Solomonoff Induction? Yes, gradually you get data about the new universe and improve your model. But that's exactly what you are supposed to do with Solomonoff priors. You wouldn't predictably get better results by starting from different priors.

To me it seems that Occam's Razor is a rule of thumb, and Solomonoff Induction is a mathematical background explaining why the rule of thumb works. (OR: "Choose the simplest hypothesis that fits your data." Me: "Okay, but why?" SI: "Because it is more likely to be the correct one.") You can't get a good "recipe for truth" without actually looking at the evidence. Solomonoff Induction is the best thing you can do without the evidence (or before you start taking the evidence into account).

Essentially, Solomonoff Induction will help you avoid the following problems:

* Getting inconsistent results. For example, if you instead supposed that "if I don't have any data confirming or rejecting a hypothesis, I will always assume its prior probability is 50%", then if I give you two new hypotheses X and Y without any data, you are supposed to think that p(X) = 0.5 and p(Y) = 0.5, but also e.g. p(X and Y) = 0.5 (because "X and Y" is also a hypothesis you don't have any data about).
* Giving so extremely low probability to a reasonable hypothesis that available evidence cannot convince you otherwise. Fo
0MedicJason11y
Thank you for your reply. It does clear up some of the virtues of SI, especially when used to generate priors absent any evidence. However, as I understand it, SI does take into account evidence - one removes all the possibilities incompatible with the evidence, then renormalizes the probabilities of the remaining possibilities. Right? If so, one could still ask - after taking account of all available evidence - is SI then well-calibrated? (At some point it should be well-calibrated, right? More calibrated than human beings. Otherwise, how is it useful? Or why should we use it for induction?)

Essentially the theory seems to predict that possible (evidence-compatible) events or states in the universe will occur in exact or fairly exact proportion to their relative complexities as measured in bits. Possibly over-simplifying, this suggests that if I am predicting between 2 (evidence-compatible) possibilities, and one is twice as information-complex as the other, then it should actually occur 1/3 of the time. Is there any evidence that this is actually true?

(I can see immediately that one would have to control for the number of possible "paths" or universe-states or however you call it that could lead to each event, in order for the outcome to be directly proportional to the information-complexity. I am ignoring this because the inability to compute this appears to be the reason SI as a whole cannot be computed.)

You suggest above that SI explains why Occam's razor works. I could offer another possibility - that Occam's Razor works because it is vague, but that when specified it will not turn out to match how the universe actually works very precisely. Or that Occam's Razor is useful because it suggests that when generating a Map one should use only as much information about the Territory as is necessary for a certain purpose, thereby allowing one to get maximum usefulness with minimum cognitive load on the user. I am not arguing for one or the other. Instead I
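The update rule described here - drop the hypotheses incompatible with the evidence, then renormalize - can be sketched for a finite toy hypothesis space. This is only a sketch: real Solomonoff Induction is uncomputable, and the bit-lengths below are invented purely for illustration.

```python
# Toy complexity-weighted induction (NOT actual Solomonoff Induction):
# each hypothesis gets a prior proportional to 2^-(description length in
# bits); updating discards incompatible hypotheses and renormalizes.

def complexity_priors(lengths):
    """Map hypothesis name -> normalized 2^-length prior."""
    weights = {h: 2.0 ** -l for h, l in lengths.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

def update(priors, compatible):
    """Keep only hypotheses compatible with the evidence, renormalize."""
    kept = {h: p for h, p in priors.items() if h in compatible}
    total = sum(kept.values())
    return {h: p / total for h, p in kept.items()}

# Hypothetical description lengths in bits (illustrative numbers only).
priors = complexity_priors({"H1": 10, "H2": 11, "H3": 12})
# H2 is one bit longer than H1, so it starts out half as likely.
posterior = update(priors, {"H1", "H2"})  # evidence rules out H3
```

After the update, H1 holds 2/3 of the probability mass and H2 holds 1/3, matching the "twice as information-complex means it occurs 1/3 of the time" intuition in the comment above.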
1Pfft11y
Yes. The prediction error theorem states that as long as the true distribution is computable, the estimate will converge quickly to the true distribution. However, almost all the work done here comes from the conditioning. The proof uses the fact that for any computable mu, M(x) > 2^(-K(mu)) mu(x). That is, M does not assign a "very" small probability to any possible observation. The exact prior you pick does not matter very much, as long as it dominates the set of all possible distributions mu in this sense. If you have some other distribution P, such that for every mu there is a C with P(x) > C mu(x), you get a similar theorem, differing by just the constant in the inequality. So I disagree with this: It's ok if the prior is not very exact. As long as we don't overlook any possibilities as a priori super-unlikely when they are not, we can use observations to pin down the exact proportions later.
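[Editor's note: the dominate-then-condition behavior described in the comment above can be sketched with a toy finite hypothesis class. This is very much not real Solomonoff induction; the hypothesis names, the stand-in "program lengths", and the Bernoulli bit sources are all invented for illustration.]

```python
import random

# Toy sketch: a finite class of bit-source "environments", with prior
# weights 2^-k, where k stands in for an assumed program length.
hypotheses = {          # name: (assumed length k, P(next bit = 1))
    "fair":   (3, 0.5),
    "biased": (5, 0.9),
    "rare":   (8, 0.1),
}

prior = {h: 2.0 ** -k for h, (k, _) in hypotheses.items()}
Z = sum(prior.values())
prior = {h: w / Z for h, w in prior.items()}

def posterior(bits):
    """Weight each hypothesis by prior * likelihood, then renormalize."""
    post = {}
    for h, (_, p) in hypotheses.items():
        like = 1.0
        for b in bits:
            like *= p if b == 1 else (1 - p)
        post[h] = prior[h] * like
    Z = sum(post.values())
    return {h: w / Z for h, w in post.items()}

random.seed(0)
# Sample 200 bits from the "biased" source (the true environment).
data = [1 if random.random() < 0.9 else 0 for _ in range(200)]
post = posterior(data)
print(post)
```

The point of the sketch: because the prior gives every hypothesis a non-negligible weight (nothing is a priori super-unlikely), the conditioning does the heavy lifting and the posterior concentrates on the true source even though it did not start out as the most probable one.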
0Viliam_Bur11y
I am not sure about the terminology. I would call the described process "Solomonoff priors, plus updating", but I don't know the official name. I believe the answer is "yes, with enough evidence it is better calibrated than humans". How much would "enough evidence" be? Well, you need some to compensate for the fact that humans are already born with some physiology and instincts adapted by evolution to our laws of physics. But this is a finite amount of evidence. All the evidence that humans get should be processed better by the hypothetical "Solomonoff prior plus updating" process. So even if the process were to start from zero and get the same information as humans, at some moment it should become and remain better calibrated. Let's suppose that there are two hypotheses H1 and H2, each of them predicting exactly the same events, except that H2 is one bit longer and therefore half as likely as H1. Okay, so there is no evidence to distinguish between them. Whatever happens, we either reject both hypotheses, or we keep their ratio at 1:2. Is that a problem? In real life, no. We will use the system to predict future events. We will ask about a specific event E, and by definition both H1 and H2 would give the same answer. So why should we care whether the answer was derived from H1, from H2, or from a combination of both? The question will be: "Will it rain tomorrow?" and the answer will be: "No." That's all, from outside. Only if you try to look inside and ask "What was your model of the world that you used for this prediction?" would the machine tell you about H1, H2, and infinitely many other hypotheses. Then, you could ask it to use Occam's razor to only choose the simplest one and display it to you. But internally, it could keep all of them (we already suppose it has an infinite memory and infinite processing power). Note, if I understand it correctly, that it would be actually impossible for the machine to tell whether in general two hypotheses H1 and H2 are e
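[Editor's note: the "ratio stays at 1:2" point above is just Bayes' theorem with identical likelihoods. A minimal numeric sketch, with 0.37 as an arbitrary placeholder likelihood; any value shared by both hypotheses gives the same result:]

```python
# Two hypotheses that predict exactly the same observations: any
# Bayesian update multiplies both by the same likelihood, so their
# odds ratio never moves.
prior_h1, prior_h2 = 2.0, 1.0    # H2 is one bit longer, so half as likely
likelihood = 0.37                # identical for H1 and H2, by assumption

post_h1 = prior_h1 * likelihood
post_h2 = prior_h2 * likelihood
print(post_h1 / post_h2)  # the 2:1 ratio survives the update
```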
0MedicJason11y
Yes, but we already have lots of information about our universe. So, making use of all that, if we could start using SI to, say, predict the weather, would its predictions be well-calibrated? (They should be - modern weather predictions are already well-calibrated, and SI is supposed to be better than how we do things now.) That would require that, of all predictions compatible with currently known info, ALL of them would have to occur in EXACT PROPORTION to their bit-length complexity. Is there any evidence that this is the case?
0Viliam_Bur11y
I admit I am rather confused here, but here is my best guess: It is not true, in our specific world, that all predictions compatible with the past will occur in exact proportion to their bit-length complexity. Some of them will occur more frequently, some of them will occur less frequently. The problem is, you don't know which ones. Because all of them are compatible with the past, so how could you tell the difference, except by a lucky guess? How could any other model tell the difference, except by a lucky guess? How could you tell which model guessed the difference correctly, except by a lucky guess? So if you want to get the best result on average, assigning the probability according to the bit-length complexity is best.
0MedicJason11y
You quoted me "the theory seems to predict that possible (evidence-compatible) events or states in the universe will occur in exact or fairly exact proportion to their relative complexities as measured in bits [...] if I am predicting between 2 (evidence-compatible) possibilities, and one is twice as information-complex as the other, then it should actually occur 1/3 of the time" then replied "Let's suppose that there are two hypotheses H1 and H2, each of them predicting exactly the same events, except that H2 is one bit longer and therefore half as likely as H1. Okay, so there is no evidence to distinguish between them. Whatever happens, we either reject both hypotheses, or we keep their ratio at 1:2." I am afraid I may have stated this unclearly at first. I meant, given 2 hypotheses that are both compatible with all currently-known evidence, but which predict different outcomes on a future event.
0DaFranker11y
Yes, and the first piece of evidence is rather trivial. For any given law of physics, chemistry, etc. or basically any model of anything in the universe, I can conjure up an arbitrary amount of more and more complicated hypotheses that match the current data, but all or nearly-all of which will fail utterly against new data obtained later. For a very trivial thought experiment / example, we could have an alternate hypothesis which includes all of the current data, with only instructions to the Turing machine to print this data. Then we could have another which includes all the current data twice, but tells the Turing machine to only print one copy. Necessarily, both of these will fail against new data, because they will only print the old data and halt. We could conjure up infinitely many copies similar to this which also contain arbitrary amounts of gibberish right after the old data, gibberish which will be unlikely to match the new data (with probability 1/2^n, where n is the length of the new data / gibberish, assuming perfect randomness).
0MedicJason11y
This seems reasonable - it basically makes use of the fact that most statements are wrong, therefore adding a given statement whose truth-value is as-yet-unknown is likely to be wrong. However, that's vague. It supports Occam's Razor pretty well, but does it also offer good evidence that those likelihoods will manifest in real-world probabilities IN EXACT PROPORTION to the bit-lengths of their inputs? That is a much more precise claim! (For convenience I am ignoring the problem of multiple algorithms where hypotheses have different bit-lengths.)
0DaFranker11y
Nope, and we have no idea where we'd even start on evaluating this precisely, because of the various problems relating to different languages. I think this is an active area of research. It does seem though, by observation and inference (heh, use whatever tools you have), that more efficient languages tend to formulate shorter hypotheses, which hints at this. There have also been some demonstrations of how well SI works for learning and inferring about a completely unknown environment. I think this was what AIXI was about, though I can't recall specifics.
0DaFranker11y
Viliam_Bur makes a great run-down of what's going on. For a more detailed introduction though, see this post explaining Solomonoff Induction, or perhaps you'd prefer to jump straight to this paragraph (Solomonoff's Lightsaber) that contains an explanation of why shorter (simpler) hypotheses are more likely under Solomonoff Induction. To make the bridge between that and what Viliam is saying, basically, if we consider all mathematically possible universes, then half the universes will start with a 1, and the other half will start with a 0. Then a quarter will start with 11, and another with 10, and so on. Which means that, to reuse the example in the above-linked post, 01001101 (which matches observed data perfectly so far) will appear in 1 out of 256 mathematically-possible universes, and 1000111110111111000111010010100001 (which also matches the data just as perfectly) will only appear in 1 out of 17179869184 mathematically-possible universes. So if we expect to live in one out of all mathematically-possible universes, but we have no idea what properties it has (or if you just got warped to a different universe with different laws of physics), which of the two hypotheses do you want? The one that is true more often, in more of the possible universes, because you're more likely to be in one of those than in one that has the longer, rarer hypothesis. That's the basic simplified logic behind it.
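[Editor's note: the arithmetic in the comment above can be checked directly. Under a uniform measure on infinite bit-strings, a specific n-bit prefix picks out exactly 1 in 2^n of them:]

```python
# The 2^-length weighting from the comment above, computed directly.
h_short = "01001101"
h_long = "1000111110111111000111010010100001"

def prefix_fraction(bits):
    """Fraction of infinite binary strings beginning with this prefix."""
    return 2.0 ** -len(bits)

print(prefix_fraction(h_short))       # 1 in 256 universes
print(1 / prefix_fraction(h_long))    # 1 in 17179869184 (= 2^34) universes
print(prefix_fraction(h_short) / prefix_fraction(h_long))  # short wins by 2^26
```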
0MedicJason11y
Yes, that was the post I read that generated my current line of questioning. My reply to Viliam_Bur was phrased in terms of probabilities in a single universe, while your post here is in terms of mathematically possible universes. Let me try to rephrase my point to him in many-worlds language. This is not how I originally thought of the question, though, so I may end up a little muddled in translation. Taking your original example, where half of the Mathematically Possible Universes start with 1, and the other half with 0. It is certainly possible to imagine a hypothetical Actual Multiverse where, nevertheless, there are 5 billion universes with 1, and only 5 universes with 0. Who knows why - maybe there is some overarching multiversal law we are unaware of, or maybe it's just random. The point is that there is no a priori reason the Multiverse can't be that way. (It may not even be possible to say that the multiverse probably isn't that way without using Solomonoff Induction or Occam's Razor, the very concepts under question.) If this were the case, and I were somehow universe-hopping, I would over time come to the conclusion that SI was poorly calibrated and stop using it. This, I think, is basically the many-worlds version of my suggestion to Viliam_Bur. As I said to him, I am not arguing for or against SI, I am just asking knowledgeable people if there is any evidence that the probabilities in this universe, or distributions across the multiverse, are actually in proportion to their information-complexities.
0DaFranker11y
Hmm, I think I see what you mean. Yes, there's no reason for Solomonoff to be well-calibrated in the end, but once we obtain information that most of the universes starting with 0 do not work, that is data against which most of the hypotheses starting with 0 will fail. At this point, brute Solomonoff induction will be obviously inefficient, and we should begin using the heuristic of testing almost only hypotheses starting with 1. In fact, we're already doing this: We know for a fact that we live in the subset of universes where the acceleration between two particles is not constant and invariant of distance. So it is known that the simpler hypothesis where gravitational attraction is "0.02c/year times the total mass of the objects" is not more likely than the one where gravitational attraction also depends on distance and angular momentum and other factors, despite the former being much less complex than the latter (or so we presume). There are still murky depths and open questions, such as (IIRC) how to calculate how "long" (see Kolmogorov complexity) the instructions are. Because suppose we build two universal Turing machines with different sets of internal instructions. We run Solomonoff Induction on the first machine, and it turns out that 01110101011110101010101111011 is the simplest possible program that will output "110", and by analyzing the language and structure of the machine we learn that this corresponds to the hypothesis "2*3", with the output being "6". Meanwhile, on the second machine, 1111110 will also output "110", and by analyzing it we find out that this corresponds to the hypothesis "6", with the output being "6". On the first machine, to do the hypothesis "6", we must write 101010101111110110101111111110000000111111110000110, which is much more complex than the earlier "2*3" hypothesis, while on the second machine the "2*3" hypothesis is input as 1010111010101111, which is much longer than the "6" hypothesis. Which hypothesis, between "2*3
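[Editor's note: the standard partial answer to this machine-dependence worry is the invariance theorem for Kolmogorov complexity: the choice of universal machine changes complexities only up to an additive constant that does not depend on the string being measured. Stated roughly:]

```latex
% Invariance theorem (sketch): for any two universal Turing machines
% U and V there is a constant c_{U,V}, depending only on U and V,
% such that for every string x
\[
  \lvert K_U(x) - K_V(x) \rvert \le c_{U,V} .
\]
% So machine-dependent disagreements like the "2*3" vs "6" example are
% bounded by a constant, though that constant can dwarf the complexity
% of any individual short hypothesis - which is why the worry remains
% live for comparing short hypotheses in practice.
```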
0Pentashagon11y
If we're considering hypotheses across all mathematically possible universes then why not consider hypotheses across all mathematically possible languages/machines as well?
0Viliam_Bur11y
What weight will we assign to the individual languages/machines? Their complexity... according to what? Perhaps we could make a matrix saying how complex a machine A is when simulated by a machine B, and then find the eigenvalues of the matrix? Must stop... before head explodes...
0DaFranker11y
This is my intuition as well, though it has to be restricted to Turing-complete systems, I think. I was under the impression that there was already some active research in this direction, but I've never taken the time to look into it too deeply.
0[anonymous]11y
.

Has anyone here heard of Michael Marder and his "Plant Thinking"? There is a book being published by Columbia University which argues that plants need to be considered as subjects with ethical value, and as beings with "unique temporality, freedom, and material knowledge or wisdom." This is not satire. He is a research professor of philosophy at a European university.

http://www.amazon.ca/Plant-Thinking-A-Philosophy-Vegetal-Life/dp/0231161255 and here is a review http://ndpr.nd.edu/news/39002-plant-thinking-a-philosophy-of-vegetal-l... (read more)

In Gender Trouble (1990), Judith Butler

...

accommodates plants' constitutive subjectivity, drastically different from that of human beings, and describes their world from the hermeneutical perspective of vegetal ontology (i.e., from the standpoint of the plant itself)"

...

So, in addition to the "vegetal différance" and "plants' proto-writing" (112) associated with Derrida, we're told that plant thinking "bears a close resemblance to the 'thousand plateaus'" (84) of Deleuze and Guattari. At the same time, plant thinking is "formally reminiscent of Heidegger's conclusions apropos of Dasein" (95),

So it's that kind of book.

Just so everyone is clear: this is the kind of "philosophy" that, in the States or the UK, would be done only at unranked programs or in English departments.

The review literally name checks every figure of shitty continental philosophy.

4gwern11y
It's too bad; a book on what plants might think or what their views might look like - one which took the project seriously in extrapolating a possible plant civilization and its views and ethics, a colossally ambitious and scientifically-grounded work of SF - could be pretty awesome. But from the sound of that review, that's exactly where Marder falls down.
8NancyLebovitz11y
After contemplating how odd it is that people have a revulsion against weapons which use disease and poison that they don't seem to have against weapons which use momentum (in fact, they are apt to consider momentum weapons high status), I wondered if there could be sentients with a reversed preference. I think sentient trees could fill the requirement. IIRC, plants modulate their poisons according to threat level.
5[anonymous]11y
Olaf Stapledon's 'Star Maker'. The whole thing is filtered through semi-communist theology, but it's a fascinating trek through the author's far-flung ideas about all kinds of creatures and what they could hold in common versus major differences that come from their natures. One of the dozens of races he describes is a race of plant-men on an airless world that locked up all its volatiles in living soup in the deep valleys; they stand at the shore and soak up energy from their star in a meditative trance during the day and do more animal-style activity at night... His writing style is NOT for everyone, nor is his philosophy, but I heartily enjoyed it.
8gwern11y
Yes! Star Maker is one of the very few books that I'd place up there with Blindsight and a few others in depicting truly alien aliens; and he doesn't do it once but repeatedly throughout the book. It's really impressive how Stapledon just casually scatters around handfuls of jewels that lesser authors might belabor singly throughout an entire book.
2NancyLebovitz11y
That book and Last and First Men and possibly Last and First Men in London are amazing. He's got paragraphs that a normal science fiction writer would flesh out into novels.
6[anonymous]11y
Literally in this case: the events of Last and First Men get mentioned in one paragraph of Star Maker as one race that didn't pan out and wind up becoming part of wider happenings after only lasting 2 billion years.
4drethelin11y
Speaker for the Dead?
0gwern11y
It's been a very long time since I read that, but I don't remember thinking 'how alien!'
1[comment deleted]11y
3MrMind11y
If I'm not mistaken, there have been some studies on plant communication and data processing in their roots, enough to classify them as at least primitively intelligent. Anyway, since they are in fact living and autonomous beings, I don't see why they shouldn't be considered subjects of ethical reflection...
2falenas10811y
If we don't say bacteria need ethical reflections, then it is very unlikely that plants will either.
0MrMind11y
Well, deciding when to stop caring at a certain complexity level is a sort of ethical reflection. Anyway, if we care about humans and animals because they have some sort of thinking life, then if these studies are valid we should start paying attention to plants too. Of course we could simply decide we need to care on some other basis.
0Panic_Lobster11y
We can reasonably say that something has a "thinking life" if it functions as a state machine where 'states' correspond to abstract models of sensory data (patterns in external stimuli). The complexity of the possible mental states is correlated with the complexity (information content) of the sensory data that can be collected and incorporated into models. A cat's brain can be reasonably interpreted as working this way. A nematode worm's 302 neurons probably can't. A plant's root system almost definitely can't. Note that this concept of a "thinking life" or sentience is much weaker and more inclusive than the concept of "personhood" or sapience.

Stanford University is offering a from-scratch introduction to physics, taught by Leonard Susskind.

This is a notification, not a review, since I've only listened to a few minutes of the first lecture, which is at least intriguing. I'm wondering where Susskind could go with the question of allowable laws of physics.

Has there been an attempt at a RATIONAL! Wizard of Oz? I spontaneously started writing one in dialog form, then realized I would need to scrap it and start over with actual planning if I wanted to keep going. I like this idea, but I'm not sure how motivated I am to go through with it; I'd rather read an existing such fic, if one exists.

3bogus11y
The Wizard of Oz was originally written as a satirical take on the economic effects of the gold standard, although this important feature of the work has been mostly forgotten nowadays. Once you unpack the allegories, it actually shows quite a lot of rationality and common sense.
6gwern11y
That's debatable: http://en.wikipedia.org/wiki/Political_interpretations_of_The_Wonderful_Wizard_of_Oz#Overview One has to wonder about a successful satire that takes 70 years to be unearthed as part of a convenient way to teach high school students about history. http://www.halcyon.com/piglet/Populism.htm seems like a fairly convincing rebuttal.
0Matt_Simpson11y
The book, Wicked is based on Wizard of Oz and has some related themes IIRC. (I really didn't like the musical based on the book though. But I might just dislike musicals in general; FWIW I also didn't like the only other musical I've seen in person - Rent.)
[-][anonymous]11y20

There's an argument in the metaethics sequence, to the effect that there are no universally compelling moral arguments. This argument seems to be an important cached thought (I don't mean that in any pejorative sense) in LW discussions of morality. This argument also seems to me to be faulty. Can anyone help me see what I'm missing?

The argument is from No Universally Compelling Arguments:

Yesterday, I proposed that you should resist the temptation to generalize over all of mind design space. If we restrict ourselves to minds specifiable in a trillion bi

... (read more)

I don't see how your P1 is a statement over all minds, it looks more like a statement over most arguments.

3Qiaochu_Yuan11y
Agreed. P1 is quantifying over arguments, not over minds.
0[anonymous]11y
I see the symmetry between P1 and a universally compelling moral argument in this: they both make a claim about the application of an argument quantifying over all minds in mind-space. The claim EY is refuting is 'For all minds m, m: (moral argument X is compelling)m.' P1 makes the claim 'For all minds m, m:(an argument of the form 'for all minds m:X(m) is unlikely to be true)m.' Is that not right?
1Nisan11y
It looks like your P1 is quantifying twice over the same variable. I don't think that's right.
0[anonymous]11y
Is it? I intended it to only quantify over the non-nested m. Am I committed to quantifying over the nested m as well?
0Nisan11y
Now I'm just confused by your syntax.
0[anonymous]11y
Or, more likely, I am confused by my syntax. If you were to formalize EY's argument, how would you put it?
0Nisan11y
At the risk of prolonging an unproductive thread, I'd say P1 is like:

P1: For most predicates X: Not (For all minds m: X(m))

This isn't self-refuting.
0[anonymous]11y
Thanks, you're right that this isn't self-refuting. But with that P1, the argument seems invalid:

P1: For most predicates X: Not (For all minds m: X(m))
P2: UCMAs are X
C: Not UCMA

is like:

P1: For most prime numbers n: (odd)n
P2: 2 is prime
C: 2 is odd

Edit: you might think that the conclusion is not 'not UCMA' but 'UCMA is unlikely', but this doesn't follow either. I don't know quite how 'most' quantifiers work, but I don't think we can read a probabilistic conclusion off of them. I don't think it follows from the above, for example, that 2 is likely to be odd.
0Nisan11y
Yes, the crucial issues in this conversation are the concepts of 'most' and 'probability'. What you can conclude from P1 is that a priori, a randomly selected predicate X probably does not satisfy X(m) for all m. If we have other reasons to believe that X(m) for all m, then we can update our beliefs. Similarly, we expect that a randomly selected prime number n is probably odd; but if we learn the further fact that n=2, then our belief changes.
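[Editor's note: the prime example above, in numbers; the cutoff of 1000 is an arbitrary choice for illustration.]

```python
# Among the first primes, almost all are odd, so P(odd | prime) is near 1;
# but conditioning on the further fact n == 2 drives it to 0.
def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    return [n for n in range(limit + 1) if sieve[n]]

ps = primes_up_to(1000)
p_odd_given_prime = sum(1 for p in ps if p % 2 == 1) / len(ps)
p_odd_given_two = 0.0  # once we learn n = 2, the prior proportion is irrelevant
print(len(ps), p_odd_given_prime)
```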
0[anonymous]11y
So what do you make of this argument then? Suppose I were of the opinion that 2 is an even prime. You come to me with an argument to the effect that I should not believe 2 to be even, because a randomly selected prime number is very, very unlikely to be even. Should I be convinced by that? I may be convinced that in some sense, 2 is unlikely to be even, but I don't think I should accept that 2 is not even, or that the evenness of 2 is questionable. Similarly, suppose someone believes an argument to be universally compelling. It seems to me that EY's argument should be unmoving: granting that it is unlikely for a randomly selected argument to be UC, but theirs is no randomly selected argument. And on DaFranker's reading of this argument, the thesis that a given X is unlikely to hold for all minds relies on the assumption that for most X's, there is (something like) a 50% chance of its being true of some mind. But certainly a UCMAist won't accept that this is true of UCMAs. UCMAs, they will say, are exactly those X's for which this is not true. The burden may be on them to justify the possibility of such an X, but that fact won't save the argument.
2Nisan11y
As for your first paragraph, well, this is a straightforward application of Bayes' theorem. If you're sure that 2 is even, then learning that 2 was randomly selected from some distribution over primes should not be enough to change your credence very much. As for your second and third paragraphs: Yes, the argument of Eliezer you're talking about doesn't refute the existence of universally compelling arguments; it merely means that you shouldn't believe you have a universally compelling argument unless you have a good reason for believing so. If you think you have a good reason, then you don't have to worry about this argument. There's a very simple argument refuting the existence of universally compelling arguments, and I believe it was stated elsewhere in this thread. It's that argument you have to refute, not this one.
0[anonymous]11y
Please point this out to me if you get a chance, as I haven't noticed it. And thanks for the discussion. I mean that: I can see that this wasn't helpful or interesting for you, but rest assured it was for me, so your indulgence is appreciated.
3Nisan11y
You're welcome! The refutation of universally compelling arguments I was referring to is this one. I see you responded that you're interested in a different definition of "compelling". On the word "compelling", you say This is indeed the meaning of "compelling" that Eliezer uses, and Eliezer's original argument is indeed trivial, which perhaps explains why he spent so few words on it. If you wanted to defend a different claim, that there are arguments that all minds are "rationally committed" to accepting or whatever, then you'd have to begin by operationalizing "committed", "reasons", etc. I believe there's no nontrivial way to do this. In any case the burden is on others to operationalize these concepts in an interesting way.
0[anonymous]11y
Okay, thanks for pointing that out.
0Qiaochu_Yuan11y
Why would you want to formalize the argument?
1[anonymous]11y
That I can't argue with, though it wouldn't follow from that that UCMAs are likely to be false. EDIT: you edited your post, and so my reply doesn't seem to make sense. In answer to your new question, I would say 'I don't, I just want some presentation of the argument on which its validity (or invalidity) is obvious'.
0OrphanWilde11y
UCMA is making a claim about all minds, P1 is making a claim about some undefined subset of all minds. They both talk about "all minds," but only one of them makes a claim -about- all minds. A parallel pair of arguments might be:

All squares are rectangles.
The claim that all squares are rectangles is unlikely to be true of all squares.

The first claim is stronger than the second, and requires more proof. The fact that we can in fact prove it is irrelevant, and part of why I chose this example; consider the inverse propositions, that all rectangles are squares and that that claim is unlikely to be true, to see why this is important.
0[anonymous]11y
This is analogous to the conclusion of the above argument, not P1. An analogue to P1 would have to be something like 'Any argument of the form 'for all squares s:(X)s is unlikely to be true.' The question would then be this: does this analogue of P1 count as an argument of the form 's:(X)s'? That is, does it quantify over all squares? You might think it doesn't, since it just talks about arguments. But my point isn't quite that it must count as such an argument, but rather that it must count as an argument of the same form as P2 (whatever that might be). The reason is that P2 is not like 'all squares are rectangles'. If it were, P2 would be a (purportedly) universally compelling moral argument. But P2 is rather the claim that there is such an argument. P2 is 'for all minds m:(Moral Argument X is compelling)m'.
0OrphanWilde11y
I see what you're talking about. My confusion originates in your definition of P2, rather than in P1, where I thought the confusion originated. Suppose two minds, A and B. A has some function for determining truth, let's call it T. Mind B, on the other hand, is running an emulation of mind A, and its truth function is not(T). Okay, yes, this is an utterly pedantic kind of argument, but I think it demonstrates that in -all- of mindspace, it's impossible to have any universally compelling argument, without relying on balancing two infinities (number of possible arguments and number of possible minds) against each other and declaring a winner.
0[anonymous]11y
That sounds pretty good to me, though I think it's an open question whether or not what you're talking about is possible. That is, a UCMA theorist would accuse you of begging the question if you assumed at the outset that the above is a possibility.
0DaFranker11y
It's only an open question insofar as what counts as a "mind" or an "argument" remains shrouded in mystery. I'm rather certain that for a non-negligible fraction of all minds, the entire concept of "arguments" is nonsensical. There is, after all, no possible combination of inputs (or "arguments") that will make the function "SomeMind(): Print 3" output that it is immoral to tube-feed chicken.
0[anonymous]11y
Why are you certain of this?
0DaFranker11y
Because of my experience with programming and working with computation, I find it extremely unlikely that, out of all possible things, the specific way humans conceptualize persuasion and arguments would be a necessary requirement for any "mind" (which I take here as a 'sentient' algorithm in the largest sense) to function. If the way we process these things called "arguments" is not a requirement for a mind, then there almost certainly exists at least one logically-possible mind design which does not have this way of processing things we call "arguments". As another intuition, if we adopt the Occam/Solomonoff philosophy for what is required to have a "mind", consider something as complicated as the process of understanding arguments: being affected, influenced or persuaded by them, running them through filters and comparing them with prior knowledge and so on, until some arguments convince and others do not. That this must be a required component of all possible minds, on top of an already-complex system called a "mind", seems far less common in the realm of all possible universes than universes where simpler minds exist that do not have this property of understanding arguments and being moved by them.
0[anonymous]11y
I don't have any experience with programming at all, and that may be the problem: I just don't see these reasons. To my mind (ha) a mind incapable of processing arguments, which is to say holding reasons in rational relations to each other or connecting premises and conclusions up in justificatory relations or whatever, isn't reasonably called a mind. This may just be a failure of imagination on my part. So... Could you explain this? I'm under the impression that being capable of Solomonoff induction requires being capable of 1) holding beliefs, 2) making inferences about those beliefs, 3) changing beliefs. Yet this seems to me to be all that is required for 'understanding and being convinced by an argument'.
0DaFranker11y
In my limited experience, UCMA supporters explicitly rejected the assertion that "arguments" and "being convinced by an argument" are equivalent to "evidence" and "performing a Bayesian update on evidence". So those three would be enough for evidence and updates, but not enough for argument and persuasion, according to my next-best-guess of what they mean by "argument" and "convinced". For one, you need some kind of input system, and some process that looks at this input and connects it to pieces of an internal model, which requires an internal model and some structure that sends signals from the input to the process, and some structure where the process has modification access to other parts of the mind (to form the connections and perform the edits) in some way. Then you need something that represents beliefs, and some weighing or filtering system where the elements of the input are judged (compared to other nodes in the current beliefs) and then evaluated using a bunch of built-in or learned rules (which implies having some rules of logic built-in to the structure of the mind, or the ability to learn such rules, both of which are non-trivial complexity-wise), and then those evaluations organized in a way where it can be concluded whether the argument is sound or not, and the previous judgments of the elements integrated so that it can be concluded whether the premises are also good, and then the mind also requires this result to send a signal to some dynamic process in the brain that modus ponens the whole thing into using the links to the concepts and beliefs to update and edit them to the new values prescribed by the compelling argument. Whew, that's a lot of stuff that we need to design into our mind that seems completely unnecessary for a mind to have sentience, as far as I can tell. I sure hope we don't live in the kind of weird universes where sentience necessarily implies or requires all of the above! Which is where the Occam/SI comes in. All of the ab
0[anonymous]11y
Eh, for the UCMA arguments I'm familiar with, they would be happy to work within the (excellent) Solomonoff framework as long as you allowed for probabilities of 0 and 1. I get that this isn't an unproblematic allowance, but nothing about the math actually requires us to exclude probabilities of 0 and 1 (so far as I understand it). What is necessary? It'll pay off for us to get this on the table.
0DaFranker11y
If we knew exactly, someone would have a Nobel for it and the nonperson predicate would be a solved problem by now, along with the Hard Problem of Consciousness and a throng of other things currently puzzling scientists the world over. However, we do have a general idea of the direction to take, with an example here of some of the things involved. There's still the whole debate and questions around the so-called "hard problem of consciousness", but overall it doesn't even seem as if the ability to communicate is required for consciousness or sentience, let alone the ability to parse language in a form remotely close to ours or that allows anything akin to an argument as humans are used to the word. But past that point, the argument is no longer about UCMAs, and becomes about morality engines (and whether morality or something akin to it must exist in all minds), consciousness, what constitutes an 'argument' and 'being convinced', and other things humans yet understand so very little about.
0[anonymous]11y
Okay, I see the problem. Let's say this: within the whole of mind-space there is a subset of minds capable of morally-evaluable behavior. For all such minds, the UCMA is true. This may be a tiny fraction, but the UCMAist won't be disturbed by that: no UCMAist would insist that the UCMA is UC for minds incapable of anything relevant to morality. How does that sound?
0DaFranker11y
This sounds like a good way to avoid the heavyweight problems with all the consciousness debates, so it seems like a good idea. However, it retains the problem of defining "morality", which is still unresolved. UCMAists will argue from theories of morality where UC is an element of the theory, while E.Y. already assumes a different metaethics where there are no clear boundaries of human "morality" and where morality-in-the-way-we-understand-it is a feature of humans exclusively, and other things might have things akin to morality that are not morality, and some minds would be able to evaluate moral behaviors without caring about morality in the slightest, while some other minds we might consider morally important would yet completely ignore any "UCMA" that would otherwise compel any human.
5falenas10811y
Without going into the details, you could hypothesize a simple mind that automatically rejects any argument. This would by itself prove the No Universally Compelling Arguments theory.
1[anonymous]11y
That would do it, though it may only attack a straw man: the thesis that the, say, categorical imperative is universally compelling is not the thesis that the CI is universally persuasive. Rather, I think the thought is that we are all rationally committed to the CI, whether we know or admit this or not.
4DSherron11y
Taboo compelling and restate. If compelling does not mean persuasive then what does it mean to you? Also taboo "committed" and "rational" - I think there's a namespace conflict over your use of rational and the common Less Wrong usage, so restate using different terms. As a hint, try and imagine what a universally compelling argument would look like. What properties does it have? How do different minds react to understanding it, assuming they are capable of doing so? For bonus points explain what it means to be rationally committed to something (without using those words or synonyms). Also worth noting: P1 is a generalization over statements about minds, not minds.
0[anonymous]11y
Well, we have two options in tabooing 'compelling'. On the one hand, we could mean 'persuasive' where this means something like 'If I sat down with someone, and presented the moral argument to them, they would end up accepting it regardless of their starting view'. This seems to be a bad option, because the claim that 'there are no universally persuasive moral arguments' is trivial. No one (of significance) has ever held the contrary view. So our other option is to take 'compelling' as something like what Kantians say about the CI, namely that every mind is committed to it, whether they accept this or not ('not' out of irrationality). As you say, this leaves us with a lot more tabooing and explaining to do. I'm happy to go on with this, since it's the sort of thing I enjoy, but it is a digression from my (perhaps confused) complaint about EY's argument. The important bit there is just that 'compelling' probably shouldn't be taken in such a way as to make EY's point trivial.
0DSherron11y
The problem here is that the second option you offer does nothing to explain what a compelling argument is; it just passes the recursive buck onto the word "committed". I know you said you recognize that, but unless we can show that this line of reasoning is coherent (let alone leads to a relevant conclusion, let alone correct) then there's no reason to assume that Eliezer's point isn't trivial in the end. Philosophers have believed a lot of silly things, after all. The only sensible resolution I can come up with is where you take "committed to x" to mean "would, on reflection and given sufficient (accurate) information and a great deal more intelligence, believe x". The problem is that this is still trivially false in the entirety of mindspace. You might, although I doubt it, be able to establish a statement of that form over all humans (I think Eliezer disagrees with me on the likelihood here). You could certainly not establish one given a mindspace that includes both humans and paper clip maximizers.
1[anonymous]11y
If what you're saying is this, then we agree: EY doesn't here present an argument that UCMAs are likely to be false, but he does successfully argue that a certain class of generalizations over mind-space are likely to be false (such as generalizations about what minds will find persuasive) along with the assumption that a UCMA will fall into that class. If that's the line, then I think the argument is sound so far as it goes. UCMA enthusiasts (I am not among them, but I know them well) will not accept the final assumption, but you may be right that the burden is on them to show that UCMAs (whatever 'compelling' is supposed to mean) do not fall into this class. Alternatively, we could just posit that we're only arguing against those people who do accept the assumption, that is those people who do take 'compelling' in UCMA to mean something like 'immediately persuasive', but then we're probably tilting at windmills.
0DSherron11y
I suspect that our beliefs are close enough to each other at this point that any perceived differences are as likely to be due to minor linguistic quibbles as to actual disagreement. Which is to say, I wouldn't have phrased it like you did (had I said it with that phrasing I would disagree) but I think that our maps are closer than our wording would suggest. If anyone who does think they have a coherent definition for UCMA that does not involve persuasiveness (subject to the above taboos) wants to chime in I'd love to hear it. Otherwise, I think the thread has reached its (happy) conclusion.
0[anonymous]11y
I'll give it a shot: an argument is universally compelling if no mind both a) has reasons to reject it, and b) has coherent beliefs. This is to say that a mind can only believe that the argument is false by believing a contradiction.
0DSherron11y
I think this may sound stronger than it actually is, for the same reasons that you can't convince an arbitrary mind who does not accept modus ponens that it is true. More to the point, recall that one rationalist's modus tollens is another's modus ponens. This definition is defeated by any mind who possesses a strong prior that the given UCMA is false, and is willing to accept any and all consequences of that fact as true (even if doing so contradicts mathematical logic, Occam's Razor, Bayes, or anything else we take for granted). This prior is a reason to reject the argument (every decision to accept or reject a conclusion can be reduced to a choice of priors), and since it is willing to abandon all beliefs which contradict its rejection it will not hold any contradictory beliefs. It's worth noting that "contradiction" is a notion from formal logic which not all minds need to hold as true; this definition technically imposes a very strong restriction on the space of all minds which have to be persuaded. The law of non-contradiction (~(A ^ ~A) ) is a UCMA by definition under that requirement, even though I don't hold that belief with certainty. The arbitrary choice of priors, even for rational minds, actually appears to defeat any UCMA definition that does not beg the question. Of course, it is also true that any coherent definition begs the question one way or another (by defining which minds have to be persuaded such that it either demands certain arguments be accepted by all, or such that it does not). Now that I think about it, that's the whole problem with the notion from the start. You have to define which minds have to be persuaded somewhere between a tape recorder shouting "2 + 2 = 5!" for eternity and including only your brain's algorithm. And where you draw that line determines exactly which arguments, if any, are UCMAs. And if you don't have to persuade any minds, then I hesitate to permit you to call your argument "universally compelling" in any conte
1TimS11y
Might we say something like: More colloquially, one property of universally compelling evidence might be that all rational agents must agree on the particular direction a particular piece of evidence should adjust a particular prior.
0DSherron11y
You're just passing the recursive buck over to "rational". Taboo rational, and see what you get out; I suspect it will be something along the lines of "minds that determine the right direction to shift on the evidence in every case", which, notably, doesn't include humans even if you assume that there is an objectively decidable "rational" direction. There is no objective method for determining what the correct direction to shift is in any case; imagine an agent with anti-Occamian priors, who believes that because the coin has come up heads 100 times in a row, it must be more likely to come up tails next time. It's all a question of priors.
0TimS11y
I think there is an objectively right direction to shift, given particular priors. Your anti-regularity observer seems to be making a mistake by becoming more confident if he actually sees heads come up next. Also, I edited my post above to fix a notational error.
0[anonymous]11y
You're right that I am committed to denying this, though I would also point out that it does not follow a priori that it is always possible to resolve the state of having contradictory beliefs by rejecting either side of a contradiction arbitrarily. However, in order to deny the above, I must claim that there are some beliefs a mind holds (or is committed to, where this means that these beliefs are deductively provable from what the mind does believe) just in virtue of being a mind. I'll bite that bullet, and claim that there exists a UCMA of this kind. I also think the Law of Non-Contradiction is a UCA, and in fact it's trivially so on my definition, but I think that'll hold up: there are no Bayesian reasons to think that ascribing it a probability of 1 is a problem, and I do think I can defend the claim that evidence against it is a priori impossible (EY's example reasons for doubt in the two articles you cite wouldn't apply in this case). This isn't a problem on my definition of a UCA. My understanding of a UCA (which I think represents an honest-to-god position, namely Kant's) is consistent with any given mind believing the UCA to be false, perhaps because of reasons like the tape-recorder. Only, such a mind couldn't have consistent beliefs. Remember that my definition of a UCMA isn't 'any mind under any circumstances could always be persuaded'. To attack this view of UCMAs is, I think, to attack a strawman. If we must take UCMAs to be arguments which are universally and actually persuasive for any mind in any circumstance in order to see EY's point (here or elsewhere) as valid, then this is a serious critique of EY.
0DSherron11y
Be very, very cautious assigning probability 1 to the proposition that you even understand what the Law of Contradiction means. How confident are you that logic works like you think it works; that you're not just spouting gibberish even though it seems from the inside to make sense? If you'd just had a major concussion, with severe but temporary brain damage, would you notice? Are you sure? After such damage you might claim that "if bananas then clocks" was true with certainty 1, and feel from the inside like you were making sense. Don't just dismiss minds you can't empathize with (meaning minds which you can't model by tweaking simple parameters of your self-model) as not having subjective experiences that look, to them, exactly like yours do to you. You already know you're running on corrupted hardware; you can't be perfectly confident that it's not malfunctioning, and if you don't know that then you can't assign probability 1 to anything (on pain of being unable to update later). Again, though, you've defined the subspace of minds which have to be persuaded in a way which defines precisely which statements are UCAs. If you can draw useful inferences on that set of statements then go for it, but I don't think you can. Particularly worth noting is that there's no way any "should" statement can be a UCA because I can have any preferences I want and still fit the definition, but "should" statements always engage with preferences.
0[anonymous]11y
I'm not even 90% sure of that, but I am entirely certain that the LNC is true: suppose I were to come across evidence to the effect that the LNC is false. But in the case where the LNC is false, the evidence against it is also evidence for it. In fact, if the LNC is false, the LNC is provable, since anything is provable from a contradiction. So if it's true, it's true, and if it's false, it's true. So it's true. This isn't entirely uncontroversial; there is Graham Priest after all. I'll channel Kant here, 'cause he's the best UCMAist I know. He would say that almost all 'should' statements involve preferences, but not all. Most 'should' statements are hypothetical: If you want X, do Y. But one, he says, isn't: it's categorical: Do Y. But there's nothing about 'should' statements which a priori requires the input of preferences. It just happens that most of them (all but one, in fact) do. Now, Kant actually doesn't think the UCMA is UC for every mind in mind-space, though he does think it's UC for every mind capable of action. This is just to say that moral arguments are themselves only applicable to a subset of minds in mind-space, namely (what he calls) finite minds. But that's a pretty acceptable qualification, since it still means the UCMA is UC for everything to which morality is relevant.
0DSherron11y
You say you're not positive that you know how logic works, and then you go on to make an argument using logic for how you're certain about one specific logical proposition. If you're just confused and wrong, full stop, about how logic works then you can't be sure of any specific piece of logic; you may just have an incomplete or outright flawed understanding. It's unlikely, but not certain. Also, you seem unduly concerned with pointing out that your arguments are not new. It's not anti-productive, but neither is it particularly productive. Don't take this as a criticism or argument, more of an observation that you might find relevant (or not). The Categorical Imperative, in particular, is nonsense, in at least 2 ways. First, I don't follow it, and have no incentive to do so. It basically says "always cooperate on the prisoner's dilemma," which is a terrible strategy (I want to cooperate iff my opponent will cooperate iff I cooperate). It's hardly universally compelling since it carries neither a carrot nor a stick which could entice me to follow it. Second, an arbitrary agent need not care what other minds do. I could, easily, prefer that a) I maximize paperclips but b) all other agents maximize magnets. These are not instrumental goals; my real and salient terminal preferences are over the algorithms implemented not the outcomes (in this case). I should break the CI since what I want to do and what I want others to do are different. Also, should statements are always descriptive, never prescriptive (as a consequence of what "should" means). You can't propose a useful argument of the sort that says I should do x as a prescription. Rather you have to say that my preferences imply that I would prefer to do x. Should is a description of preferences. What would it even mean to say that I should do x, but that it wouldn't make me happier or fulfill any other of my preferences, and I in fact will not do it? The word becomes entirely useless except as an invective. I d
0[anonymous]11y
Okay, fair enough. You've indulged me quite a ways with the whole UCMA thing, and we finished our discussion of EY's sequence argument, so thanks for the discussion. I've spent some years studying Kant's ethical theory though, so (largely for my own enjoyment) I'd like to address some of your criticisms of the CI in case curiosity provokes you to read on. If not, again, thanks. This conclusion should set off alarm bells: if I told you I'd found a bunch of elementary mistakes in the sequences, having never read them but having discussed them with an acquaintance, you would bid me caution. The issue of incentive is one that Kant really struggles with, and much of his writing on ethics following the publication of the Groundwork for the Metaphysics of Morals (where the CI is introduced) is concerned with this problem. So while on the one hand, you're correct to think that this is a problem for Kant, it's also a problem he spent a lot of time thinking about himself. I just can't do it any justice here, but very roughly Kant thinks that in order to rationally pursue happiness, you have to pursue happiness in such a way that you are deserving of it, and only by being morally good can you deserve happiness. This sounds very unconvincing as read, but Kant's view on this is both sophisticated and shifting. I don't know that he felt he ever had a great solution, and he died writing a book on the importance of our sense of aesthetics and its relation to morality. The CI is not a decision theory, nor is a decision theory a moral theory. It's important not to confuse the two. If you gave Kant the prisoner's dilemma, he would tell you to always defect, because you should always be honest. You would be annoyed, because he's mucking around with irrelevant features of the setup, and he would point out to you that the CI is a moral theory and that the details of the setup matter. The CI says nothing consistent or interesting about the prisoner's dilemma, nor should it. You cou
2DaFranker11y
Oh my, the confusion. First off, the quoted argument was, as far as I can tell, entirely meant as an illustrative abstraction. The culprit here is the devious function X(). Suppose I take the set of all possible logically coherent statements that could be made about any given mind. Within this set, 'X' is any given statement about one mind. X(m) represents whether this given statement is True, False or Undefined / Undecidable for this mind 'm'. For all X1..Xn, for a given mind 'm1', find all the X that are true. Then for all X() for m2, find those that are true. Supposing any given X has 50% probability of being true of any given m, then X1 being true for m1 has probability 0.5, being true for both m1 and m2 has probability 0.25, and so on, dividing the odds by 2 for each additional mind for which the conjunction must hold true. So for any given X, for m1..m(10^12), X has 1 / 2^(10^12) probability of being true of all of them, if we assume an a priori 50% chance of that statement being true of each. Conversely, for any given X, X(m) will be true for at least one m with (2^(10^12) - 1) / 2^(10^12) probability.

The central inference in the argument is that we do not know the structure of 'all possible minds' or of 'all possible arguments', but it is reasonable to believe that the space of all possible minds is sufficiently large and versatile that the subset A() of all possible statements X(), where A() are statements of the form "A(m) is true if mind 'm', when presented with the argument A, will change some specific belief / internal state / thought / behavior to state Y", is not true for all A(m).

This latter part of the argument rests mostly on the following reasoning: If there is any given argument A that will convince all currently known minds such that A(m) is true, and all known minds accept the argument, we can almost certainly construct a mind nearly identical to one of these m, but for which the input A is forbidden, or that will self-destruct immediately upon receiving it, or where the firs
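The exponential collapse this comment describes can be sketched numerically. This is a toy illustration only: the independence of minds and the 50% per-mind probability are the same simplifying assumptions made in the comment above, not claims about actual mind-space.

```python
# Toy model of the argument: if a statement X independently holds for
# each mind with probability p, the chance it holds for *all* of n
# minds is p**n, which shrinks exponentially as n grows.
def p_true_for_all(p, n):
    return p ** n

def p_true_for_at_least_one(p, n):
    # Complement of "false for every mind".
    return 1 - (1 - p) ** n

# With p = 0.5 and only 12 minds, "true for all" is already 1 in 4096;
# with a trillion minds the probability is far below anything
# representable in floating point.
print(p_true_for_all(0.5, 12))           # 0.000244140625
print(p_true_for_at_least_one(0.5, 12))  # 0.999755859375
```

Note that the conclusion is insensitive to the exact value of p: any fixed p strictly between 0 and 1 collapses toward zero as n grows.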
0[anonymous]11y
Excellently explained, thank you. The argument you present seems to me to be on the whole reasonable, but it involves two assumptions no UCMA enthusiast I know of would ever accept. And These two assumptions aren't argued for, nor are they attributed to any UCMA enthusiast, so I cant see any reason why she should accept them. Do they seem plausible to you? If so, can you give me reasons to accept them?
0DaFranker11y
This isn't a direct claim of fact, but a flat assumption to simplify illustration and calculations. The same argument extends for any arbitrary probability by showing that mathematically, no matter the probability in question (as long as it is a probability, and not a 0 or 1), as the number of possible minds grows towards infinity, the chance of X being true for all minds keeps decreasing in a similar manner. The hidden assumption behind this, of course, is that I have a high prior that the number of different possible minds is sufficiently high and the probability comparably low enough for this compound probabilistic growth to become critical. Since the number of known different minds already exceeds seven billion and any given random statement of the form "Mind 'm' believes that X is immoral under Y circumstances or context Z" is extremely unlikely to be true for any given mind (and like above, scales to nigh-infinitesimal when conjoining it across all seven billion), I think this hidden assumption is a very reasonable one. An example for clarity: Mind M ( John B Gato ) believes ( Looking at Jello ) is immoral in context ( Five days and seven minutes after every new moon for a period of three hours, or while scratching one's toe. ) Of course, this is an intuition about moral beliefs, not about being-convinced-by-arguments, but it's an intuition about the diversity of ways human minds process arguments that hints at the possible diversity among non-human mind structures. This is my own model / abstraction of Argument - Mind - Belief/Action. If a UCMA supporter does not believe that arguments lead to any change of belief or behavior in a mind once the argument is made to that mind, then that seems to directly contradict the very idea of a universally compelling argument that persuades any mind. So the quote above is a model for "If the mind is compelled by the argument, it will have a certain property which allows the argument to compel it" (this property may
0[anonymous]11y
It seems very unlikely to me that a UCMA enthusiast would grant that a UCMA has in any given case only a fifty percent chance of being UC. So to assume this begs the question against them. It may be that the UCMAist is being silly here, or that the burden is on them to show that things are otherwise, but that's not relevant to the question of the strength of EY's argument against UCMAs. It is, but it's a bit too reasonable: that is, it's unreasonable to think that the UCMAist actually thinks that the UCMA is already explicitly accepted by everyone, or even that everyone could be immediately or in any circumstances persuaded that the UCMA is true. UCMAs on this conception are obviously false, but then EY's argument is wholly trivial. Nor would we need an argument: it is not hard to come up with a single case of moral disagreement, and that's all that would be necessary. But this would be to attack a strawman. The UCMAist is committed to some sense being given to the UC bit, you're right. If we go to an actual UCMAist, like Kant, the explanation looks something like this: People say all sorts of things about their moral beliefs, but no one could have reasons to doubt the UCMA while holding consistent beliefs. This means that in principle, any mind could be persuaded to accept the UCMA, but not any mind under any circumstances. I (Kant) am committed to saying that every mind is so structured that the UCMA is an unavoidable deductive conclusion, not that every mind in every circumstance has or would arrive at the UCMA. So if this is what being a UCMA means: Then yes, UCMAs are impossible. But no one has ever thought otherwise, and it remains open whether something very much like them, namely moral arguments which every possible mind is committed to accepting (whether or not they do accept it), is possible.
0DaFranker11y
No no no. The point of the argument is that it doesn't matter what the probability is. Even if it's not 50%, the dynamics at work still make us end up with and exponentially small probability that something is universally compelling, just with the raw math. The burden is on the UCMAist to show that there are structural reasons why minds must necessarily have certain properties that also happen to coincide with the ability to received, understand, and be convinced by arguments, and also coincide with the specific pattern where at least one specific argument will result in the same understanding and the same resulting conviction for all possible minds. Both of these are a priori extremely unlikely due to certain intuitions about physics and algorithms and due to the mathematical argument Eliezer makes, respectively. I'd require clarification on what is meant by "committed to accepting" here. They accept the argument and change their beliefs, or they do not accept the argument and do not change their beliefs. For either case, they either do this in all situations or only some situations. They may sometimes accept it and sometimes not accept it. The Kant formulation you give seems explicitly about humans, only humans and exclusively humans and nothing else. The whole point of EY's argument against UCMAs is that there are no universally compelling arguments you could make to an AI built in a manner completely alien to humans that would convince the AI that it is wrong to burn your baby and use its carbon atoms to build more paperclips, even if the AI is fully sentient and capable of producing art and writing philosophy papers about consciousness and universally-compelling moral arguments. There's other things I'd say are just wrong about the way this description models minds, but I think that for now I'll stop here until I've read some actual Kant or something.
0[anonymous]11y
Right, but I can't imagine a UCMAist thinking this is a matter of probability. That is, the UCMAist will insist that this is a necessary feature of minds. The burden may be up to them, but that's not EY's argument (it's not an argument against UCMAs at all). And I took EY to be giving an argument to the effect that UCMAs are false or at least unlikely. You may be right that EY has successfully argued that if one has no good reasons to believe a UCMA exists, the probability of one existing must be assessed as low. But this isn't a premise the UCMAist will grant, so I don't know what work that point could do. You might be able to argue that, but that's not the way Kant sees it. Kant is explicit that this applies to all minds in mind-space (he kind of discovered the idea of mind-space, I think). As to what 'committed to accepting' means, you're right that this needs a lot of working out, working out I haven't done. Roughly, I mean that one could not have reasons for denying the UCMA while having consistent beliefs. Kant has to argue that it is structural to all possible minds to be unable to entertain an explicit contradiction, but that's at least a relatively plausible generalization. Still, tall order. On the whole, I entirely agree with you that a) the burden is on the UCMAist, b) this burden has not been satisfied here or maybe anywhere. I just wanted to raise a concern about EY's argument in this post, to the effect that it either begs the question against the UCMAist, or that it is invalid (depending on how it's interpreted). The shortcomings of the UCMAist aren't strictly relevant to the (alleged) shortcomings of EY's anti-UCMAist argument.

Does anyone know the terms for the positions for and against in the following scenario?:

Let's assume you have a one in a million chance of winning the lottery. Despite the poor chance, you pay five dollars to enter, and you win a large sum of money. Was playing the lottery the right choice?

Well, I would call them "expected value" and "hindsight".

Hindsight says, "Because we got a good result, it's all good."

Expected value says, "We got lucky, and cannot expect to get lucky again."

7Richard_Kennaway11y
Rational Inquirer says "The world gave me a surprise. Is there something I can learn from this surprise?"

And then it says. "We learned something about the random variables that led to that lottery draw. This doesn't generalize well."

I don't know if there are terms for the positions, but it seems pretty obvious that this is just a question of how you define "right choice". Not playing the lottery was the choice that seemed to maximize your utility given your knowledge at the time. Playing the lottery was the choice that actually maximized your utility. Which one you decide to call "right" is up to you. I think calling the former right is a little more useful because it describes how to actually make decisions, while the latter is only useful for looking back on decisions and evaluating them.
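In expected-value terms, the thread's numbers pin this down. A quick sketch (the prize amount is left as a free parameter, since the original scenario only says "a large sum of money"):

```python
# Expected dollar value of the thread's lottery: a 1-in-a-million
# chance at the prize, minus a $5 ticket.
P_WIN = 1 / 1_000_000
TICKET = 5.0

def expected_value(prize):
    return P_WIN * prize - TICKET

# The ticket is only a fair bet when the prize reaches $5,000,000;
# for a $1,000,000 prize the expected value is about -$4 per play.
```

So whether buying the ticket was the "right choice" in the expected-value sense depends only on these numbers, not on the lucky outcome.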

In decision theory, the "goodness" or "badness" of a decision is divorced from its actual outcome. Buying the lottery ticket was a bad decision regardless of whether you win.

However, don't forget that the utility you assign to money doesn't scale linearly with the amount of money; people tend to forget this. There is no rule that says you have to accept a certain $10 in exchange for a 10% chance at $100; on the contrary, it would be unusual to have a perfectly linear utility function in terms of money.

It's possible that your valuation of $5 is essentially 'nothing,' while your valuation of $1 million is 'extremely high.' If you'll permit me to construct a ridiculous scenario: let's say that you're guaranteed an income of $5 a day by the government, that you have no other way of obtaining steady income due to a disability, and that your living expenses are $4.99 per day. You will never be able to save $1 million; even if you save 1c per day and invest it as intelligently as possible, you will probably never accumulate $1 million. Let's further assume that you will be significantly happier if you could buy a particular house which costs exactly $1 million...
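The scenario above can be made concrete. The utility function here is entirely made up for illustration (a step at the price of the house, near-zero value for pocket change); the point is only that a nonlinear valuation can flip the decision even when the dollar expectation is negative:

```python
P_WIN = 1 / 1_000_000
TICKET = 5
PRIZE_NET = 1_000_000  # winnings net of the ticket: enough for the house

# Hypothetical threshold utility: pocket change is worth almost nothing
# to this agent, while reaching the $1M house is worth a great deal.
def utility(net_dollars):
    if net_dollars >= 1_000_000:
        return 1_000_000.0
    return 0.001 * net_dollars

eu_play = P_WIN * utility(PRIZE_NET) + (1 - P_WIN) * utility(-TICKET)
eu_skip = utility(0)

# In dollars the same bet loses about $4 in expectation...
ev_dollars = P_WIN * PRIZE_NET + (1 - P_WIN) * (-TICKET)
# ...yet eu_play > eu_skip: playing maximizes this agent's expected utility.
```

Nothing here rescues the lottery for an agent whose utility is roughly linear in money; it only shows the decision-theoretic verdict is about expected utility, not expected dollars.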

0NancyLebovitz11y
How does this interact with the idea that rationalists should win?
5moridinamael11y
Since we're talking about probabilistic decision theories, if you consistently make "good decisions" you will still obtain "bad outcomes" some of the time. This should not be cause to start doubting your decision procedure. If you say you are 90% confident, you should be thrilled if you are wrong 10% of the time - it means you're perfectly calibrated. A perfectly rational agent working with incomplete or incorrect information will lose some of the time. The decisions of the agent are still optimal from the agent's frame of reference.
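That calibration point can be checked with a quick simulation. The "agent" here is just a coin weighted to the stated confidence, which is what a perfectly calibrated forecaster's track record looks like in aggregate:

```python
import random

# If an agent's 90%-confidence predictions come true about 90% of the
# time, it is perfectly calibrated: being wrong roughly 10% of the
# time is the expected (and desirable) outcome, not a failure.
def simulated_hit_rate(confidence, n_predictions, seed=0):
    rng = random.Random(seed)
    hits = sum(rng.random() < confidence for _ in range(n_predictions))
    return hits / n_predictions

rate = simulated_hit_rate(0.9, 100_000)
# rate sits very close to 0.9, i.e. wrong on about 10% of predictions
```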
3NoSignalNoNoise11y
Rationalists should follow winning strategies. If you followed a bad strategy and got lucky, that doesn't mean you should keep following it. The relevant question is what strategy you should follow going forward. Asking whether a particular past choice was "right" or "wrong", if the answer has no impact on your future choices seems like a wrong question.
1MrMind11y
Rationalists win more by virtue of having a more accurate model of the world, and clearly this helps only in some domains, while in others only a favorable position in some kind of potential landscape matters (e.g. a beauty contest). Winning the lottery is one of those cases: buying the ticket is of course bad from a decision-theory point of view, but one can always be lucky enough to receive a great gain from a bad decision. In the same way, an irrational person can hold a correct belief by virtue of pure luck.
1AlexSchell11y
The "divorce" is logical/conceptual, not evidential. It remains true that "rationalists should win", in the presumed intended sense that rationality wins in expectation, that winning is evidence of rationality, and that we should read the dictum a bit more strongly to correct for our tendency to ascribe non-winning to bad luck.
0A1987dM11y
“Should” != “will always”. Once in a while, unlikely things do happen.
8A1987dM11y
More than two different positions, I think that's two different senses of “right”. Once you replace it with “yielding a better expected outcome given what you knew when making the choice” or “yielding a better outcome given what we know now”, people wouldn't actually disagree about anything. (I myself prefer to use “right” with the former meaning.)
3A1987dM11y
(I've seen people using “right” for the former and “lucky” for the latter, and people using “rational” for the former and “right” for the latter.)
4Emile11y
Yet Another Comment Not Answering Your Question .... A lot depends on whether this "large sum of money" is more or less than five million dollars.
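Presumably the $5M figure comes from expected-value arithmetic. A quick sketch, assuming a $5 ticket and one-in-a-million odds of winning (my assumptions, not figures stated in the thread):

```python
ticket_price = 5.0
win_probability = 1 / 1_000_000

# Expected value per ticket: negative below a $5M jackpot,
# break-even at exactly $5M, positive above it.
for jackpot in (1_000_000, 5_000_000, 10_000_000):
    expected_value = win_probability * jackpot - ticket_price
    print(f"${jackpot:,} jackpot: EV per ticket = ${expected_value:.2f}")
```

Of course, as the replies note, positive expected dollars is not the same thing as positive expected utility.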
7A1987dM11y
I guess that in most ordinary situations the utility of $5M isn't anywhere near 1M times the utility of $5.
0RolfAndreassen11y
Even with positive expected value, you may be better off passing up the bet depending on your tolerance for variance and the local shape of your utility-of-money function.
1latanius11y
You won. Aren't rationalists supposed to be doing that? As far as you know, your probability estimate for "you will win the lottery" (in your mind) was wrong. It is another question how that updates the probability of "you would win the lottery if you played next week", but whatever made you buy that ticket (even though the "rational" estimates voted against it... "trying random things", whatever it was) should be applied more in the future. Of course, the result is quite likely to be "learning lots of nonsense from a measurement error", but you should definitely update having seen that, and a decision that, once you update on its outcome, you'd make more often in the future is definitely a right one. If I won the lottery, I would definitely spend $5 for another ticket. And eventually you might realize that it's just Omega having fun. (actually, isn't one-boxing the same question?)
0Zaine11y
Playing the lottery was an irrational decision, but was the right choice. The outcome, as stated by moridinamael, is divorced from the decision-making processes that went into it. Assuming an unambiguous result that can only be either good or bad, if the most rational choice based upon the evidence then at hand led to a bad outcome, one still made the best (most rational) decision - but, considering the bad result, 'twas the wrong choice. This classification is useful when determining the competency of a leader - they may have been an extremely rational decision maker but made nothing but wrong choices due to poor quality of information. I forget my source - as for the terms, fubarofusco's "Hindsight" fits well, while "Expected Value" does not.

I have one particular project I'd like to work on that seems like it should be horribly quick and easy: done and out the door in a week. I've tried starting it a number of times, and each time hit one of the most unpleasant Ugh Fields squatting in my mindscape (blah, and I've broken through even essay-related Ugh Fields well enough to complete several college courses a few times).

I'm considering just paying a competent programmer to do it. I'd probably try finding someone on ODesk, if I/someone else doesn't get to it before then.

The project is a relatively simple image viewi... (read more)

The Virtue of Trick Questions

It's about thinking like a Slytherin: never take things at face value. Don't answer the surface question; answer the query that motivated the question.

5TheOtherDave11y
So, I was going to disagree with your summary, but after reading the article, I have to qualify that. In situations like the author describes, where I'm trying to sell something (and, yes, interviews qualify), then sure, look for and answer the "deep question".... which might not be a question at all. More generally, in such situations approach every interaction as though your actual goal were to alter your interlocutor's behavior, because, well, that is your actual goal. That being said, I prefer my life when most of my interactions with people don't have the primary goal of altering their behavior.

When is it appropriate to move a post to Main?

When is it appropriate to submit a post to main initially?

0shminux11y
If you are in doubt whether it is appropriate, it isn't. Err on the side of posting to Discussion. Move a post to Main when you get 20+ upvotes in Discussion and/or a couple of comments saying that your post is worth promoting.

I initially thought I would really like this article on consciousness after death. I did not. The guy comes off as a complete crackpot, given my understanding of neurobiology. (Although I won't dispute his overall point, nor would many here, I think, that we continue to exist for a bit after we are legally dead.) I would appreciate anyone who is so motivated to look up some things on why a lot of the things he says are completely bogus. I replied to the person who sent me this article with a fairly superficial analysis, but if anyone knows of some solid stu... (read more)

0Zaine11y
Say you want to raise your arm. Your intent will initiate the mental processes required. We don't know how the subjective thought "Raise arm!" initiates cellular processes. Intent may be related to a function of the parietal cortex, but how thinking something initiates cellular processes we are unsure of. To this they refer. The brain produces an electromagnetic field. They were hypothesising that the field has a reciprocal effect on the cells that produce it, and that this effect is 'consciousness', or whatever our subjective experience communicates to initiate an action. Maybe when we can clone a human brain with green fluorescent protein we'll find out that all neurones initiate other neurones, and thus we function. We don't know yet. I'd beware of dismissing an expert in a field in which one has no domain expertise - check or ask first. This is the corollary to trusting experts too much.

Has anyone used LifeRPG? It seems interesting, but it requires a download and an initial time investment to set up, so I'm reluctant to try it without a recommendation.

[-][anonymous]11y00

Hello, I am a young person who recently discovered Less Wrong, HP:MOR, Yudkowsky, and all of that. My whole life I've been taught reason and science but I'd never encountered people so dedicated to rationality.

I quite like much of what I've found. I'm delighted to have been exposed to this new way of thinking, but I'm not entirely sure how much to embrace it. I don't love everything I've read although some of it is indeed brilliant. I've always been taught to be skeptical, but as I discovered this site my elders warned me to be skeptical of skepticis
... (read more)
[This comment is no longer endorsed by its author]

I notice that most of the innovation in game accessibility (specifically accessibility to the visually impaired) comes from sighted or formerly-sighted developers. I feel like this is a bad thing. I'm not sure why I feel this way, considering that the source of innovation is less important than that it happens. Maybe it's a sort of egalitarian instinct?

(To clarify, I mean innovation in indie games like those in the audiogames.net database. Mainstream console/PC games have so little innovation toward accessibility as to be negligible, so far as I can tell.)

Have you adjusted for (what I assume is) the fact that most game developers are sighted? In fact, have you checked whether there even exist any not-even-formerly-sighted game developers? It seems like that would be a tough row to hoe even by the standards of blind-from-birth life.

That aside, I'm really not seeing the problem here. You're going to complain about people being altruistic towards the visually impaired? Really confused about your thought process.