If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, how you came to identify as a rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.

(This is the fourth incarnation of the welcome thread, the first three of which now have too many comments. The text is by orthonormal, from an original by MBlume.)

A few notes about the site mechanics

Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).
You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. This immediate easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.
However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning, not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.
Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.
It's definitely worth your time commenting on old posts; veteran users look through the recent comments thread quite often (there's a separate recent comments thread for the Discussion section, for whatever reason), and a conversation begun anywhere will pick up contributors that way.  There's also a succession of open comment threads for discussion of anything remotely related to rationality.
Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.
EXTRA FEATURES:
There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: in the top-level forum, posts require the author to have 20 karma in order to publish, and any upvotes or downvotes on the post are multiplied by 10. Thus there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion.

A few notes about the community

If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood and what you might still need to take some time explaining.
If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)
If you want to write a post about a LW-relevant topic, awesome!  I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma; honestly, you don't know what you don't know about the community norms here.)
If you'd like to connect with other LWers in real life, we have meetups in various parts of the world. Check the wiki page for places with regular meetups, or the upcoming (irregular) meetups page.
There's also a Facebook group. If you have your own blog or other online presence, please feel free to link it.

If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address. 
Normal_Anomaly 
Randaly 
shokwave 
Barry Cotter

A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.

A list of some posts that are pretty awesome

I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:

More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.

Welcome to Less Wrong, and we look forward to hearing from you throughout the site.

Hello!

  • Age: Years since 1995
  • Gender: Female
  • Occupation: Student

I actually started an account two years ago, but after a few comments I decided I wasn't emotionally or intellectually ready for active membership. I was confused and hurt for various reasons that weren't Less Wrong's fault, and I backed away to avoid saying something I might regret. I didn't want to put undue pressure on myself to respond to topics I didn't fully understand. Now, after many thousands of hours reading and thinking about neurology, evolutionary psychology, and math, I'm more confident that I won't just be swept up in the half-understood arguments of people much smarter than I am. :)

Like almost everyone here, I started with atheism. I was raised Hindu, and my home has the sort of vague religiosity that is arguably the most common form in the modern world. For the most part, I figured out atheism on my own, when I was around 11 or 12. It was emotionally painful and socially costly, but I'm stronger for the experience. I started reading various mediocre atheist blogs, but I got bored after a couple of years and wanted to do something more than shoot blind fish in tiny barrels. I wanted to build something...

Welcome to Less Wrong, and I for one am glad to have you here (again)! You sound like someone who thinks very interesting thoughts.

I had to face the fact that mere biology may have systematically biased my half of the population against greatness. And it hurt. I had to fight the urge to redefine intelligence and/or greatness to assuage the pain.

I can't say that this is something that has ever really bothered me. Your IQ is what it is. Whether or not there's an overall gender-based trend in one direction or another isn't going to change anything for you, although it might change how people see you. (If anything, I found that I got more attention as a "girl who was good at/interested in science", which was irritating and made me want to rebel and go into a "traditionally female" field just because I could.)

Basically, if you want to accomplish greatness, it's about you as an individual. Unless you care about the greatness of others, and feel more pride or solidarity with females than with males who accomplish greatness (which I don't), the statistical tendency doesn't matter.

I don't want to lose the hope/idealism/inner happiness that makes me

...

I know that it's not particularly rational to feel more affiliation with women than men, but I do. It's one of the things my monkey brain does that I decided to just acknowledge rather than constantly fight. It's helped me have a certain kind of peace about average IQ differentials. The pain I described in the parent has mellowed. Still, I have to face the fact that if I want to major in, say, applied math, chances are I might be lonely or below-average or both. I wish I had the inner confidence to care about self-improvement more than competition, but as yet I don't.

ETA: I characterize "idealism" as a hope for the future more than a belief about the present.

Still, I have to face the fact that if I want to major in, say, applied math, chances are I might be lonely or below-average or both.

As long as you know your own skills, there is no need to use your gender as a predictor. We fall back on worse information only in the absence of better information, because worse information can still be better than nothing. We don't need to predict what we already know.

When we already know that e.g. "this woman has IQ 150" or "this woman has won a mathematical olympiad", there is no need to mix the general male and female IQ or math curves into the equation. (That's what you do only when you see a random woman and have no other information.)

If there are a hundred green balls in the basket and one red ball, it makes sense to predict that a randomly picked ball will almost surely be green. But once you have randomly picked a ball and it happened to be red... then it no longer makes sense to worry that this specific ball might still be green somehow. It's not; end of story.

If you had no experience with math yet, then I'd say that, based on your gender, your chances of being a math genius are small. But that's not the situation; you already have some math experience. So make your guesses based on that experience. Your gender is already included in the probability of you having that specific experience. Don't count it twice!
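To make the "don't count it twice" point concrete, here is a minimal Bayesian sketch in Python. It is an illustration only: the groups, priors, and win rates are invented numbers, not claims about any real population.

    # A toy model: once we condition on a strong individual observation
    # (an olympiad win), the gap between the groups' priors mostly washes out.
    # All numbers below are hypothetical.
    p_genius = {"A": 0.02, "B": 0.01}   # prior P(genius) by group (made up)
    p_win_given_genius = 0.5            # P(olympiad win | genius), made up
    p_win_given_not = 0.001             # P(olympiad win | not genius), made up

    for group, prior in p_genius.items():
        # Bayes' rule: P(genius | win) = P(win | genius) * P(genius) / P(win)
        numerator = p_win_given_genius * prior
        denominator = numerator + p_win_given_not * (1 - prior)
        print(f"group {group}: P(genius | olympiad win) = {numerator / denominator:.2f}")

This prints roughly 0.91 for group A and 0.83 for group B: the 2:1 gap between the priors mostly disappears once the observation is conditioned on, and re-applying the group prior afterwards would double-count evidence the observation already reflects.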

Bugmaster · 7 karma · 12y
To be perfectly accurate, any person's chances of being a math genius are going to be small anyway, regardless of that person's gender. There are very few geniuses in the world.
Rubix · -2 karma · 12y
What's true of one apple isn't true of every apple.
ViEtArmis · 6 karma · 12y
It is particularly not rational to ignore the effect of your unconscious on your relationships. That fight is a losing battle (right now), so if having happy relationships is a goal, pursuing it requires you to pay attention. There is almost no average IQ differential, since men pad out the bottom as well: greater chromosomal variation in men leads to stupidity as often as intelligence. Really, this gender disparity only matters at the far extremes. Men may pad out the top and bottom 1% (or something like that) in IQ, but applied mathematicians aren't all top 1% (or even 10%, in my experience). It is easy to mistake finally being around people who think like you do (as in high IQ) for being less intelligent than them, but this is a trick!
OnTheOtherHandle · 7 karma · 12y
Sorry, you're right, I did know that. (And it's exasperating to see highly intelligent men make the rookie mistake of saying "women are stupid" or "most women are stupid" because they happen to be high-IQ. There's an obvious selection bias - intelligent men probably have intelligent male friends but only average female acquaintances - because they seek out the women for sex, not conversation.) I was thinking about "IQ differentials" in the very broad sense, as in "it sucks that anyone is screwed over before they even start." I also suffer from selection bias, because I seek out people in general for intelligence, so I see the men to the right of the bell curve, while I just sort of abstractly "know" there are more men than women to the left, too.

And it's exasperating to see highly intelligent men make the rookie mistake of saying "women are stupid" or "most women are stupid" because they happen to be high-IQ. There's an obvious selection bias - intelligent men probably have intelligent male friends but only average female acquaintances - because they seek out the women for sex, not conversation.

Another possible explanation comes to mind: people with high IQs consider the "stupid" borderline to be significantly above 100 IQ. Then if they associate equally with men and women, the women will more often be stupid; and if they associate preferentially with clever people, there will be fewer women.

(This doesn't contradict selection bias. Both effects could be at play.)

ViEtArmis · 8 karma · 12y
You'd have to raise the bar really far before any actual gender-based differences showed up. It seems far more likely that the cause is a cultural bias against intellectualism in women (women will under-report IQ by 5ish points and men over-report by a similar margin, women are poorly represented in "smart" jobs, etc.). That makes women present themselves as less intelligent and makes everyone perceive them as less intelligent.
juliawise · 5 karma · 12y
Does anyone know of a good graph that shows this? I've seen several (none citing sources) that draw the crossover in quite different places. So I'm not sure what the gender ratio is at, say, IQ 130.
Vaniver · 3 karma · 12y
La Griffe Du Lion has good work on this, but it's limited to math ability, where the male mean is higher than the female mean and the male variance is higher than the female variance. The formulas from the first link work for whatever mean and variance you want to use, so they can be updated with more applicable IQ figures, and you can see how an additional 10-point "reporting gap" affects things.
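For readers who want to play with that arithmetic, here is a small Python sketch of the tail-ratio calculation, assuming normal distributions. The means and standard deviations below are placeholders chosen for illustration, not figures taken from the linked source.

    # How mean/variance assumptions translate into a gender ratio in the tail.
    from scipy.stats import norm

    def tail_fraction(threshold, mean, sd):
        # Fraction of a normal(mean, sd) population scoring above threshold.
        return norm.sf(threshold, loc=mean, scale=sd)

    male_mean, male_sd = 100.0, 15.0      # placeholder values
    female_mean, female_sd = 100.0, 14.0  # placeholder values

    for iq in (115, 130, 145):
        ratio = tail_fraction(iq, male_mean, male_sd) / tail_fraction(iq, female_mean, female_sd)
        print(f"IQ > {iq}: male/female ratio = {ratio:.2f}")

Even with identical means, a small variance gap makes the ratio grow with the threshold (about 1.1 at 115 and about 2.1 at 145 for these placeholders); shifted means, or an extra 10-point reporting gap, are just different parameter choices in the same calculation.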
OnTheOtherHandle · 4 karma · 12y
Unfortunately, intelligence in areas other than math seems to be an "I know it when I see it" kind of thing. It's much harder to design a good test for some of the "softer" disciplines, like "interpersonal intelligence" or even language skills, and it's much easier to pick a fight with results you don't like. It could be that because intelligence tests are biased toward easy measurement, they focus too much on math, so they under-predict women's actual performance at most jobs not directly related to abstract math skills.
ViEtArmis · 0 karma · 12y
Of course, if you use IQ testing, it is specifically calibrated to remove/minimize gender bias (as are the SAT and ACT), and intelligence testing is horribly fraught with infighting and moving targets. I can't find any research that doesn't at least mention that social factors likely poison any experimental result. It doesn't help any that "intelligence" is poorly defined and thus difficult to quantify. Considering that men are more susceptible to critical genetic failure, maybe the mean is higher for men on some tests because the low outliers had defects that made them impossible to test (such as being stillborn)?
OnTheOtherHandle · 0 karma · 12y
The SAT doesn't seem to be calibrated to make average math scores the same, at least: as late as 2006, there was still a significant gender gap.
ViEtArmis · 0 karma · 12y
Apparently, the correction was in the form of altering essay and story questions to de-emphasize sports and business and ask more about arts and humanities. This hasn't been terribly effective. The gap is smaller in the verbal sections, but it's still there. Given that the entire purpose of the test is to predict college grades directly and women do better in college than men, explanations and theories abound.
Desrtopa · 0 karma · 12y
Not a rigorously conducted study, but this (third poll) suggests a rather greater tendency to at least overestimate if not willfully over-report IQ, with both men and women overestimating, but men overestimating more.
OnTheOtherHandle · 4 karma · 12y
You're right; my explanation was drawn from many PUA-types who had said similar things, but this effect is perfectly possible in non-sexual contexts, too. There's actually little use in using words like "stupid", anyway. What's the context? How intelligent does this individual need to be to do what they want to do? Calling people "stupid" says "reaching for an easy insult," not "making an objective/instrumentally useful observation." Sure, there will be some who say they'll use the words they want to use and rail against "censorship", but connotation and denotation are not so separate. That's why I didn't find the various "let's say controversial, unspeakable things because we're brave nonconformists!" threads on this site to be all that helpful. Some comments certainly were both brave and insightful, but I felt on the whole a little bit of insight was bought at the price of a whole lot of useless nastiness.
Jayson_Virissimo · 5 karma · 12y
Arguably, if it were "broken" this way, it would be a mistake (specifically, of generalizing from too small a sample size). I have a job where I am constantly confronted with suffering and death, but at the end of the day, I can still laugh just like everyone else, because I know my experience is a biased sample and that there is still lots of good going on in the world.
Rubix · 2 karma · 12y
I like this post more than I like most things; you've helped me, for one, with a significant amount of distress.

I had to face the fact that mere biology may have systematically biased my half of the population against greatness. And it hurt. I had to fight the urge to redefine intelligence and/or greatness to assuage the pain.

Consciously keeping your identity small and thus not identifying with everyone who happens to have the same internal plumbing might be helpful there.

OnTheOtherHandle · 9 karma · 12y
PG is awesome, but his ideas do basically fall into the category of "easier said than done." This doesn't mean "not worth doing," of course, but practical techniques would be way more helpful. It's easier to replace one group with another (arguably better?) group than to hold yourself above groupthink in general.
shminux · 5 karma · 12y
My approach is to notice when I want to say/write "we", as opposed to "I", and examine why. That's why I don't personally identify as a "LWer" (only as a neutral and factual "forum regular"), despite the potential for warm fuzzies resulting from such an identification. There is an occasional worthy reason to identify with a specific group, but gender/country/language/race/occupation/sports team are probably not good criteria for such a group.
OnTheOtherHandle · 1 karma · 12y
Thank you! I'll look for that.
shminux · 3 karma · 12y
Here is a typical LW comment that raises the "excessive group identification" red flag for me.
ViEtArmis · 3 karma · 12y
I always think of that in the context of conflict resolution, and refer to it as "telling someone that what they did was idiotic, not that they are an idiot." Self-identifying is powerful, and people are pretty bad at it because of a confluence of biases.
GLaDOS · 6 karma · 12y
Great to see you here, and great to hear you took the time to read up on the relevant material before jumping in. I'm confident you will find that many people who comment quite a bit don't have such prudence, so don't be surprised if you outmatch a long-time commenter. (^_^) Yesss! This is exactly how I felt when I found this community.
Xachariah · 5 karma · 12y
I'm not sure about Disney, but you should still be able to enjoy Avatar. Avatar (TLA and Korra) is in many ways a deconstruction of magical worlds. They take the basic premise of kung-fu magic and then let that propagate to its logical conclusions. The TLA war was enabled by rapid industrialization, once one nation realized it could harness its breaking of the laws of thermodynamics for energy. The premise of S1 Korra is exploring social inequality in the presence of randomly distributed magical powers. In these ways, Avatar is less Harry Potter and more HPMoR.
Alicorn · 0 karma · 12y
They run strongly in families (although it's not clear exactly how, since neither of Katara's parents appears to have been a waterbender). It's not really random.
Xachariah · 0 karma · 12y
You are correct. I wouldn't consider it much different from personality: part heritable, part environment and upbringing, and part randomness. Now you've got me wondering whether philosophers in the Avatar universe have debates on whether your element/bending is nature vs. nurture.
OnTheOtherHandle · 0 karma · 12y
Now I want an ATLA fanfic infused with Star Trek-style pensive philosophizing. :D I would argue that it has even more potential than HP for a rationalist makeover. Aang stays in the iceberg and Sokka saves the planet?
OnTheOtherHandle · -1 karma · 12y
Honestly, I was disappointed with the ending of Season 1 Korra: (rot13) Nnat zntvpnyyl tvirf Xbeen ure oraqvat onpx nsgre Nzba gbbx vg njnl, naq gurer ner ab creznarag pbafrdhraprf gb nalguvat. I'm not necessarily idealistic enough to be happy with a world that has no consequences or really difficult choices; I'm just not cynical enough to find misanthropy and defeatism cool. That's why children's entertainment appeals to me - while it can be overly sugary-sweet, adult entertainment often seems to be both narrow and shallow, and at the same time cynical. Outside of science fiction, there doesn't seem to be much adult entertainment that's about things I care about - saving the world, doing something big and important and good. ETA: What Zach Weiner makes fun of here - that's what I'm sick of. Not just misanthropy and undiscriminating cynicism, but glorifying it as the height of intelligence. LessWrong seemed very pleasantly different in that sense.
Bugmaster · 1 karma · 12y
I agree; I found the ending very disappointing, as well. The authors throw one of the characters into a very powerful personal conflict, making it impossible for the character to deny the need for a total accounting and re-evaluation of the character's entire life and identity. The authors resolve this personal conflict about 30 seconds later with a Deus Ex Machina. Bleh.
Nornagest · 0 karma · 12y
Are you sure that's rot13? It's generating gibberish in two different decoders for me, although I'm pretty sure I know what you're talking about anyway. ETA: Yeah, looks like a shift of three characters right. ETA AGAIN: Fixed now, thanks.
OnTheOtherHandle · 0 karma · 12y
Sorry, I dumped it into Braingle and forgot to change the setting.
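(Since shift ciphers keep tripping people up in this subthread: rot13 is simply a Caesar shift of 13, which is why shift-of-3 output looks like broken rot13. A minimal Python sketch, for anyone who would rather decode locally than paste spoilers into a website:)

    # rot13 and general Caesar shifts; rot13 is its own inverse (13 + 13 = 26).
    import codecs

    def caesar(text, shift):
        # Shift alphabetic characters by `shift` places, preserving case.
        out = []
        for ch in text:
            if ch.isalpha():
                base = ord("a") if ch.islower() else ord("A")
                out.append(chr((ord(ch) - base + shift) % 26 + base))
            else:
                out.append(ch)
        return "".join(out)

    print(codecs.encode("Hello", "rot13"))  # Uryyb  (apply again to decode)
    print(caesar("Khoor", -3))              # Hello  (undoing a shift of 3)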
Xachariah · -2 karma · 12y
V gubhtug vg jnf irel rssrpgvir. Gubhtu irvyrq fb xvqf jba'g pngpu vg, univat gur qnevat gb fubj n znva punenpgre pbagrzcyngvat naq nyzbfg nggrzcgvat fhvpvqr jnf n terng jnl gb pybfr gur nep. Gurer'f nyernql rabhtu 'npgvba' pbafrdhraprf qhr gb gur eribyhgvba, fb vg'f avpr onynapvat bhg univat gur irel raq or gur erfhygvat punatrf gb Xbeen'f punenpgre. Jura fur erwrpgf fhvpvqr nf na bcgvba, fur ernyvmrf gung fur ubyqf vagevafvp inyhr nf n uhzna orvat engure guna nf na Ningne. Cyhf nf bar bs gur ener srznyr yrnqf va puvyqera'f gryrivfvba, gur qenzngvp pyvznk bs gur fgbel orvat gur qr-bowrpgvsvpngvba bs gur srznyr yrnq vf uhtr. Nyfb gur nagv-fhvpvqr zrffntr orvat gung onq thlf pbzzvg zheqre/fhvpvqr naq gur tbbq thlf qba'g vf tbbq gb svavfu jvgu. V'z irel fngvfsvrq jvgu gurz raqvat vg gung jnl. Znal fubjf raq jvgu jvgu ovt onq orvat orngra. Fubjf gung cergraq gb or zngher unir cebgntbavfgf qvr ng gur raq. Ohg Xbeen'f raqvat vf bar bs gur bayl gung fgevxrf zr nf npghnyyl zngher, orpnhfr vg'f qverpgyl n zbeny/cuvybfbcuvpny ceboyrz ng gur raq.
OnTheOtherHandle · 0 karma · 12y
Gung'f na vagrerfgvat jnl gb chg vg, naq V guvax V'z unccvre jvgu gur raqvat orpnhfr bs gung. Ubjrire, V jnf rkcrpgvat Frnfba Gjb gb or Xbeen'f wbhearl gbjneq erpbirel (rvgure culfvpny be zragny be obgu) nsgre Nzba gbbx njnl ure oraqvat. Vg'f abg gung V qba'g jnag ure gb or jubyr naq unccl; vg'f whfg gung vg frrzrq gbb rnfl. V gubhtug Nzba/Abngnx naq Gneybpx'f fgbel nep jnf zhpu zber cbjreshy. Va snpg, gurve zheqre/fhvpvqr frrzrq gb unir fb zhpu svanyvgl gung V svtherq vg zhfg or gur raq bs gur rcvfbqr hagvy V ernyvmrq gurer jrer fvk zvahgrf yrsg. Va bgure jbeqf, vg'f terng gung gur fgbel yraqf vgfrys gb gur vagrecergngvba gung vg jnf nobhg vagevafvp jbegu nf n uhzna orvat qvfgvapg sebz bar'f cbjref, ohg gurl unq n jubyr frnfba yrsg gb npghnyyl rkcyvpvgyl rkcyber gung. Nnat'f wbhearl jnf nobhg yrneavat gb fgbc ehaavat njnl naq npprcg gur snpg gung ur vf va snpg gur Ningne, naq ur pna'g whfg or nal bgure xvq naq sbetrg nobhg uvf cbjre naq erfcbafvovyvgl. Xbeen'f wbhearl jnf gb or nobhg npprcgvat gung whfg orpnhfr fur vf gur Ningne, naq fur ybirf vg naq qrevirf zrnavat sebz vg, qbrfa'g zrna fur'f abguvat zber guna n ebyr gb shysvyy. Vg sryg phg fubeg. Nnat tnir vg gb Xbeen; fur qvqa'g svaq vg sbe urefrys.
Desrtopa · -1 karma · 12y
V funerq BaGurBgureUnaqyr'f qvfnccbvagzrag jvgu gur raqvat, naq V jnfa'g irel vzcerffrq jvgu Xbeen'f rzbgvbany erfbyhgvba ng gur raq. Fur uvgf n anqve bs qrcerffvba, frrzvatyl pbagrzcyngrf fhvpvqr, naq gura... rirelguvat fhqqrayl erfbyirf vgfrys. Fur trgf ure oraqvat onpx, jvgubhg nal rssbeg be cynaavat, naq jvgu ab zber fvtavsvpnag punenpgre qrirybczrag guna univat orra erqhprq gb qrfcrengvba. Gur Ovt Onq vf xvyyrq ol fbzrbar ryfr juvyr gur cebgntbavfgf' nggragvba vf ryfrjurer, naq Xbeen tnvaf gur novyvgl gb haqb nyy gur qnzntr ur pnhfrq va gur svefg cynpr. Gur fbpvrgny vffhrf sebz juvpu ur ohvyg uvf onfr bs fhccbeg jrer yrsg hanqqerffrq, ohg jvgubhg n pyrne nirahr gb erfbyir gurz nf n pbagvahngvba bs gur qenzngvp pbasyvpg. Vs Xbeen unq orra qevira gb qrfcrengvba, naq nf n erfhyg, frnepurq uneqre sbe fbyhgvbaf naq sbhaq bar, V jbhyq unir sbhaq gung n ybg zber fngvfslvat. Gung'f bar bs gur ernfbaf V engr gur raqvat bs Ningne: Gur Ynfg Nveoraqre uvture guna gung bs gur svefg frnfba bs Xbeen. Vg znl unir orra vanqrdhngryl sberfunqbjrq naq orra fbzrguvat bs n Qrhf Rk Znpuvan, ohg ng yrnfg Nnat qrnyg jvgu n fvghngvba jurer ur jnf snprq jvgu bayl hanpprcgnoyr pubvprf ol frrxvat bgure nygreangvirf, svaqvat, naq vzcyrzragvat bar. Ohg Xbeen'f ceboyrzf jrer fbyirq, abg ol frrxvat fbyhgvbaf, ohg ol pbzvat va gbhpu jvgu ure fcvevghny fvqr ol ernpuvat ure rzbgvbany ybj cbvag. Jung Fcvevg!Nnat fnvq unf erny jbeyq gehgu gb vg. Crbcyr qb graq gb or zber fcvevghny va gurve ybjrfg naq zbfg qrfcrengr pvephzfgnaprf. Ohg engure guna orvat fbzrguvat gb ynhq, V guvax guvf ercerfragf n sbez bs tvivat hc, jurer crbcyr ghea gb gur fhcreangheny sbe fbynpr be ubcr orpnhfr gurl qba'g oryvrir gurl pna fbyir gurve ceboyrzf gurzfryirf. Fb nf erfbyhgvbaf bs punenpgre nepf tb, V gubhtug gung jnf n cerggl onq bar. Nyy va nyy V jnf n sna bs gur frevrf, ohg gur raqvat haqrefubg zl rkcrpgngvbaf.
iceman · 2 karma · 12y
Have you seen the new My Little Pony show? It's really good. It's sweet without being twee.
hankx7787 · 2 karma · 12y
I've been through this kind of thing before, and Less Wrong did nothing for me in this respect (although Less Wrong is awesome for many other reasons). Reading Ayn Rand on the other hand made all the difference in the world in this respect, and changed my life.
OnTheOtherHandle · 4 karma · 12y
I haven't read Ayn Rand, but those who do seem to talk almost exclusively about the politics, and I just can't work up the energy to get too excited about something I have such little chance of affecting. Would you mind telling me where/how Ayn Rand discussed evolutionary psychology or modular minds? I'm curious now. :)
OrphanWilde · 5 karma · 12y
She doesn't, is the short answer. She does discuss, however, the integration of personal values into one's philosophical system. I was struggling with a possibly similar issue; I had previously regarded rationalism as an end in itself. Emotions were just baggage that had to be overcome in order to achieve a truly enlightened state. If this sounds familiar to you, her works may help.

The short version: You're a human being. An ethical system that demands you be anything else is fatally flawed; there is no universal ethical system, and what is ethical for a rabbit is not ethical for a wolf. It's necessary for you to live, not as a rabbit, not as a rock, not as a utility or paperclip maximizer, but as a human being. Pain, for example, isn't to be denied - for to do so is as sensible as denying a rock - but experienced as a part of your existence. (That you shouldn't deny pain is not the same as that you should seek it; it is simply a statement that it's a part of what you are.)

Objectivism, the philosophy she founded, is named for the claim that ethics are objective: not subjective, which is to say whatever you want them to be; not universal, which is to say a single ethical system in the whole universe that applies equally to rocks, rabbits, mice, and people; but objective, which is to say that ethics exists as a definable property for a given subject, given certain preconditions (ethical axioms; she chose "Life" as her ethical axiom).
OnTheOtherHandle · 8 karma · 12y
I don't know that I would call that "objective." I mean, the laws of physics are objective because they're the same for rabbits and rocks and humans alike. I honestly don't trust myself to go much more meta than my own moral intuitions. I just try not to harm people without their permission or deceive/manipulate them. Yes, this can and will break down in extreme hypothetical scenarios, but I don't want to insist on an ironclad philosophical system that would cause me to jump to any conclusions on, say, Torture vs. Dust Specks just yet. I suspect that my abstract reasoning will just be nuts. My understanding of morality is basically that we're humans, and humans need each other, so we worked out ways to help one another out. Our minds were shaped by the same evolutionary processes, so we can agree for the most part. We've always seemed to treat those in our in-group the same way; it's just that those we included in the in-group changed. Slowly, women were added, and people of different races/religions, etc.
hankx7787 · 2 karma · 12y
See this comment regarding this common confusion about 'objective'...
thomblake · 1 karma · 12y
It's a sticky business, and different ethicists will frame the words in different ways. On one view, objective includes "It's true even if you disagree" and subjective includes "You can make up whatever you want". On another, objective includes "It's the same for everybody" and subjective includes "It's different for different people". The first distinction better matches the usual meaning of 'objective', and the second better matches the usual meaning of 'subjective', so I think the terms were just poorly chosen as different sides of a distinction. Because of this, my intuition these days is to say that ethics is both subjective and objective, or "subjectively objective" as Eliezer has said about probability. Though I'd like it if we switched to using "subject-sensitive" rather than "subjective", as is now commonly used in Epistemology.
TheOtherDave · 2 karma · 12y
So, this isn't the first time I've seen this distinction made here, and I have to admit I don't get it.

Suppose I'm studying ballistics in a vacuum, and I'm trying to come up with some rules that describe how projectiles travel, and I discover that the trajectory of a projectile depends on its mass. I suppose I could conclude that ballistics is "subjectively objective" or "subject-sensitive," since after all the trajectory is different for different projectiles. But this is not at all a normal way of speaking or thinking about ballistics. What we normally say is that ballistics is "objective" and it just so happens that the proper formulation of objective ballistics takes projectile mass as a parameter. Trajectory is, in part, a function of mass.

When we say that ethics is "subject-sensitive" -- that is, that what I ought to do depends on various properties of me -- are we saying it's different from the ballistics example? Or is this just a way of saying that we haven't yet worked out how to parametrize our ethics to take into account differences among individuals?

Similarly, while we acknowledge that the same projectile will follow a different trajectory in different environments, and that different projectiles of the same mass will follow different trajectories in different environments, we nevertheless say that ballistics is "universal", because the equations that predict a trajectory can take additional properties of the environment and the projectile as parameters. Trajectory is, in part, a function of environment.

When we say that ethics is not universal, are we saying it's different from the ballistics example? Or is this just a way of saying that we haven't yet worked out how to parametrize our ethics to take into account differences among environments?
drethelin · 0 karma · 12y
I think it's an artifact of how we think about ethics. It doesn't FEEL like a bullet should fly the same exact way as an arrow or as a rock, but when you feel your moral intuitions they seem like they should obviously apply to everyone. Maybe because we learn about throwing things and motion through infinitely iterated trial and error, but we learn about morality from simple commands from our parents/teachers, we think about them in different ways.
TheOtherDave · 2 karma · 12y
So, I'm not quite sure I understood you, but you seem to be explaining how someone might come to believe that ethics are universal/objective in the sense of right action not depending on the actor or the situation at all, even at relatively low levels of specification like "eat more vegetables" or whatever. Did I get that right? If so... sure, I can see where someone whose moral intuitions primarily derive from obeying the commands of others might end up with ethics that work like that.
hankx7787 · 0 karma · 12y
"the proper formulation of objective ballistics takes projectile mass as a parameter" I think the best analogy here is to say something like, the proper formulation of decision theory takes terminal values as a parameter. Decision theory defines a "universal" optimum (that is, universal "for all minds"... presumably anyway), but each person is individually running a decision theory process as a function of their own terminal values - there is no "universal" terminal value, for example if I could build an AI then I could theoretically put in any utility function I wanted. Ethics is "universal" in the sense of optimal decision theory, but "person dependent" in the sense of plugging in one's own particular terminal values - but terminal values and ethics are not necessarily "mind-dependent", as explained here.
TheOtherDave · 0 karma · 12y
I would certainly agree that there is no terminal value shared by all minds (come to that, I'm not convinced there are any terminal values shared by all of any given mind). Also, I would agree that when figuring out how I should best apply a value-neutral decision theory to my environment I have to "plug in" some subset of information about my own values and about my environment. I would also say that a sufficiently powerful value-neutral decision theory instructs me on how to optimize any environment towards any value, given sufficiently comprehensive data about the environment and the value. Which seems like another way of saying that decision theory is objective and universal, in the same sense that ballistics is. How that relates to statements about ethics being universal,objective, person-dependent, and/or mind-dependent is not clear to me, though, even after following your link.
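One way to picture the distinction the two commenters are circling: the maximization step is identical for every agent, while the utility function is supplied per agent. A toy Python sketch, with invented actions, payoffs, and utility functions:

    # The same value-neutral choice procedure, parameterized by different
    # terminal values, endorses different actions for different agents.
    def best_action(outcomes, utility):
        # Pick the action whose outcome maximizes the supplied utility function.
        return max(outcomes, key=lambda action: utility(outcomes[action]))

    outcomes = {  # hypothetical consequences of each action
        "write_post": {"karma": 10, "free_hours": -2},
        "lurk":       {"karma": 0,  "free_hours": 0},
    }

    karma_lover = lambda o: o["karma"] + 0.5 * o["free_hours"]
    busy_person = lambda o: o["karma"] + 10 * o["free_hours"]

    print(best_action(outcomes, karma_lover))  # write_post
    print(best_action(outcomes, busy_person))  # lurk

In the ballistics analogy, best_action plays the role of the equations and the utility function plays the role of the projectile's mass: objective machinery, per-subject parameter.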
hankx7787 · 0 karma · 12y
Surprisingly, this isn't a bad short explanation of her ethics. I've been reading a lot of Aristotle lately (I highly recommend Aristotle by Randall, for anyone who is into that kind of thing), and Rand mostly just brought Aristotle's philosophy into the 20th century - though note that it's now the 21st century, so she is a little dated at this point. For example, Rand was offered fully paid-for cryonics by various people when she was close to death, but for unknown reasons she declined, very sadly (if you're looking for someone to take her philosophy into the 21st century, you will need to talk to, well... ahem... me). It's important to mention that politics is only one dimension of her philosophy and of her writing (although, naturally, it's the subject that all the pundits and mind-killed partisans obsess over) - and really it is the least important, since it is the most derivative of all her other, more fundamental philosophical ideas on metaphysics, epistemology, man's nature, and ethics.
OrphanWilde · 1 karma · 12y
I'll willingly confess to not being interested in Aristotle in the least. Philosophy coursework cured me of interest in Greek philosophy. Give me another twenty years and I might recover from that. Have you read TVTropes' assessment of Objectivism? It's actually the best summary I've ever read, as far as the core of the philosophy goes.
hankx7787 · 0 karma · 12y
No I haven't! That was quite good, thanks. By the way, I fully share your (and Eliezer's) sentiment in regard to academic philosophy. I took a "philosophy of mind" course in college, thinking that would be extremely interesting, and I ended up dropping the class in short order. It was only after a long study of Rand that I ever became interested in philosophy again, once I realized I had a sane basis on which to proceed.
ViEtArmis · 2 karma · 12y
Specifically, her non-fiction work (if you find that sort of thing palatable) provides a lot more concrete discussion of her philosophy. Unfortunately, Ayn Rand is a little too... abrasive... for many people who don't agree entirely with her. She has a lot of resonant points that get rejected because of all the other stuff she presents along with them.
Solvent · 1 karma · 12y
I wonder why it is that so many people get here from TV Tropes. Also, you're not the only one to give up on their first LW account.
shokwave · 7 karma · 12y
Possibly: TV Tropes approaches fiction the way LessWrong approaches reality.
Solvent · 0 karma · 12y
How do you mean?
OnTheOtherHandle · 0 karma · 12y
At a guess, I would say: looking for recurring patterns in fiction, and extrapolating principles/tropes. It's a very bottom-up approach to literature, taking special note of subversions, inversions, aversions, etc., as opposed to the more top-down academic study of literature that loves to wax poetic about "universal truths" while ignoring large swaths of stories (such as Sci Fi and Fantasy) that don't fit into their grand model. Quite frankly, from my perspective, it seems they tend to force a lot of stories into their preferred mold, falling prey to True Art tropes.
A1987dM · 3 karma · 12y
Because it uses as many examples from HP:MoR as it possibly could?
Jayson_Virissimo · 1 karma · 12y
Welcome to Less Wrong! I would say something about a rabbit hole but it would be pointless, since you already seem to be descending at quite a high rate of speed.
MBlume · 0 karma · 12y
We seem to have a lot of Airbender fans here at LW -- Alicorn was the one who started me watching it, and I know SarahC and rubix are fans. Welcome =)
RobertLumley · -1 karma · 12y
Did you see Brave? I thought it was great.
OnTheOtherHandle · 0 karma · 12y
I did. :) I was so happy to see a mother-daughter movie with no romantic angle (other than the happily married king and queen).
RobertLumley · 0 karma · 12y
I thought she was going to have to end up married at the end and I was so. angry. Brave ranked up there with Mulan in terms of kids movies that I think actually teach kids good lessons, which is a pretty high honor in my book.

Personally, for their first female protagonist, I felt like Pixar could have done a lot better than a Rebellious Princess. It's cliche, and I would have liked to see them exercise more creativity, but besides that, I think the instructive value is dubious. Yes, it's awfully burdensome to have one's life direction dictated to an excessive degree by external circumstances and expectations. But on the other hand, Rebellious Princesses, including Merida, tend to rail against the unfairness of their circumstances without stopping to consider that they live in societies where practically everyone has their lives dictated by external circumstances, and there's no easy transition to a social model that allows differently.

Merida wants to live a life where she's free to pursue her love of archery and riding, and get married when and to whom she wants? Well she'd be screwed if she were a peasant, since all the necessary house and field work wouldn't leave her with the time, her family wouldn't own a horse, unless it was a ploughhorse she wouldn't be able to take out for pleasure riding, and she'd be married off at an early age out of economic rather than political necessity. And she'd be sim...

Bugmaster · 5 karma · 12y
I thought that Brave was actually a somewhat subversive movie -- perhaps inadvertently so. The movie is structured and presented in a way that makes it look like the standard Rebellious Princess story, with the standard feminist message. The protagonist appears to be a girl who overcomes the Patriarchy by transgressing gender norms, etc. etc. This is true to a certain extent, but it's not the main focus of the movie. Instead, the movie is, at its core, a very personal story of a child's relationship with her parent, the conflict between love and pride, and the difference between having good intentions and being able to implement them into practice. By the end of the movie, both Merida and her mother undergo a significant amount of character development. Their relationship changes not because the social order was reformed, or because gender norms were defeated -- but because they have both grown as individuals. Thus, Brave ends up being a more complex (and IMO more interesting) movie than the standard "Rebellious Princess" cliche would allow. In Brave, there are no clear villains; neither Merida nor her mother are wholly in the right, or wholly in the wrong. Contrast this with something like Disney's Rapunzel, where the mother is basically a glorified plot device, as opposed to a full-fledged character.
wedrifid · 1 karma · 12y
How boring. Were there at least some monsters to fight, or an overtly evil usurper to slay? What on earth remains as motivation to watch this movie?
Alicorn · 1 karma · 12y
The antagonist is the rapey cultural artifact of forced marriage. Vg vf fynva.
wedrifid · 3 karma · 12y
There should be a word for forcing other people to have sex (with each other, not yourself). The connotations of calling a forced arranged marriage 'rapey' should be offensive to the victims. It is grossly unfair to imply that the wife is a 'rapist' just because her husband's father forced his son to marry her for his family's political gain. (Or vice-versa.)
Alicorn · 1 karma · 12y
I wasn't specifying who was being rapey. Just that the entire setup was rapey.
wedrifid · 4 karma · 12y
That was clear, and my reply applies. (The person to whom the term applies is the person who forces the marriage. Rape(y/ist) would also apply if that person was also a participant in the marriage.)
Bugmaster · 2 karma · 12y
As per my post above, I'd argue that the "rapey cultural artifact of forced marriage" is less of a primary antagonist, and more of a bumbling comic relief character.
wedrifid · 0 karma · 12y
Cute rot13. I never would have predicted that in a Pixar animation!
Desrtopa · 0 karma · 12y
There is an evil monster to fight, of a more literal sort, but it would be a bit of a stretch to call it the primary antagonist.
Vaniver · 4 karma · 12y
Upvoted. My thoughts on Brave are over here, but basically Merida is actually a really dark character, and it's sort of sickening that she gets away with everything she does. Interesting enough to repeat is my suggestion for a better setting: Of course, it's difficult to make a movie glorifying sweatshop labor, whereas princesses are distant enough to be a tame example.
OnTheOtherHandle · 1 karma · 12y
I understand your critique, and I mostly agree with it. I actually would have been even happier if Merida had bitten the bullet and married the winner - but for different reasons. She would have married because she loved her mother and her kingdom, and understood that peace must come at a cost - it would still very much count as a movie with no romantic angle. She would have been like Princess Yue in Avatar, a character I had serious respect for. When Yue was willing to marry Han for duty, and then was willing to fnpevsvpr ure yvsr gb orpbzr gur zbba, that was the first time I said to myself, "Wow, these guys really do break convention." Merida would have been a lot more brave to accept the dictates of her society (but for the right reasons), or to find a more substantial compromise than just convincing the other lords to yrg rirelbar zneel sbe ybir. But I still think it was a sweet movie.
Desrtopa · 2 karma · 12y
I agree that it was a sweet movie, and overall I enjoyed watching it. The above critique is a lot harsher than my overall impression. But when I heard that Pixar was making their first movie with a female lead, I expected a lot out of them and thought they were going to try for something really exceptional in both character and message, and it ended up undershooting my expectations on those counts. I can sympathize with the extent to which simply having competent important female characters with relatable goals is a huge step forward for a lot of works. Ironically, I don't think I really grasped how frustrating the lack of them must be until I started encountering works which are supposed to be some sort of wish fulfillment for guys. There are numerous anime and manga, particularly harem series, which are full of female characters graced with various flavors of awesomeness, without any significant male protagonists other than the lead who's a total loser, and I find it infuriating when the closest thing I have to a proxy in the story is such a lousy and overshadowed character. It wasn't until I started encountering works like those that it hit me how painful it must be to be hard pressed to find stories that aren't like that on some level.
OnTheOtherHandle · 4 karma · 12y
One thing that disappointed me about this whole story was that it was the one and only Pixar movie that was set in the past. Pixar has always been about sci fi, not fantasy, and its works have been set in contemporary America (with Magic Realism), alternate universes, or the future. Did "female protagonist" pattern-match so strongly with "rebellious medieval princess" that even Pixar didn't do anything really unusual with it? Even though I was happy Merida wasn't rebelling because of love, it seems like they stuck with the standard old-fashioned feminist story of resisting an arranged marriage, when they could have avoided all of that in a work set in the present or the future, when a woman would have more scope to really be brave. All in all, it seems like their father-son movie was a lot stronger than their mother-daughter movie.
Nornagest · 1 karma · 12y
I don't think "This Loser Is You" is the right trope for that. Actually, I don't think TV Tropes has the right trope for that; as best I can tell, harem protagonists are the way they are not because they're supposed to stand for the audience in a representative sort of way but because they're designed as a receptacle for the audience to pour their various insecurities into. They can display negative traits, because that's assumed to make them more sympathetic to viewers that share them. But they can't display negative traits strong enough to be grounds for actual condemnation, or to define their characters unambiguously; you'll never see Homer Simpson as a harem lead. And they can't show positive traits except for a vague agreeableness and whatever supernatural powers the plot requires, because that breaks the pathos. Yes, Tenchi Muyo, that's you I'm looking at. More succinctly, we're all familiar with sex objects, right? Harem anime protagonists are sympathy objects.
Desrtopa · 0 karma · 12y
I agree that This Loser Is You isn't quite the right trope. There's a more recent launch, Loser Protagonist, which doesn't quite describe it either, but uses the same name as I did when I tried to put the trope which I thought accurately described it through the YKTTW ages ago. If I understand what you mean by "sympathy objects," I think we have the same idea in mind. I tend to think of them as Lowest Common Denominator Protagonists, because they lack any sort of virtue or achievement that would alienate them from the most insecure or insipid audience members.
RobertLumley · 1 karma · 12y
That's a very fair critique. A few things though: First, you might want to put that in ROT13 or add a [SPOILER](http://lh5.ggpht.com/_VZewGVtB3pE/S5C8VF3AgJI/AAAAAAAAAYk/5LJdTCRCb8k/eliezer_yudkowskyjpg_small.jpg) tag or something. Merida learned to value her relationship with her mother, which I think a lot of kids need to hear going into adolescence. When you put it this way it doesn't seem nearly as trite as your phrasing makes it sound. Well yeah, but the answer to "society sucks and how can I fix it" isn't "oh it sucks for everyone and even more for others, I'll just sit down and shut up". (Not that you argue it is.) From TV Tropes: This is exactly why I thought Brave was good - it moved away from this trope. It wasn't "I don't love this person, I love this other person!", it was "I don't have to love/marry someone to be a competent and awesome person". She was the hero of her own story, and didn't need anyone else to complete her. That doesn't have to be true for everyone, but the counterpoint needs to be more present in society. And I said it ranked up there. Not that it passed Mulan. :) And it gets that honor by being literally one of the two movies I can think of that has a positive message in this respect. Although I will concede that I'm not very familiar with a particularly high number of kids' movies.
Desrtopa · 5 karma · 12y
I edited my comment to rot13 the ending spoilers; I left in the stuff that's more or less advertised as the premise of the movie. You might want to edit your reply so that it doesn't quote the uncyphered text. I think that's a valuable lesson, but I felt like Brave's presentation of it suffered for the fact that Merida and her mother really only reconcile after Merida essentially gets her way about everything. Teenagers who feel aggrieved in their relationships with their parents and think that they're subject to pointless unfairness are likely to come away with the lesson "I could get along so much better with my parents if they'd stop being pointlessly unfair to me!" rather than "Maybe I should be more open to the idea that my parents have legitimate reasons for not being accommodating of all my wishes, and be prepared to cut them some slack." A more well rounded version of the movie's approximate message might have been something like "Some burdensome social expectations and life restrictions have good reasons behind them and others don't, learn to distinguish between them so you can focus your effort on solving the right ones." But instead, it came off more like "Kids, you should love and appreciate your parents, at least when you work past their inclination to arbitrarily oppress you."
OnTheOtherHandle · 2 karma · 12y
Now that I think about it, very few movies or TV shows actually teach that lesson. There are plenty of works of fiction that portray the whiney teenager in a negative light, and there are plenty that portray the unreasonable parent in a negative light, but nothing seems to change. It all plays out with the boring inevitability of a Greek tragedy.

I'm Aaron Swartz. I used to work in software (including as a cofounder of Reddit, whose software powers this site) and now I work in politics. I'm interested in maximizing positive impact, so I follow GiveWell carefully. I've always enjoyed the rationality improvement stuff here, but I tend to find the lukeprog-style self-improvement stuff much more valuable. I've been following Eliezer's writing since before even the OvercomingBias days, I believe, but have recently started following LW much more carefully after a couple friends mentioned it to me in close succession.

I found myself wanting to post but don't have any karma, so I thought I'd start by introducing myself.

I've been thinking on-and-off about starting a LessWrong spinoff around the self-improvement stuff (current name proposal: LessWeak). Is anyone else interested in that sort of thing? It'd be a bit like the Akrasia Tactics Review, but applied to more topics.

Jayson_Virissimo · 8 karma · 12y
Instead of a spinoff, maybe Discussion should be split into more sections (one being primarily about instrumental rationality/self-help).
kilobug · 3 karma · 12y
Topic-related discussion sections seem a good idea to me. Some here may be interested in rationality/cognitive bias but not in AI, or not in space exploration, or not in cryonics... This would also allow lifting "bans" like "no politics", if such discussion stays in a dedicated section, not "polluting" those not interested in it.
Jayson_Virissimo · 0 karma · 12y
I endorse this idea.
ata · 5 karma · 12y
Yay, it is you! (I've followed your blog and your various other deeds on-and-off since 2002-2003ish and have always been a fan; good to have you here.)
Jonathan_Graehl · 3 karma · 12y
LessWeak - good idea. On the name: cute, but I imagine it getting old. Still, it's not as embarrassing as something unironically Courage Wolf, like 'LiveStrong'.
Emile · 3 karma · 12y
Welcome to LessWrong! Apparently I used to comment on your blog back in 2004 - my, how time flies!
[anonymous] · 0 karma · 11y
Reboot in peace, friend.

'Twas about time that I decided to officially join. I discovered LessWrong in the autumn of 2010, and so far I've felt reluctant to actually contribute -- most people here have far more illustrious backgrounds. But I figured that there are sufficiently few ways in which I could show myself as a total ignoramus in an intro post, right?

I don't consider my gender, age and nationality to be a relevant part of my identity, so instead I'd start by saying I'm INTP. Extreme I (to the point of schizoid personality disorder), extreme T. Usually I have this big internal conflict going on between the part of me that wishes to appear as a wholly rational genius and the other part, who has read enough psychology and LW (you guys definitely deserve credit for this) to know I'm bullshitting myself big time.

My educational background so far is modest, a fact for which procrastination is the main culprit. I'm currently working on catching up with high school level math... so far I've only reviewed trigonometry, so I'm afraid I won't be able to participate in more technical discussions around here. Aside from a few Khan Academy videos, I'm still ignorant about probability; I did try to solve that cancer ...

Swimmer963 (Miranda Dixon-Luinenburg) · 2 karma · 12y
Welcome! That's interesting... I don't think I've ever had someone respond to my pointing out flaws in this way. I've had people argue back plenty of times, but never tell me that we shouldn't be arguing about it. Can you give some examples of topics where this has happened? I would be curious what kind of topics engender this reaction in people.

I've seen this happen where one person enjoys debate/arguing and another does not. To one person it's an interesting discussion, and to the other it feels like a personal attack. Or, more commonly, I've seen onlookers get upset watching such a discussion, even if they don't personally feel targeted. Specifically, I'm remembering three men loudly debating about physics while several of their wives left the room in protest because it felt too argumentative to them.

Body language and voice dynamics can affect this a lot, I think - some people get loud and frowny when they're excited/thinking hard, and others may misread that as angry.

7Nornagest12y
I ended up having to include a disclaimer in the FAQ for an older project of mine, saying that the senior staff tends to get very intense when discussing the project and that this doesn't indicate drama on our part but is actually friendly behavior. That was a text channel, though, so body dynamics and voice wouldn't have had anything to do with it. I think a lot of people just read any intense discussion as hostile, and quality of argument doesn't really enter into it -- probably because they're used to an arguments-as-soldiers perspective.

We used to say of two friends of mine that "They don't so much toss ideas back and forth as hurl sharp jagged ideas directly at one another's heads."

7gwern12y
--Steven Erikson, House of Chains (2002)
7Dahlen12y
Oh, it's not a topic-specific behavior. Whenever I go too far down a chain of reasoning ("too far" meaning as few as three causal relationships), people start complaining that I'm giving too much thought to it, and imply they are unable to follow the arguments. I'm just not surrounded by a lot of people that like long and intricate discussions. (Funnily, both my parents are the type that get tired listening to complex reasoning, and I turned out the complete opposite.)
8Swimmer963 (Miranda Dixon-Luinenburg) 12y
That is...intensely frustrating. I've had people tell me that "well, I find all the points you're trying to make really complicated, and it's easier for me to just have faith in God" or that kind of thing, but I've never actually been rebuked for applying an analytical mindset to discussions. Props on having acquired those habits anyway, in spite of what sounds like an unfruitful starting environment!
1Dahlen12y
Thanks! Anyway, there's the internet to compensate for that. The wide range of online forums built around ideas of varied intellectual depth means you even get to choose your difficulty level...
3Davidmanheim12y
This happens frequently in places where reasoning is suspect, or not valued. Kids in poor areas with few scholastic or academic opportunities find more validation in pursuits that are non-academic, and they tend to deride logic. It's parodied well by Colbert, but it's not uncommon. I just avoid those people, and now know few of them. Most of the crowd here, I suspect, is in a similar position.
0Swimmer963 (Miranda Dixon-Luinenburg) 12y
I may be in a similar position of never having known anyone who was like this. Also, I'm very conflict-averse myself (though I like discussing things), so any discussion I start is less likely to have any component of raised voices or emotional involvement that could make it sound like an argument.
1Davidmanheim12y
The best way to get good at some particular type of math, or programming, or any skill, in my experience, is to put yourself in a position where you need to do it for something. Find a job that requires you to do a bit of programming, or pick a task that requires it. Spend time on it, and you'll learn a bit. Then go back, realize you missed some basics, and pick them up. Oh, and read a ton. You're interested in a lot of things, and trying to catch up with what you feel you should know, which is wonderful. What do you do with your time? Are you working? College?
4Dahlen12y
I prefer the practice-based approach too, but from my position theoretical approaches are cheaper and much more available, if slower and rather tedious. In school they taught us that the only way to get better in an area is to do extra homework, and frankly my methods haven't improved much since. My usual way is to take an exercise book and solve everything in it, if that counts for practice; other than that, I only have the internet and a very limited budget. Senior year in high school. Right now I have 49 vacation days left, after which school will start, studying will get replaced with busywork and my learning rates will have no choice but to fall dramatically. So now I'm trying to maximize studying time while I still can... It's all kind of backwards, isn't it?
2Davidmanheim12y
Where you go to college and the amount of any scholarships you get are a bigger deal for your long-term personal growth than any of the specific subjects you will learn right now. In the spirit of long-term decision making, figure out where you want to go to college, or what your options are, and spend the summer maximizing the odds of getting into your first-choice schools. I cannot imagine that it won't be a better investment of your time than any one subject you are studying (unless you are preparing for the SAT or some such test). So I guess you should spend the summer on Khan Academy, and learning and practicing vocabulary to get better at taking the tests that will get you into a great college, where your opportunities to learn are greatly expanded.
4Dahlen12y
I'm afraid all of this is not really applicable to me... My country isn't Western enough for such a wide range of opportunities. Here, institutes for higher education range from almost acceptable (state universities) to degree factories (basically all private colleges). Studying abroad in a Western country costs, per semester, somewhere between half and thrice my parents' yearly income. On top of everything, my grades would have to be impeccable and my performances worthy of national recognition for a foreign college to want me as a student so much as to step over the money issue and cover my whole tuition. (They're not, not by a long shot.) Thanks for the support, in any case...

I've commented infrequently, but never did one of these "Welcome!" posts.

Way back in the Overcoming Bias days, my roommate raved constantly about the blog and Eliezer Yudkowsky in particular. I pattern-matched his behaviour to being in a cult, and moved on with my life. About two years later (?), a common friend of ours recommended Harry Potter and the Methods of Rationality, which I then read, which brought me to Less Wrong, reading the Sequences, etc. About a year later, I signed up for cryonics with Alcor, and I now give more than my former roommate to the Singularity Institute. (He is very amused by this.)

I spend quite a bit of time working on my semi-rationalist fanfic, My Little Pony: Friendship is Optimal, which I'll hopefully release on a timeframe of a few months. (I previously targeted releasing this damn thing for April, but... planning fallacy. I've whittled my issue list down to three action items, though, and it's been through its first bout of prereading.)

My Little Pony: Friendship is Optimal

Want.

2maia12y
Could I convince you to perhaps post on the weekly rationality diaries about progress, or otherwise commit yourself, or otherwise increase the probability that you'll put this fic up soon? :D

Hi! I got here from reading Harry Potter and the Methods of Rationality, which I think I found on TV Tropes. Once I ran out of story to catch up on, I figured I'd start investigating the source material.

I've read a couple of sequences, but I'll hold off on commenting much until I've gotten through more material. (Especially since the quality of discussions in the comment sections is so high.) Thanks for an awesome site!

Hi All,

I'm Will Crouch. Other than one other, this is my first comment on LW. However, I know and respect many people within the LW community.

I'm a DPhil student in moral philosophy at Oxford, though I'm currently visiting Princeton. I work on moral uncertainty: on whether one can apply expected utility theory in cases where one is uncertain about what is of value, or what one ought to do. It's difficult to do so, but I argue that you can.

I got to know people in the LW community because I co-founded two organisations, Giving What We Can and 80,000 Hours, dedicated to the idea of effective altruism: that is, using one's marginal resources in whatever way the evidence supports as doing the most good. A lot of LW members support the aims of these organisations.

I wouldn't call myself a 'rationalist' without knowing a lot more about what that means. I do think that Bayesian epistemology is the best we've got, and that rational preferences should conform to the von Neumann-Morgenstern axioms (though I'm uncertain - there are quite a lot of difficulties for that view). I think that total hedonistic utilitarianism is the most plausible moral theory, but I'm extremely uncertain in that conclusion, partly on the basis that most moral philosophers and other people in the world disagree with me. I think that the more important question is what credence distribution one ought to have across moral theories, and how one ought to act given that credence distribution, rather than what moral theory one 'adheres' to (whatever that means).

9MixedNuts11y
Pretense that this comment has a purpose other than squeeing at you like a 12-year-old fangirl: what arguments make you prefer total utilitarianism to average?
9wdmacaskill11y
Haha! I don't think I'm worthy of squeeing, but thank you all the same. In terms of the philosophy, I think that average utilitarianism is hopeless as a theory of population ethics. Consider the following case:

Population A: 1 person exists, with a life full of horrific suffering. Her utility is -100.

Population B: 100 billion people exist, each with lives full of horrific suffering. Each of their utility levels is -99.9.

Average utilitarianism says that Population B is better than Population A. That definitely seems wrong to me: bringing into existence people whose lives aren't worth living just can't be a good thing.
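To make the arithmetic explicit (a quick check using the numbers as given in the comment above):

$$\bar{u}_A = -100, \qquad \bar{u}_B = \frac{10^{11} \times (-99.9)}{10^{11}} = -99.9 > -100 = \bar{u}_A,$$

so the average view ranks Population B above Population A, despite B containing a hundred billion lives of horrific suffering rather than one.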
0A1987dM11y
That's not obvious to me. IMO, the reason why in the real world “bringing into existence people whose lives aren't worth living just can't be a good thing” is that they consume resources that other people could use instead; but if in the hypothetical you fix the utility of each person by hand, that doesn't apply to the hypothetical. I haven't thought about these things that much, but my current position is that average utilitarianism is not actually absurd -- the absurd results of thought experiments are due to the fact that those thought experiments ignore the fact that people interact with each other.
1Pablo11y
I don't understand your comment. Average utilitarianism implies that a world in which lots and lots of people suffer a lot is better than a world in which a single individual suffers just a little bit more. If you don't think that such a world would be better, then you must agree that average utilitarianism is false.

Here's another, even more obviously decisive, counterexample to average utilitarianism. Consider a world A in which people experience nothing but agonizing pain. Consider next a different world B which contains all the people in A, plus arbitrarily many more people all experiencing pain only slightly less intense. Since the average pain in B is less than the average pain in A, average utilitarianism implies that B is better than A. This is clearly absurd, since B differs from A only in containing a surplus of agony.
0A1987dM11y
I do think that the former is better (to the extent that I can trust my intuitions in a case that different from those in their training set).
7wdmacaskill11y
Interesting. The deeper reason why I reject average utilitarianism is that it makes the value of lives non-separable. "Separability" of value just means being able to evaluate something without having to look at anything else. I think that whether or not it's a good thing to bring a new person into existence depends only on facts about that person (assuming they don't have any causal effects on other people): the amount of their happiness or suffering. So, in deciding whether to bring a new person into existence, it shouldn't be relevant what happened in the distant past. But average utilitarianism makes it relevant: because long-dead people affect the average wellbeing, and therefore affect whether it's good or bad to bring that person into existence.

But let's return to the intuitive case above, and make it a little stronger. Now suppose:

Population A: 1 person suffering a lot (utility -10).

Population B: That same person, suffering an arbitrarily large amount (utility -n, for any arbitrarily large n), and a very large number, m, of people suffering -9.9.

Average utilitarianism entails that, for any n, there is some m such that Population B is better than Population A. I.e. average utilitarianism is willing to add horrendous suffering to someone's already horrific life, in order to bring into existence many other people with horrific lives. Do you still get the intuition in favour of average here?
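Spelling out the claim above, under the numbers given (B contains the original person at -n plus m people at -9.9):

$$\bar{u}_A = -10, \qquad \bar{u}_B = \frac{-n - 9.9\,m}{m+1} \longrightarrow -9.9 \quad (m \to \infty),$$

so for any fixed n, choosing m large enough pushes B's average above -10; a little algebra shows any m > 10(n - 10) suffices, and average utilitarianism then prefers B.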
4TorqueDrifter11y
Suppose your moral intuitions cause you to evaluate worlds based on your prospects as a potential human - as in, in pop A you will get utility -10, in pop B you get an expected (1/m)(-n) + ((m-1)/m)(-9.9). These intuitions could correspond to a straightforward "maximize expected utility of 'being someone in this world'", or something like "suppose all consciousness is experienced by a single entity from multiple perspectives, completing all lives and then cycling back again from the beginning, maximize this being's utility". Such perspectives would give the "non-intuitive" result in these sorts of thought experiments.
2TorqueDrifter11y
Hm, a downvote. Is my reasoning faulty? Or is someone objecting to my second example of a metaphysical stance that would motivate this type of calculation?
0MugaSofer11y
Perhaps people simply objected to the implied selfish motivations.
3TorqueDrifter11y
Perhaps! Though I certainly didn't intend to imply that this was a selfish calculation - one could totally believe that the best altruistic strategy is to maximize the expected utility of being a person.
1A1987dM11y
Once you make such an unrealistic assumption, the conclusions won't necessarily be realistic. (If you assume water has no viscosity, you can conclude that it exerts no drag on stuff moving in it.) In particular, ISTM that as long as my basic physiological needs are met, my utility almost exclusively depends on interacting with other people, playing with toys invented by other people, reading stuff written by other people, listening to music by other people, etc.
0drnickbone11y
When discussing such questions, we need to be careful to distinguish the following:

1. Is a world containing population B better than a world containing population A?

2. If a world with population A already existed, would it be moral to turn it into a world with population B?

3. If Omega offered me a choice between a world with population A and a world with population B, and I had to choose one of them, knowing that I'd live somewhere in the world, but not who I'd be, would I choose population B?

I am inclined to give different answers to these questions. Similarly for Parfit's repugnant conclusion; the exact phrasing of the question could lead to different answers.

Another issue is background populations, which turn out to matter enormously for average utilitarianism. Suppose the world already contains a very large number of people with average utility 10 (off in distant galaxies, say) and call this population C. Then the combination of B+C has lower average utility than A+C, and gets a clear negative answer on all the questions, so matching your intuition. I suspect that this is the situation we're actually in: a large, maybe infinite, population elsewhere that we can't do anything about, and whose average utility is unknown. In that case, it is unclear whether average utilitarianism tells us to increase or decrease the Earth's population, and we can't make a judgement one way or another.
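A minimal numeric check of the background-population claim above; the size and average utility of C are assumed here purely for illustration:

```python
# Population A: 1 person at utility -100; Population B: 100 billion at -99.9
# (the numbers from the earlier example). Background population C is assumed.
N_B = 10**11          # size of population B
N_C = 10**15          # assumed size of distant background population C
u_C = 10.0            # assumed average utility in C

avg_A_plus_C = (N_C * u_C + 1 * (-100.0)) / (N_C + 1)
avg_B_plus_C = (N_C * u_C + N_B * (-99.9)) / (N_C + N_B)

print(avg_A_plus_C)   # ~10.0 (a hair below)
print(avg_B_plus_C)   # ~9.989, lower than avg(A+C)
assert avg_B_plus_C < avg_A_plus_C   # B+C is worse on the average view
```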
-1MugaSofer11y
While I am not an average utilitarian (I think), a world containing only one person suffering horribly does seem kinda worse.
0Pablo11y
Both worlds contain people "suffering horribly".
-1MugaSofer11y
One world contains people suffering horribly. The other contains a person suffering horribly. And no-one else.
0Pablo11y
So, the difference is that in one world there are many people, rather than one person, suffering horribly. How on Earth can this difference make the former world better than the latter?!
-2MugaSofer11y
Because it doesn't contain anyone else. There's only one human left and they're "suffering horribly".
0Pablo11y
Suppose I publicly endorse a moral theory which implies that the more headaches someone has, the better the world becomes. Suppose someone asks me to explain my rationale for claiming that a world that contains more headaches is better. Suppose I reply by saying, "Because in this world, more people suffer headaches." What would you conclude about my sanity?
-1MugaSofer11y
Most people value humanity's continued existence.
6Nisan11y
I'm glad you're here! Do you have any comments on Nick Bostrom and Toby Ord's idea for a "parliamentary model" of moral uncertainty?
7wdmacaskill11y
Thanks! Yes, I'm good friends with Nick and Toby. My view on their model is as follows.

Sometimes intertheoretic value comparisons are possible: that is, we can make sense of the idea that the difference in value (or wrongness) between two options A and B on one moral theory is greater, lesser, or equal to the difference in value (or wrongness) between two options C and D on another moral theory. So, for example, you might think that killing one person in order to save a slightly less happy person is much more wrong according to a rights-based moral view than it is according to utilitarianism (even though it's wrong according to both theories). If we can make such comparisons, then we don't need the parliamentary model: we can just use expected utility theory.

Sometimes, though, it seems that such comparisons aren't possible. E.g. I add one person whose life isn't worth living to the population. Is that more wrong according to total utilitarianism or average utilitarianism? I have no idea. When such comparisons aren't possible, then I think that something like the parliamentary model is the right way to go. But, as it stands, the parliamentary model is more of a suggestion than a concrete proposal. In terms of the best specific formulation, I think that you should normalise incomparable theories at the variance of their respective utility functions, and then just maximise expected value. Owen Cotton-Barratt convinced me of that!

Sorry if that was a bit of a complex response to a simple question!
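A minimal sketch of what variance normalisation might look like in practice; this is a guess at the procedure, not wdmacaskill's or Cotton-Barratt's actual formulation, and the options, utility numbers, and credences below are all made up:

```python
import numpy as np

# Rows: options. Columns: moral theories (e.g. total vs. average utilitarianism).
options = ["add_person", "dont_add_person", "status_quo"]
U = np.array([
    [ 1.0, -5.0],   # hypothetical utilities under each theory
    [ 0.0,  0.0],
    [-1.0,  2.0],
])
credences = np.array([0.6, 0.4])   # credence in each theory; sums to 1

# Normalise each theory's utility function to zero mean and unit variance
# over the option set, so that no theory dominates merely because its
# numbers happen to be on a larger scale.
U_norm = (U - U.mean(axis=0)) / U.std(axis=0)

# Expected choice-worthiness of each option under moral uncertainty.
expected = U_norm @ credences
print(options[int(np.argmax(expected))])   # -> "add_person" with these numbers
```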
3beoShaffer11y
Hi Will, I think most LWers would agree that "Anyone who tries to practice rationality as defined on Less Wrong" is a passable description of what we mean by 'rationalist'.
3wdmacaskill11y
Thanks for that. I guess that means I'm not a rationalist! I try my best to practice (1). But I only contingently practice (2). Even if I didn't care one jot about increasing happiness and decreasing suffering in the world, I think I still ought to increase happiness and decrease suffering. I.e. I do what I do not because it's what I happen to value, but because I think it's objectively valuable (and if you value something else, like promoting suffering, then I think you're mistaken!) That is, I'm a moral realist. Whereas the definition given in Eliezer's post suggests that being a rationalist presupposes moral anti-realism. When I talk with other LW-ers, this often seems to be a point of disagreement, so I hope I'm not just being pedantic!
7thomblake11y
Not at all. (Eliezer is a sort of moral realist). It would be weird if you said "I'm a moral realist, but I don't value things that I know are objectively valuable". It doesn't really matter whether you're a moral realist or not - instrumental rationality is about achieving your goals, whether they're good goals or not. Just like math lets you crunch numbers, whether they're real statistics or made up. But believing you shouldn't make up statistics doesn't therefore mean you don't do math.
0Pablo11y
Could you provide a link to a blog post or essay where Eliezer endorses moral realism? Thanks!
1thomblake11y
Sorting Pebbles Into Correct Heaps notes that 'right' is the same sort of thing as 'prime' - it refers to a particular abstraction that is independent of anyone's say-so. Though Eliezer is also a sort of moral subjectivist; if we were built differently, we would be using the word 'right' to refer to a different abstraction. Really, this is just shoehorning Eliezer's views into philosophical debates that he isn't involved in.
0somervta11y
"It doesn't really matter whether you're a moral realist or not - instrumental rationality is about achieving your goals, whether they're good goals or not." It seems to me that moral realism is an epistemic claim - it is a statement about how the world is - or could be - and that is definitely a matter that impinges on rationality.
0Kindly11y
This seems to be similar to Eliezer's beliefs. Relevant quote from Harry Potter and the Methods of Rationality:
0somervta11y
I don't think that's what Harry is saying there. Your quote from HPMOR seems to me to be more about the recognition that moral considerations are only one aspect of a decision-making process (in humans, anyway), and that just because that is true doesn't mean that moral considerations won't have an effect.

Hello, everyone!

I'd been religious (Christian) my whole life, but was always plagued with the question, "How would I know this is the correct religion, if I'd grown up with a different cultural norm?" I concluded, after many years of passive reflection, that, no, I probably wouldn't have become Christian at all, given that there are so many good people who do not. From there, I discovered that I was severely biased toward Christianity, and in an attempt to overcome that bias, I became atheist before I realized it.

I know that last part is a common idiom that's usually hyperbole, but I really did become atheist well before I consciously knew I was. I remember reading HPMOR, looking up lesswrong.com, reading the post on "Belief in Belief", and realizing that I was doing exactly that: explaining an unsupported theory by patching the holes, instead of reevaluating and updating, given the evidence.

It's been more than religion, too, but that's the area where I really felt it first. Next projects are to apply the principles to my social and professional life.

-3jacoblyles12y
Welcome! The least attractive thing about the rationalist life-style is nihilism. It's there, it's real, and it's hard to handle. Eliezer's solution is to be happy and the nihilism will leave you alone. But if you have a hard life, you need a way to spontaneously generate joy. That's why so many people turn to religion as a comfort when they are in bad situations. The problem that I find is that all ways to spontaneously generate joy have some degree of mysticism. I'm looking into Tai Chi as a replacement for going to church. But that's still eastern mumbo-jumbo as opposed to western mumbo-jumbo. Stoicism might be the most rational joy machine I can find. Let me know if you ever un-convert.

The problem that I find is that all ways to spontaneously generate joy have some degree of mysticism.

What? What about all the usual happiness inducing things? Listening to music that you like; playing games; watching your favourite TV show; being with friends? Maybe you've ruled these out as not being spontaneous? But going to church isn't less effort than a lot of things on that list.

I suspect that a tendency towards mysticism just sort of spontaneously accretes onto anything sufficiently esoteric; you can see this happening over the last few decades with quantum mechanics, and to a lesser degree with results like Gödel's incompleteness theorems. Martial arts is another good place to see this in action: most of those legendary death touch techniques you hear about, for example, originated in strikes that damaged vulnerable nerve clusters or lymph nodes, leading to abscesses and eventually a good chance of death without antibiotics. All very explicable. But layer the field's native traditional-Chinese-medicine metaphor over that and run it through several generations of easily impressed students, partial information, and novelists without any particular incentive to be realistic, and suddenly you've got the Five-Point Palm Exploding Heart Technique.

So I don't think the mumbo-jumbo is likely to be strictly necessary to most eudaemonic approaches, Eastern or Western. I expect it'd be difficult to extract from a lot of them, though.

2Oligopsony12y
It would be difficult to do it on your own, but it's not very hard to find e.g. guides to meditation that have been bowdlerized of all the mysterious magical stuff.
0moocow145212y
Maybe it's incomprehensibility itself that makes some people happy? If you don't understand it, you don't feel responsible, and ignorance being bliss, all that weird stuff there is not your problem, and that's the end of it as far as your monkey bits are concerned.

Hello everyone,

Thought it was about time to do one of these since I've made a couple of comments!

My name's Carl. I've been interested in science and why people believe the strange things they believe for many years. I was raised Catholic but came to the conclusion around the age of ten that it was all a bit silly really, and as yet I have found no evidence that would cause me to update away from that.

I studied physics as an undergrad and switched to experimental psychology for my PhD, being more interested at that point in how people work than how the universe does. I started to study motor control and after my PhD and a couple of postdocs I know way more about how humans move their arms than any sane person probably should. I've worked in behavioural, clinical and computational realms, giving me a wide array of tools to use when analysing problems.

My current postdoc is coming to an end and a couple of months ago I was undergoing somewhat of a crisis. What was I doing, almost 31 and with no plan for my life? I realised that motor control had started to bore me but I had no real idea what to do about it. Stay in science, or abandon it and get a real job? That hurts after almost a de...

[-][anonymous]11y260

Greetings LWers,

I'm an aspiring Friendliness theorist, currently based at the Australian National University -- home to Marcus Hutter, Rachael Briggs and David Chalmers, amongst others -- where I study formal epistemology through the Ph.B. (Hons) program.

I wasn't always in such a stimulating environment -- indeed I grew up in what can only be deemed intellectual deprivation, from which I narrowly escaped -- and, as a result of my disregard for authority and disdain for traditional classroom learning, I am largely self-taught. Unlike most autodidacts, though, I never was a voracious reader; on the contrary, I barely opened books at all, instead preferring to think things over in my head. This has left me an ignorant person -- something I'm constantly striving to improve on -- but has also protected me from many diseased ideas and even allowed me to better appreciate certain notions by having to rediscover them myself. (Case in point: throughout my adolescence I took great satisfaction in analysing my mental mechanisms and correcting for what I now know to be biases, yet I never came across the relevant literature, essentially missing out on a wealth of knowledge.)

For a long time I've a...

Nice! What part of FAI interests you?

2[anonymous]11y
Too soon to say, as I discovered FAI a mere two months ago -- this, incidentally, could mean that it's a fleeting passion -- but CEV has definitely caught my attention, while the concept of a reflective decision theory I find really fascinating. The latter is something I've been curious about for quite some time, as plenty of moral precepts seem to break down once an agent -- even a mere homo sapiens -- reaches certain levels of self-awareness and, thus, is able to alter their decision mechanisms.
2Kawoomba11y
Isn't that a proper IQ test? At least it is where I live. Funny how we like to talk about things we're good at. The real test is "time from passing test to time you leave to save the yearly fee." That's awesome. Don't miss Marcus' lectures, such a sharp mind. Also, midi - Imperial March (used to be?) playing on his home page.
2[anonymous]11y
Yes and no; it's some version of the Cattell, but it's not administered individually, has a lowish ceiling and they don't reveal your exact result. For the record, you needn't join in order to take their heavily subsidised admission test.
0Kawoomba11y
Is your info Aussie-specific? (EDIT: We're not quite antipodes, but not far off, either) They did when I took it, ceiling 145, was administered in a group setting. 'Twas free even, in my case, some kind of promo action.
0[anonymous]11y
Yep I had Australia in mind, though it's by no means the only country where it works that way. Also, various national Mensa chapters have stopped releasing scores -- something to do with egalitarianism, go figure... -- and pardon my imprecise language, but by lowish I meant around 145 SD15. (didn't mean it in a patronising manner, it's just that plenty of tests have a ceiling of 160 SD15 and some, e.g. Stanford-Binet Form L-M, are employed even above that cutoff)
0Kawoomba11y
I do wonder whether someone who'd score, say, 155 on a test with a 160 ceiling would score 145 on a test with a 145 ceiling. You project an aura of knowledgeability on the subject, so I'll just go ahead and ask you. Consider yourself asked.
1[anonymous]11y
I'm afraid I'm not sufficiently knowledgeable to answer that and I have no intention of becoming one of those self-proclaimed internet experts! (plus the rest of the internet, outside of LW, already does a good enough job at spreading misinformation)
-1shminux11y
"machine/emergent intelligence theorist" would not box you in as much. Friendliness is only one model, you know, no matter how convincing it may sound.
2[anonymous]11y
"machine intelligence researcher" is also much more employable -- which isn't saying much.
6[anonymous]11y
One can signal differently to make oneself more palatable to different audiences and, indeed, "machine/emergent intelligence theorist" is less confining, while "machine intelligence researcher" is more suitable for academia or industry; here at LW, however, I needn't conceal my specific interests, which happen to be in AI safety and friendliness.
[-][anonymous]12y260

Hello everyone! I've been a lurker on here for a while, but this is my first post. I've held out on posting anything because I've never felt like I knew enough to actually contribute to the conversation. Some things about me:

I'm currently 22, female, and a recent graduate of college with a degree in computer science. I'm currently employed as a software engineer at a health insurance company, though I am looking into getting into research some day. I mainly enjoy science, playing video games, and drawing.

I found this site through a link on the Skeptics Stack Exchange page. The post was about cryonics, which is how I got over here. I've been reading the site for about six months now and I have found it extremely helpful. It has also been depressing, though, because I've since realized many of the "problems" in the world were caused by the ineptitude of the species and aren't easily fixed. I've had some problems with existential nihilism since then and if anyone has any advice on the matter, I'd love to hear it.

My journey to rationality probably started with atheism and a real understanding of the scientific method and human psychology. I grew up Mormon, which has since give...

0fiddlemath12y
You describe "problems with existential nihilism." Are these bouts of disturbed, energy-sucking worry about the sheer uselessness of your actions, each lasting between a few hours and a few days? Moreover, did you have similar bouts of worry about other important seeming questions before getting into LW?
0[anonymous]12y
Yes, that is how I would describe it. It normally comes and goes, with the longest period lasting a few weeks. I'm not entirely sure if it's a byproduct of recent life events or if I am suffering from regular depression, but it's something I've had on and off for a few years. LW hasn't specifically made it worse, but it hasn't made it better either.
0fiddlemath12y
In that case, it sounds very, very similar to what I've learned to deal with -- especially as you describe feeling isolated from the people around you. I started to write a long, long comment, and then realized that I'd probably seen this stuff written down better, somewhere. This matches my experience precisely. For me, the most important realization was that the feeling of nihilism presents itself as a philosophical position, but is never caused or dispelled by philosophy. You can ruminate forever and find no reason to value anything; philosophical nihilism is fully internally consistent. Or, you can get exercise, and spend some time with friends, and feel better due not to philosophy, but to physiology. (I know this is glib, and that getting exercise when you just don't care about anything isn't exactly easy. The link above discusses this.) That above post, and Alicorn's sequence on luminosity -- effective self-awareness -- probably lay out the right steps to take, if you'd like to most-effectively avoid these crappy moods. Moreover, if you'd like to chat more, over skype some time, or via pm, or whatever, I'd be happy to. I'm pretty busy, so there may be high latency, but it sounds like you're dealing with things that are very similar to my own experience, and I've partly learned how to handle this stuff over the past few years.

Hi! Long-time lurker, first-time... joiner?

I was inspired to finally register by this post being at the top of Main. Not sure yet how much I'll actually post, but the passive barrier of, you know, not actually being registered is gone, so we'll see.

Anyway. I'm a dude, live in the Bay Area, work in finance though I secretly think I'm actually a writer. I studied cog sci in college, and that angle is what I tend to find most interesting on Less Wrong.

I originally came across LW via HPMoR back in 2010. Since then, I've read the Sequences, been to a few meetups, and attended the June minicamp (which, P.S., was awesome).

I'm still struggling a bit with actually applying rationality tools in my life, but it's great to have that toolbox ready and waiting. Sometimes... I hear it calling out to me. "Sean! This is an obvious place to apply Bayes! Seaaaaaaan!"

5Nisan12y
Welcome!

Hi all,

I didn't join all that recently, but when I first joined, I read some, then got busy and didn't participate after that.

Age: Not yet 30.
Former Occupation: Catastrophe Risk Modeling.
New Occupation: Graduate Student, Public Policy, RAND Corporation.

Theist Status: Orthodox Jew, happy with the fact that there are those who correctly claim that I cannot prove that god exists, and very aware of the confirmation bias and lack of skepticism in most religious circles. It's one reason I'm here, actually. And I'll be glad to discuss it in the future, elsewhere.

I was initially guided here, about a year ago, by a link to The Best Textbooks on Every Subject. I was a bit busy working at the time, building biased mathematical models of reality. (Don't worry, they weren't MY biases, they were those of the senior people and those of the insurance industry. And they were normalized to historical experience, so as long as history is a good predictor of the future...) So I decided that I wanted to do something different, possibly something with more positive externalities, less short term thinking about how the world could be more profitable for my employer, and more long-term thinking about how it ...

Hello and goodbye.

I'm a 30 year old software engineer with a "traditional rationalist" science background, a lot of prior exposure to Singularitarian ideas like Kurzweil's, with a big network of other scientist friends since I'm a Caltech alum. It would be fair to describe me as a cryocrastinator. I was already an atheist and utilitarian. I found the Sequences through Harry Potter and the Methods of Rationality.

I thought it would be polite, and perhaps helpful to Less Wrong, to explain why I, despite being pretty squarely in the target demographic, have decided to avoid joining the community and would recommend the same to any other of my friends or when I hear it discussed elsewhere on the net.

I read through the entire Sequences and was informed and entertained; I think there are definitely things I took from it that will be valuable ("taboo" this word; the concept of trying to update your probability estimates instead of waiting for absolute proof; etc.)

However, there were serious sexist attitudes that hit me like a bucket of cold water to the face - assertions that understanding anyone of the other gender is like trying to understand an alien, for example.

Com...

Thanks for writing this. It's true that LW has a record of being bad at talking about gender issues; this is a problem that has been recognized and commented on in the past. The standard response seems to have been to avoid gender issues whenever possible, which is unfortunate but maybe better than the alternative. But I would still like to comment on some of the specific things you brought up:

assertions that understanding anyone of the other gender is like trying to understand an alien, for example.

I think I know the post you're referring to, I didn't read this as sexist, and I don't think that indicates a male-techy failure mode on my part about sexism. Some men are just really, really bad at understanding women (and maybe commit the typical mind fallacy when they try to understand men, and maybe just don't know anyone who doesn't fall into one of those categories), and I don't think they should be penalized for being honest about this.

gender essentialist

I haven't seen too much of this. Edit: Found some more.

women-are-objects-not-people-like-us crap

Where? Edit: Found some of this too.

I think it has fallen very squarely into the "nothing more than sexism, th

...
4Eliezer Yudkowsky11y
Try to keep in mind selection effects. The post was titled Failed Utopia - people who agreed with this may have posted less than those who disagreed. I confess to being somewhat surprised by this reaction. Posts and comments about gender probably constitute around 0.1% of all discussion on LessWrong.

Whenever I see a high quality comment made by a deleted account (see for example this thread where the two main participants are both deleted accounts), I'd want to look over their comment history to see if I can figure out what sequence of events alienated them and drove them away from LW, but unfortunately the site doesn't allow that. Here SamLL provided one data point, for which I think we should be thankful, but keep in mind that many more people have left and not left visible evidence of the reason.

Also, aside from the specific reasons for each person leaving, I think there is a more general problem: why do perfectly reasonable people see a need to not just leave LW, but to actively disidentify or disaffiliate with LW, either through an explicit statement (SamLL's "still less am I enthused about identifying myself as part of a community where that's so widespread"), or by deleting their account? Why are we causing them to think of LW in terms of identity in the first place, instead of, say, a place to learn about and discuss some interesting ideas?

Why are we causing them to think of LW in terms of identity in the first place, instead of, say, a place to learn about and discuss some interesting ideas?

Some possibilities:

  1. There have been deliberate efforts at community-building, as evidenced by all the meetup-threads and one whole sequence, which may suggest that one is supposed to identify with the locals. Even relatively innocuous things like introduction and census threads can contribute to this if one chooses to take a less than charitable view of them, since they focus on LW itself instead of any "interesting idea" external to LW.

  2. Labeling and occasionally hostile rhetoric: Google gives dozens of hits for terms like "lesswrongian" and "LWian", and there have been recurring dismissive attitudes regarding The Others and their intelligence and general ability. This includes all snide digs at "Frequentists", casual remarks to the effect of how people who don't follow certain precepts are "insane", etc.

  3. The demographic homogeneity probably doesn't help.

4Wei Dai11y
I agree with these, and I wonder how we can counteract these effects. For example I've often used "LWer" as shorthand for "LW participant". Would it be better to write out the latter in full? Should we more explicitly invite newcomers to think of LW in instrumental/consequentialist terms, and not in terms of identity and affiliation? For example, we could explain that "joining the LW community" ought to be interpreted as "making use of LW facilities and contributing to LW discussions and projects" rather than "adopting 'LW member' as part of one's social identity and endorsing some identifying set of ideas", and maybe link to some articles like Paul Graham's Keep Your Identity Small.
[-][anonymous]11y300

"Here at LW, we like to keep our identity small."

4shminux11y
Nice one.
0[anonymous]11y
I think so. The other thing about the "snide digs" the grandparent is talking about is that they are not just bad for our image, they are also often wrong (as in incorrect). I think the LW "hit rate" on specific enough technical matters is not all that good, to be honest.
0shminux11y
One of the times the issue of overidentifying with LW came up here, about a year ago, I mentioned that my self-description is "LW regular [forum participant]". It means that I post regularly, but does not mean that I derive any sense of identity from it. "LWer" certainly sounds more like "this is my community", so I stay away from using it except toward people who explicitly self-identify as such. I also tend to discount quite a bit of what someone here posts, once I notice them using the pronoun "we" when describing the community, unless I know for sure that they are not caught up in the sense of belonging to a group of cool "rationalists".
5satt11y
I think the "LWer" appellation is just plain accurate (but then I've used the term myself). Any blog with a regular group of posters & commenters constitutes a community, so LW is a community. Posting here regularly makes us members of this community by default, and being coy about that fact would make me feel odd, given that we've strewn evidence of it all over the site. But I suspect I'm coming at this issue from a bit of an odd angle.
8prase11y
It may be because lot of LW regulars visibly think of it in terms of identity. LW is described by most participants as a community rather than a discussion forum, and there has been a lot of explicit effort to strengthen the communitarian aspect.
2Kawoomba11y
As a hypothesis, they may be ambivalent about discontinuing their hobby ("Two souls alas! are dwelling in my breast; (...)") and prefer to burn their bridges to avoid further ambivalence and decision pressures. Many prefer a course of action being locked in, as opposed to continually being tempted by the alternative.
1Eugine_Nier11y
Some people come from a background where they're taught to think of everything in terms of identity.
0Kindly11y
LW is a hub for several abnormal ideas. An implication that you're affiliated with LW is an implication that you take these ideas seriously, which no reasonable person would do.
8Kawoomba11y
Your comment's first sentence answers your second paragraph.
2Risto_Saarelma11y
I guess you get considered fully unclean even if you're only observed breaking a taboo a few times.
4A1987dM11y
Did you use a Rawlsian veil of ignorance when judging it? From a totally selfish point of view, I would very, very, very much rather be myself in this world than myself in that scenario (given that, among plenty of other things, I dislike most people of my gender), but think of, say, starving African children or people with disabilities. I don't know much about what it feels like to be in such dire straits so I'm not confident that I'd rather be a randomly chosen person in Failed Utopia 4-2 than a randomly chosen person in the actual world, but the idea doesn't sound obviously absurd to me.
0Kawoomba11y
Is that ... like ... allowed? edit: I agree with you and object to all the conditioning against contradicting "sacred" values (sexism = ugh, bad).
2A1987dM11y
By whom? (Of course, that's not literally true, since the overwhelming majority of all 3.5 billion male humans alive are people I've never met or heard of and so I have little reason to dislike, but...)
4Kawoomba11y
Since I cannot imagine anything but a few cherry-picked examples that could have led to your impression, let me use some of my own (the number of cases is low): the extremely positive reception of Alicorn's "Living Luminously" sequence (karma +50 for the main post alone) and Anja's great and technical posts (karmas +13, +34, +29) all indicate that good content is not filtered along gender lines, which it should be if there were some pervasive bias. Even asserting that understanding anyone of the other gender is "like trying to understand an alien" does not imply any sort of male superiority complex. If you count as sexism merely pointing out that there are differences, both cultural and genetic -- well, you've got me there. Quite obviously there are; I assume you don't live in a hermaphrodite community. Why is it bad when/if that comes up? Forbidden knowledge? Are you sure that's the rationalist thing to do? Gender imbalance and a few misplaced or easily misinterpreted remarks need not be representative of a community, just as a predominantly male CS program at Caltech and frat jokes need not be representative of college culture.
6jooyous11y
It's possible that user is sensitive to gender issues precisely because it's comparatively difficult and not entirely rationalist to leave a community like Caltech. It's generally the stance of gender-sensitive humans that no one should have to listen to the occasional frat joke if they don't want to. I agree with everything else in your post; that final "can't you take a frat joke?" strikes me as defensive and unnecessary.
2Kawoomba11y
You're right, it was too carelessly formulated.
2jooyous11y
Will you fix it? =) Is there an established protocol for fixing these sorts of things?
2Manfred11y
The edit button? :P
1Kawoomba11y
Is that a protocol, strictly speaking? "Pressing the edit button" would be a protocol with only one action (not sufficient). Maybe there will be a policy post on this soon.
2Manfred11y
You're right, strictly speaking, the protocol would be TCP/IP. :) (There is no mandatory or even authoritative social protocol for this situation. The typical behavior is editing and then putting an EDIT: brief explanation of edit, but just editing with no explanation is also fine, particularly if nobody's replied yet, or the edit is explained in child comments).
2Kawoomba11y
Well earlier today I clarified (euphemism for edited) a comment shortly after it was made, then found a reply that cited the old, unclarified version. You know what that looks like, once the tribe finds out? OhgodImdone. In a hushed voice I just found out that EY can edit his comments without an asterisk appearing.
2earthwormchuck16311y
Why not stay around and try to help fix the problem?

Ordinarily I'd leave this for SamLL to respond to, but I'd say the chances of getting a response in this context are fairly low, so hopefully it won't be too presumptuous for me to speculate.

First of all, we as a community suck at handling gender issues without bias. The reasons for this could span several top-level posts and in any case I'm not sure of all the details; but I think a big one is the unusually blurry lines between research and activism in that field and consequent lack of a good outside view to fall back on. I don't think we're methodologically incapable of overcoming that, but I do think that any serious attempt at doing so would essentially convert this site into a gender blog.

To make matters worse, for one inclined to view issues through the lens of gender politics, Failed Utopia 4-2 is close to the worst starting point this site has to offer. Never mind the explicitly negative framing, or its place within the fun theory sequence: we have here a story that literally places men on Mars on gender-essentialist grounds, and doesn't even mention nonstandard genders or sexual preferences. No, that's not meant to be taken all that seriously or to inform people's real...

2A1987dM11y
As far as I can tell, we as a species suck at handling gender issues without bias, the closest thing to an exception to that I recall seeing being some (not all) articles (but usually not the comments) on the Good Men Project and the discussions on Yvain's “The $Nth Meditation on $Thing” blog post series.
3Nornagest11y
Yeah, I was fairly impressed with Yvain's posts on the subject; if we did want to devote some serious effort to tackling this issue, I can think of far worse starting points.
2shminux11y
s/gender// Though I think that this particular forum sucks less at handling at least some issues.
8wedrifid11y
Fixing the problem needs fewer people with a highly polarizing agenda, not more.

Hello! I'm David.

I'm 26 (at the time of writing), male, and an IT professional. I have three (soon to be four) children, three (but not four) of whom have a different dad.

My immediate links here were through the Singularity Institute and Harry Potter and the Methods of Rationality, which drove me here when I realized the connection (I came to those things entirely separately!). When I came across this site, I had read through the Wikipedia list of biases several times over the course of years, come to many conscious conclusions about the fragility of my own cognition, and had innumerable arguments with friends and family that changed minds, but I never really considered that there would be a large community of people that got together on those grounds.

I'm going to do the short version of my origin story here, since writing it all out seems both daunting and pretentious. I was raised rich and lucky by an entrepreneur/university professor/doctor father and a mother who always had to be learning something or go crazy (she did some of both). I dropped out of a physics major in college and got my degree in gunsmithing instead, but only after I worked a few years. Along the way, I've p...

Hi LWers,

I am Robert and I am going to change the world. Maybe just a little bit, but that’s ok, since it’s fun to do and there’s nothing else I need to do right now. (Yay for mini-retirements!)

I find some of the articles here on LW very useful, especially those on heuristics and biases, as well as the material on self-improvement, although I find it quite scattered among loads of way-too-theoretical stuff. Does it seem odd that I have learned more useful tricks and gained more insight from reading HPMOR than from reading 30 to 50 high-rated and "foundational" articles on this site? I am sincerely sad that even the leading rationalists on LW seem to struggle getting actual benefits out of their special skills and special knowledge (Yvain: Rationality is not that great; Eliezer: Why aren't "rationalists" surrounded by a visible aura of formidability?) and I would like to help them change that.

My interest is mainly in contributing more structured, useful content, and in banding together with fellow LWers to practice and apply our rationalist skills. As a stretch goal I think that we could pick someone really evil as our enemy and take them down, just to show our superiority....

5John_Maxwell11y
Welcome! Because they don't project high status with their body language? Re: Taking out someone evil. Let's be rational about this. Do we want to get press? Will taking them out even be worthwhile? What sort of benefits from testing ideas against reality can we expect? I think humans who study rationality might be better than other humans at avoiding certain basic mistakes. But that doesn't mean that the study of rationality (as it currently exists) amounts to a "success spray" that you can spray on any goal to make it more achievable. Also, if the recent survey is to be believed, the average IQ at Less Wrong is very high. So if LW does accomplish something, it could very well be due to being smart rather than having read a bunch about rationality. (Sometimes I wonder if I like LW mainly because it seems to have so many smart people.)
0Peterdjones11y
Some lessWrongians believe it is.
0John_Maxwell11y
That comment doesn't rule out selection effects, e.g. the IQ thing I mentioned.
0Peterdjones11y
IQ without study will not make you a super philosopher or super anything else.
-1MugaSofer11y
Don't be too pessimistic to the newcomer, John. We're not completely useless. It doesn't grant any new abilities as such, admittedly, but if you're interested in making the right decision, then rationality is quite useful; to the extent that choosing correctly can help you, this is the place to be. Of course, how much the right choices can help you varies a bit, but it's hard to know how much you could achieve if you're biased, isn't it?
0John_Maxwell11y
Hm. My correction on that would be: To the extent that your native decisionmaking mechanisms are broken and can be fixed by reading blog posts on Less Wrong, then this is the place to be. In other words, how useful the study of rationality is depends on how important and easily beaten the bugs Less Wrong tries to fix in human brains are. Many people are interested in techniques for becoming more successful and getting more out of life. Techniques range from reading The Secret to doing mindfulness meditation to reading Less Wrong. I don't see any a priori reason to believe that the ROI from reading Less Wrong is substantially higher than other methods. (Though, come to think of it, self-improvement guru Sebastian Marshall gives LW a rave review. So in practice LW might work pretty well, but I don't think that is the sort of thing you can derive from first principles, it's really something that you determine through empirical investigation.)
3OrphanWilde11y
I'm evil by some people's standards. You'll have to get a little bit more specific about what you think constitutes evil. From what I've seen, real evil tends to be petty. Most grand atrocities are committed by people who are simply incorrect about what the right thing to do is.
2shminux11y
You may follow HJPEV in calling world domination "world optimization", but running on some highly unreliable wetware means that grand projects tend to become evil despite best intentions, due to snowballing unforeseen ramifications. In other words, your approach seems to be lacking wisdom.
2Jonathan Paulson11y
You seem to be making a fully general argument against action.
1shminux11y
Against any sweeping action without carefully considering and trying out incremental steps.
0RobertChange11y
Thanks to all for the warm welcome and the many curious questions about my ambition! And special thanks to MugaSofer, Peterdjones, and jpaulsen for your argumentative support. I am very busy writing right now, and I hope that my posts will answer most of the initial questions. So I’ll rather use the space here to write a little more about myself. I grew up a true Ravenclaw, but after grad school I discovered that Hufflepuff’s modesty and cheering industry also have their benefits when it comes to my own happiness. HPMOR made me discover my inner Slytherin because I realized that Ravenclaw knowledge and Hufflepuff goodness do not suffice to bring about great achievements. The word “ambition” in the first line of the comment is therefore meant in professor Quirrell’s sense. I also have a deep respect for the principles of Gryffindor’s group (of which the names of A. Swartz and J. Assange have recently caught much mainstream attention), but I can’t find anything of that spirit in myself. If I have ever appeared to be a hero, it was because I accidentally knew something that was of help to someone. @shminux: I love incremental steps and try to incorporate them into any of my planning and acting! My mini-retirement is actually such a step that, if successful, I’d like to repeat and expand. @John_Maxwell_IV: Yay for empirical testing of rationality! @OrphanWilde: “Don't be frightened, don't be sad, We'll only hurt you if you're bad.“ Or to put it into more utilitarian terms: If you are in the way of my ambition, for instance if I would have to hurt your feelings to accomplish any of my goals for the greater good, I would not hesitate to do what has to be done. All I want is to help people to be happy and to achieve their goals, whatever they are. And you’ll probably all understand that I might give a slight preference to helping people whose goals align with mine. ;-) May you all be happy and healthy, may you be free from stress and anxiety, and may you achieve your
0Kawoomba11y
Anything more specific you have in mind?

Greetings. I am Error.

I think I originally found the place through a comment link on ESR's blog. I'm a geek, a gamer, a sysadmin, and a hobbyist programmer. I hesitate to identify with the label "rationalist"; much like the traditional meaning of "hacker", it feels like something someone else should say of me, rather than something I should prematurely claim for myself.

I've been working through the Sequences for about a year, off and on. I'm now most of the way through Metaethics. It's been a slow but rewarding journey, and I think the best thing I've taken out of it is the ability to identify bogus thoughts as they happen. (Identifying them is not always the same as correcting them, unfortunately.) Another benefit, not specifically from the Sequences but from link-chasing, is the realization that successful mental self-engineering is possible; I think the tipping point for me there was Alicorn's post about polyhacking. The realization inspired me to try to beat the tar out of my akrasia, and I've done fairly well so far.

My current interests center around "updating efficiently." I just turned 30; I burnt my 20s establishing a living instead of learning all...

2NancyLebovitz12y
Welcome! It's acceptable and welcome to comment in the Sequences. The Recent Comments feature (link on the right sidebar, with distinct Recent Comments pages for the Main section and for the Discussion section) means that there's a chance that new comments on old threads will get noticed.
2shokwave12y
Welcome! Commenting on the Sequences isn't against any rules. You stand a chance of getting responses from those who watch the Recent Comments. However, in Discussion you'll see [SEQ RERUN] posts (which bring up old posts from the Sequences in chronological order) that encourage comments on the rerun, not the original. If you happen to be reading a post that's been recently re-run, you might get a better response in the rerun thread.

Hey everyone,

As I continue to work through the sequences, I've decided to go ahead and join the forums here. A lot of the rationality material isn't conceptually new to me, although much of the language very much is, and thus far I've found it to be exceptionally helpful to my thinking.

I'm a 24-year-old video game developer, having worked on graphics for a particular big-name franchise for a couple of years now. It's quite the interesting job, and is definitely one of the realms in which I find the heady, abstract rationality tools extremely helpful. Rationality is what it is, and that seems to be acknowledged here, a fact I'm quite grateful for.

When I'm not discussing the down-to-earth topics here, people may find I have a sometimes anxiety-ridden attachment to certain religious ideas. Religious discussion has been extremely normal for me throughout my life, so while the discussion doesn't make me uncomfortable, my inability to come to answers that I'm happy with does, and has caused me a bit of turmoil outside of discussion. Obviously there is much to say about this, and much people may like to say to me, but I'd like to first get through all the sequences, get all of my quest...

1Vaniver11y
Welcome! Glad to see you here. :D

I've been commenting for a few months now, but never introduced myself in the prior Welcome threads. Here goes: Student, electrical engineering / physics (might switch to math this fall), female, DC area.

I encountered LW when I was first linked to Methods a couple years ago, but found the Sequences annoying and unilluminating (after having taken basic psych and stats courses). After meeting a couple of LWers in real life, including my now-boyfriend Roger (LessWrong is almost certainly a significant part of the reason we are dating, incidentally), I was motivated to go back and take a look, and found some things I'd missed: mostly, reductionism and the implications of having an Occam prior. This was surprising to me; after being brought up as an anti-religious nut, then becoming a meta-contrarian in order to rebel against my parents, I thought I had it all figured out, and was surprised to discover that I still had attachments to mysticism and agnosticism that didn't really make any sense.

My biggest instrumental rationality challenge these days seems to be figuring out what I really want out of life. Also, dealing with an out-of-control status obsession.

To cover some typical LW clus...

1TheOtherDave12y
I'm not quite sure what you're referring to by "the prominent belief patterns," but neither low confidence that signing up for cryonics results in life extension, nor low confidence that AI research increases existential risk, is especially uncommon here. That said, high confidence in those things is far more common here than elsewhere.
1maia12y
That is more or less what I am trying to say. It's just that I've noticed several people on Welcome threads saying things like, "Unlike many LessWrongers, I don't think cryonics is a good idea / am not concerned about AI risk."

Hi, I'm Edward. I've been reading the occasional article on here for a while, and I've finally decided to officially join, as this year I'm starting to do more work on my knowledge and education (especially maths and science) and I like the thoughtful community I see here. I'm a programmer, but also have a passion for history. Just as I was finishing university, my thinking led me to abandon the family religion (many of my friends are still theists). I was going to keep thinking and exploring ideas, but I ended up just living; now I want to begin thinking again.

Regards, Edward

I'm Abd ul-Rahman Lomax, introducing myself. I have six grandchildren from five biological children, and I have two adopted girls, aged 11 (from China) and 9 (from Ethiopia).

I was born in 1944; Abd ul-Rahman is not my birth name, as I accepted Islam in 1970. Not being willing to accept pale substitutes, I learned to read the Qur'an in Arabic by reading the Qur'an in Arabic.

Going back to my teenage years: I was at Cal Tech for a couple of years, sitting in Richard P. Feynman's two years of undergraduate physics classes, the ones made into the textbook. I had Linus Pauling for freshman chemistry as well. Both of them helped create how I think.

I left Cal Tech to pursue a realm other than "science," but was always interested in direct experience rather than becoming stuffed with tradition, though I later came to respect tradition (and memorization) far more than at the outset. I became a leader of a "spiritual community," and a successor to a well-known teacher, Samuel L. Lewis, but was led to pursue many other interests.

I delivered babies (starting with my own) and founded a school of midwifery that trained midwives for licensing in Arizona.

Self-taught, I started an electronics d...

9Nisan11y
Welcome! That's a fascinating biography. I have been to one introductory Landmark seminar and wrote about the experience here.

Hello. I was brought here by HPMOR, which I finished reading today. Back in 1999 or something I found a site called sysopmind.com, which had interesting reads on AI, Bayes' theorem (which I didn't understand), and the 12 virtues of rationality. I loved it for the beauty that reminded me of Asimov. I kept it in my bookmarks forever. (I knew him before he was famous? ;-))

I like SF (I have read many SF books, but most were from before 1990 for some reason) and I'm a computer nerd, among other things. I want to learn everything, but I have a hard time putting in the work. I'm studying to become a psychologist, scheduled to finish in 2013. My favorite area of psychology is social psychology, especially how humans make decisions and how humans are influenced by biases, norms, or high-status people. I'm married and have a daughter born in 2011.

I like to watch TV shows, but I have high standards. It is SF if it is based in science and rationality; otherwise it's just space drama/space action, and I have no patience for it. I also like psychological drama, but it has to be realistic and believable. Please give recommendations if you like. (Edited:) Also, someone could explain in what way Star Trek, Babylon 5, or Battlestar Galactica is really SF, or Buffy is feminist, so I know whether they are worth my while.

1CCC11y
Of those, the only one I've seen is Star Trek. They can be a bit handwavey about the science sometimes; I liked it, but if you're looking for hard science then you might not. As far as recommendations go, may I recommend the Chanur series (books, not TV) by one C.J. Cherryh?
1Alejandro111y
For realistic psychological drama, I haven't seen any show that beats Mad Men.
0shminux11y
Not without knowing you well enough. Sherlock, on the other hand, should suit you just fine.
1kaneleh11y
Ah, yes, thank you. I have seen Sherlock and loved it. Too few episodes though! =)

I highly doubt that I'll be posting articles or even joining discussions anytime soon, since right now, I'm just getting started on reading the sequences and exploring other parts of the site, and don't feel prepared yet to get involved in discussions. However, I'll probably comment on things now and then, so because of that (and, honestly, just because I'm a very social person), I figured I might as well post an introduction here.

I appreciate the way that discussions are described as ending on here, because I've noticed in other debates that "tapping out" is seen as running away. The main trait that gives me problems in my quest for rationality is that I'm inherently a competitive person and get more caught up in the idea of "winning" than in improving my thinking. I'm working on this, but if I do get involved in discussions, the fact that they aren't seen as much as competitions here compared to other places should be helpful to me.

Anyway, I guess I'll introduce myself. I'm Alexandra, and I'm a seventeen-year-old high-school student in the United States (I applied to the camp in August, but I never received any news about it, so I assume that I wasn't acc...

2Bugmaster12y
I'm not affiliated with SIAI or the summer camps in any way, but IMO this sounds like a breakdown somewhere in the organization's communication protocols. If I were you, I wouldn't just assume that I wasn't accepted, I would ask for an explanation.
1candyfromastranger12y
I'll contact them, then. I wasn't expecting to be accepted, but on the off chance that I was, it's hopefully not too late.
1hannahelisabeth11y
I like your description of yourself. You remind me a bit of myself, actually. I think I'd enjoy conversing with you, though I have nothing on my mind at the moment that I feel like discussing. Hm, I kind of feel like my comment ought to have a bit more content than "you seem interesting," but that's really all I've got.

Hello Less Wrong! (I posted this in the other July 2012 welcome thread as well. :P Though apparently it has too many comments at this point, or something to that effect.)

My name is Ryan, and I am a 22-year-old technical artist in the video game industry. I recently graduated with honors from the Visual Effects program at Savannah College of Art and Design. For those who don't know much about the industry I am in: my skill set is somewhere between those of a software programmer, a 3D artist, and a video editor. I write code to create tools that speed up workflows for the 3D things I or others need to do to make a game or cinematic.

I found lesswrong.com through the Harry Potter and the Methods of Rationality podcast. Up until that point I had never heard of Rationalism as a current state of being... so far I greatly resonate with the goals and lessons that have come up in the podcast, and with what I have seen about rationalism. I am excited to learn more.

I wouldn't go so far as to claim the label for myself as of yet, as I don't know enough and I don't particularly like labels for the most part. I also know that I have several biases; I feel like I know the reasons and causes for most, but I have not...

3Grognor12y
I disagree with this claim. If you are capable of understanding concepts like the Generalized Anti-Zombie Principle, you are more than capable of recognizing that there is no god, and that that hypothesis wouldn't even be noticeable to a bounded intelligence unless a bunch of other people had already privileged it thanks to anthropomorphism. Also, please don't call what we do here "rationalism". Call it "rationality".
2Emile12y
Welcome to LessWrong! There are a few of us here in the Game Industry, and a few more that like making games in their free time. I also played around with Houdini, though never produced anything worth showing.
0Gaviteros12y
Thanks for the welcome! Houdini can be a lot of fun, but without a real goal it is almost too open for anything of value to be easily made. Messing around in Houdini is a time sink without a plan. :) That said, I absolutely love it as a program.

Hello,

My name is Trent Fowler. I started leaning toward scientific and rational thinking while still a child, thanks in part to a variety of aphorisms my father was fond of saying. Things like "think for yourself" and "question your own beliefs" are too general to be very useful in particular circumstances, but were instrumental in fostering in me a skepticism and respect for good argument that has persisted all my life (I'm 23 as of this writing). These tools are what allowed me to abandon the religion I was brought up in as a child, and to eventually begin salvaging the bits of it that are worth salvaging. Like many atheists, when I first dropped religion I dropped every last thing associated with it. I've since grown to appreciate practices like meditation, ritual, and even outright mysticism as techniques which are valuable and pursuable in a secular context.

What I've just described is basically the rationality equivalent of lifting weights twice a week and going for a brisk walk in the mornings. It's great for a beginner, but anyone who sticks with it long enough will start to get a glimpse of what's achievable by systematizing training and ramping...

I am Yan Zhang, a mathematics grad student specializing in combinatorics at MIT (and soon to work at UC Berkeley after graduation) and co-founder of Vivana.com. I was involved with building the first year of SPARC. There, I met many cool people at CFAR, for which I'm now a curriculum consultant.

I don't know much about LW but have liked some of the things I have read here; AnnaSalamon described me as a "street rationalist" because my own rationality principles are home-grown from a mix of other communities and hobbies. In that sense, I'm happy to step foot into this "mainstream dojo" and learn your language.

Recently Anna suggested I may want to cross-post something I wrote to LW and I've always wanted to get to know the community better, so this is the first step, I suppose. I look forward to learning from all of you.

2Qiaochu_Yuan11y
Welcome! It's good to see you here.
0krzhang11y
Haha hey QC. Remind me sometime to learn the "get ridiculously high points in karma-based communities and learn a lot" metaskill from you... you seem to be off to a good start here too ;)
3Qiaochu_Yuan11y
Step 1 is to spend too much time posting comments. I'm not sure I recommend this to someone whose time is valuable. I would like to see you share your "street rationalist" skills here, though!

Hi,

My name is Hannah. I'm an American living in Oslo, Norway (my husband is Norwegian). I am 24 (soon to be 25) years old. I am currently unemployed, but I have a bachelor's degree in Psychology from Truman State University. My intention is to find a job working at a day care, at least until I have children of my own. When that happens, I intend to be a stay-at-home mother and homeschool my children. Anything beyond that is too far into the future to be worth trying to figure out at this point in my life.

I was referred to LessWrong by some German guy on OkCupid. I don't know his name or who he is or anything about him, really, and I don't know why he messaged me randomly. I suppose something in my profile seemed to indicate that I might like it here or might already be familiar with it, and that sparked his interest. I really can't say. I just got a message asking if I was familiar with LessWrong or Harry Potter and the Methods of Rationality (which I was not), and if so, what I thought of them. So I decided to check them out. I thought the HP fanfiction was excellent, and I've been reading through some of the major series here for the past week or so. At one point I had a comment...

2Morendil11y
Welcome here!
[anonymous]12y150

Hello LW,

Last Thursday, User:rocurley asked me whether, in his absence, I wanted to organize a hiking event (originally my idea) for this week's DC metro area meetup; in the process I discovered I could not make posts, etc., here because I had zero karma. I chose to cancel the meetup on account of weather. I had registered my account previously, but realizing that I might have need to post here in the future, and that I had next to nothing to lose, I have finally decided to introduce myself.

I discovered LW through HPMOR, through Tvtropes, I believe. I've read some LW articles, but not others. Areas of interest include sciences (I have a BS in physics), psychology, personality disorders, some areas of philosophy, reading, and generally learning new things. One of my favorite books (if not /the/ favorite) is Gödel, Escher, Bach, which I read for the first (and certainly not last) time while I was in college, 5+ years ago.

I'm extremely introverted, and I am aware that I have certain anxiety issues; while rationality has not helped with the actual feeling of anxiety, it has allowed me to push through it, in some cases.

2Vaniver12y
Welcome! Specific! :P Which is the most interesting one you've read so far? We might have recommendations of similar ones that you would like. So, I found my introversion much easier to manage when I started scheduling time by myself to recharge, and scheduling infrequent social events to make sure I didn't get into too much of a cave. It had been easy to get overwhelmed with social events near each other if I didn't have something on my calendar reminding me "you'll want to read a book by yourself for a few hours before you go to another event." That sort of thing might be helpful to consider.
2[anonymous]12y
Some of my favorite articles, off the top of my head (and a bit of browsing):

* A Fable of Science and Politics
* Explain, Worship, Ignore - I am, as of now, something of a naturalistic pantheist / pandeist; if you've heard Carl Sagan or Neil deGrasse Tyson speak on the wonder that is the existence of the universe, it's something like that. Unlike what is written in the linked article, however, I'm not convinced that the initial singularity, or whatever cause the Big Bang might have, can be explained by science. (Is it even meaningful to ask questions about what is outside the universe?)
* Belief in Belief
* Avoiding Your Belief's Real Weak Points
* The 'Outside the Box' Box - How much of my belief system is actually a result of my own thinking, as opposed to a result of culture, society, etc.? Granted, sometimes collective wisdom is better than what one might come up with by oneself, but not always...

I use Meetup.com to organize and schedule social events, and of course there are the LW meetups. I get plenty of alone time, so that isn't really a problem for me.

(Some minutes of thinking later) The particular issues aren't something I can accurately put into words, but they're something like 'active avoidance of (perceived) excessive attention or expectations, either positive or negative' and 'fear of exposing "personal" info I'd rather not share, and of any negative consequences that might result'. Perhaps not surprisingly, I greatly prefer internet or written "non-personal" communication over verbal communication.

I got into a community of intelligent, creative free-thinkers by reading fan fiction of all things.

You know the one.

Anyway, my knowledge of what is collectively referred to as Rationality is slim. I read the first 6 pages of The Sequences, felt like I was cheating on a test, and stopped. I'll try to make up for it with some of the most unnecessarily theatrical and hammy writing I can get away with.

I love word play, and over the course of a year I offered (as a way of apology) to owe my friend a quarter for every time I improvised a pun or awful joke mid-conversation; by the end of it I could have bought him a dinner at Pizza Delight. (I didn't.) It's on my to-do list to compile all the wises that Carlos Ramon ever cracked on The Magic School Bus and put them on YouTube, because no one else has and it needs to be done, damn it. As you can tell, I sometimes write for its own sake; I'm sort of a literary hedonist, if you will. But all good things must come to an end...

My greatest principle is that a person's course in life is governed by their reaction to their circumstances, and that nothing at all is certain. The nature of the human mind is a process which our current metaphors...

0Rukifellth12y
Also, I enjoy playing Superman 64's ring levels.

Hello. I am from Istanbul, Turkey (a Turkish citizen, born and raised). I came across LessWrong on a popular Turkish website called EkşiSözlük. Since then, this has been the place I check to see what's new when there's nothing worth reading on Google Reader and I have time. (Such long posts you have!)

I am 31 years old, and I have a BSc in Computer Science and an MSc in Computational Sciences (research on bioinformatics). But then, like most of the people in my country do, I've landed in a job where I can't utilize any of that education: information security. :)

Why did I complain about my job? Here is why:

I've long been looking for "the best way to have lived a life". What I mean by this is that, at the moment of death, I have to be able to say, "I lived my life the best way I could, and I can die blissfully." This may come off a bit cliché, but bear in mind that I'm relatively new to this rationality thing.

While I was learning Computer Science for the first time, I saw there was great opportunity in relating computational sciences to social sciences so as to understand inner workings of human beings. This I realised when the Law&Ethics instructor asked us to write an essay o...

5NancyLebovitz11y
However, you can estimate how long you will live with fairly good accuracy. If you know you're very likely to live for some decades more, then I think it makes sense to optimize around the estimate rather than for the very small possibility that you'll be dead in the next hour.
3NotInventedHere11y
This is an extremely belated reply, but with regard to your existential worries: the Fun Theory and Metaethics sequences helped me through my personal period of existential angst. The two posts I would most immediately recommend for someone like you are Joy in the Merely Real and Joy in the Merely Good.
[anonymous]11y140

Hello. I've read sequence articles and discussion on this website for a while now. I've been hesitant to join before because I like to keep my identity small, but I recently realized that being able to talk to others about the topics on this site will make me more effective at reaching my goals.

Armchairs are very comfortable, and I'm having some mental difficulty putting the effort into the practice of achieving set goals. It's very hard to actually do stuff, and easy to just read about interesting topics without engaging.

I'm interested more in meta-ethics than in physics, more in decision theory than in practical AI. My first comments will likely be in the Sequences or in Discussion comments of a few specific kinds.

This should be fun, I look forward to talking with you. Ask me any questions that arouse your curiosity.

The browsing experience with Kibitzing off is strange but not unpleasant. How long did it take for you to get accustomed to it?

Hi, I'm Liz.

I'm a senior at a college in the US, soon to graduate with a double major in physics and economics, and then (hopefully) pursue a PhD in economics. I like computer science and math too. I'm hoping to do research in economic development, but more relevantly to LW, I'm pretty interested in behavioral economics and in econometrics (statistics). Out of the uncommon beliefs I hold, the one that most affects my life is that since I can greatly help others at a small cost to myself, I should; I donate whatever extra money I have to charity, although it's not much. (see givingwhatwecan.org)

I think I started behaving as a rationalist (without that word) when I became an atheist near the end of high school. But to rewind...

I was raised Christian, but Christianity was always more of a miserable duty than a comfort to me. I disliked the music and the long services and the awkward social interactions. I became an atheist for no good reason in the beginning of high school, but being an atheist was terrible. There was no one to forgive me when I screwed up, or pray to when the world was unbearably awful. My lack of faith made my father sad. Then, lying in bed and angsting about free...

3John_Maxwell11y
Welcome to LW. Also not an expert on Newcomb's problem, but I'm a one-boxer because I choose to have part of my brain say that I'm a one-boxer, and to have that part of my brain influence my behavior if I get into a Newcomb-like situation. Does that make any sense? Basically, I'm choosing to modify my decision algorithm so that I no longer maximize expected value, because I think having this other algorithm will get me better results.
0Desrtopa11y
To be properly isomorphic to Newcomb's problem, the chance of the predictor being wrong should approximate zero. If I thought that the chance of my friend's mother being wrong approximated zero, I would of course choose to one-box. If I expected her to be an imperfect predictor who assumed I would behave as if I were in the real Newcomb's problem with a perfect predictor, then I would choose to two-box. In Newcomb's problem, if you choose on the basis of which choice is consistent with a higher expected return, then you would choose to one-box. You know that your choice doesn't cause the box to be filled, but given the knowledge that whether the money is in the box or not is contingent on a perfect predictor's assessment of whether you were likely to one-box, you should assign different probabilities to the box containing the money depending on whether you one-box or two-box. Since your own mental disposition is evidence of whether the money is in the box or not, you can behave as if the contents were determined by your choice.
0findis11y
Hm, I think I still don't understand the one-box perspective, then. Are you saying that if the predictor is wrong with probability p, you would take two boxes for high p and one box for a sufficiently small p (or just for p = 0)? What changes as p shrinks? Or what if Omega/Ann's mom is a perfect predictor, but a random 1% of the time decides to fill the boxes as if it had made the opposite prediction, just to mess with you? If you one-box for p = 0, you should believe that taking one box is correct (and generates $1 million more) in 99% of cases and that taking two boxes is correct (and generates $1000 more) in 1% of cases. So taking one box should still have a far higher expected value. But the perfect predictor who sometimes pretends to be wrong behaves exactly the same as an imperfect predictor who is wrong 1% of the time.
0Desrtopa11y
You choose the boxes according to the expected value of each choice. For a 99% accurate predictor, the expected value of one-boxing is $990,000,000 (you get a billion 99% of the time, and nothing 1% of the time), while the expected value of two-boxing is $10,001,000 (you get a thousand 99% of the time, and one billion and one thousand 1% of the time). The difference between this scenario and the one you posited before, where Ann's mom makes her prediction by reading your philosophy essays, is that she's presumably predicting on the basis of how she would expect you to choose if you were playing Omega. If you're playing against an agent who you know will fill the boxes according to how you would choose if you were playing Omega (we'll call it Omega-1), then you should always two-box (if you would one-box against Omega, both boxes will contain money, so you get the contents of both; if you would two-box against Omega, only one box would contain money, and if you one-boxed you'd get the empty one). An imperfect predictor with random error is a different proposition from an imperfect predictor with nonrandom error. Of course, if I were dealing with this dilemma in real life, my choice would be heavily influenced by considerations such as how likely it is that Ann's mom really has billions of dollars to give away.
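A minimal Python sketch of this arithmetic, keeping Desrtopa's payoffs ($1 billion and $1,000) and assuming the predictor's error is symmetric:

```python
# Expected value of one-boxing vs. two-boxing against a predictor
# that is correct with probability p (symmetric error assumed).

BIG, SMALL = 1_000_000_000, 1_000

def ev_one_box(p):
    # Predictor right (probability p): opaque box is full, you take it.
    # Predictor wrong (probability 1 - p): opaque box is empty.
    return p * BIG

def ev_two_box(p):
    # Predictor right (probability p): opaque box empty, you get SMALL.
    # Predictor wrong (probability 1 - p): opaque box full, you get both.
    return p * SMALL + (1 - p) * (BIG + SMALL)

print(ev_one_box(0.99))  # 990000000.0, Desrtopa's $990,000,000
print(ev_two_box(0.99))  # 10001000.0,  Desrtopa's $10,001,000

# One-boxing has the higher expected value whenever
# p > (BIG + SMALL) / (2 * BIG), roughly p > 0.5000005; so the answer
# to "what changes as p shrinks?" is: nothing, until p falls almost
# all the way down to a coin flip.
```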
0findis11y
Ok, but what if Ann's mom is right 99% of the time about how you would choose when playing her? I agree that one-boxers make more money, with the numbers you used, but I don't think that those are the appropriate expected values to consider. Conditional on the fact that the boxes have already been filled, two-boxing has a $1000 higher expected value. If I know only one box is filled, I should take both. If I know both boxes are filled, I should take both. If I know I'm in one of those situations but not sure of which it is, I should still take both. Another analogous situation would be that you walk into an exam, and the professor (who is a perfect or near-perfect predictor) announces that he has written down a list of people whom he has predicted will get fewer than half the questions right. If you are on that list, he will add 100 points to your score at the end. The people who get fewer than half of the questions right get higher scores, but you should still try to get questions right on the test... right? If not, does the answer change if the professor posts the list on the board? I still think I'm missing something, since a lot of people have thought carefully about this and come to a different conclusion from me, but I'm still not sure what it is. :/
1ArisKatsaris11y
You are focusing too much on the "already have been filled", as if the particular time of your particular decision were relevant. But if your decision isn't random (and yours isn't), then any individual decision is dependent on the decision algorithm you follow, and can be calculated in exactly the same manner regardless of time. Therefore, in a sense, your decision has been made BEFORE the filling of the boxes, and can affect their contents. You may find it easier to wrap your head around this if you think of the boxes being filled according to what result the decision theory you currently have would return in the situation, instead of according to what decision you'll make in the future. That helps keep in mind that causality still travels in only one direction, but that a good predictor simply knows the decision you'll make before you make it and can act accordingly.
-2Desrtopa11y
I would one-box. I gave the relevant numbers on this in my previous comment; one-boxing has an expected value of $990,000,000 against the expected $10,001,000 if you two-box. When you're dealing with a problem involving an effective predictor of your own mental processes (it's not necessary for such a predictor to be perfect for this reasoning to become salient, it just makes the problems simpler), your expectation of what the predictor will do or will already have done will be at least partly dependent on what you intend to do yourself. You know that either the opaque box is filled or it is not, but the probability you assign to the box being filled depends on whether you intend to open it or not. Let's try a somewhat different scenario. Suppose I have a time machine that allows me to travel back a day in the past. Doing so creates a stable time loop, like the Time-Turners in Harry Potter or HPMOR (on a side note, our current models of relativity suggest that such loops are possible, if very difficult to contrive). You're angry at me because I've insulted your hypothetical scenario, and are considering hitting me in retaliation. But you happen to know that I retaliate against people who hit me by going back in time and stealing from them, which I always get away with due to having perfect alibis (the police don't believe in my time machine). You do not know whether I've stolen from you or not, but if I have, it's already happened. You would feel satisfied by hitting me, but it's not worth being stolen from. Do you choose to hit me or not? If the professor is a perfect predictor, then I would deliberately get most of the problems wrong, thereby all but guaranteeing a score of over 100 points. I would have to be very confident that I would score below fifty even if I weren't trying to on purpose before trying to get all the questions right would give me a higher expected score than trying to get most of the questions wrong. If the professor posts the list on the board...
1wedrifid11y
I believe you are making a mistake. Specifically, you are implementing a decision algorithm that ensures that "you lose" is a correct self-fulfilling prophecy (in fact, you ensure that it is the only valid prediction he could make). I would throw the test (score in the 40s) even when my name is not on the list. Do you also two-box on Transparent Newcomb's?
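A similar sketch for the professor example, assuming the professor's prediction is essentially perfect (so the predicted raw score equals the actual one) and using hypothetical scores out of 100:

```python
# findis's professor variant: anyone the professor predicts will
# score under 50% gets 100 bonus points added at the end. With a
# (near-)perfect predictor, the prediction equals your raw score.

def final_score(raw):
    bonus = 100 if raw < 50 else 0  # the predictor foresees your raw score
    return raw + bonus

print(final_score(90))  # trying your best:  90
print(final_score(45))  # throwing the test: 145, which is higher
```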
0Desrtopa11y
If I were in a position to predict that this were the sort of thing the professor might do, then I would precommit to throwing the test should he implement such a procedure. But you could just as easily end up with the perfect predictor professor saying that in the scoring for this test, he will automatically fail anyone he predicts would throw the test in the previously described scenario. I don't think there's any point in time where making such a precommitment would have positive expected value. By the time I know it would have been useful, it's already too late. Edit: I think I was mistaken about what problem you were referring to. If I'm understanding the question correctly, yes I would, because until the scenario actually occurs I have no reason to suspect any precommitment I make is likely to bring about more favorable results. For any precommitment I could make, the scenario could always be inverted to punish that precommitment, so I'd just do what has the highest expected utility at the time at which I'm presented with the scenario. It would be different if my probability distribution on what precommitments would be useful weren't totally flat.
4Desrtopa11y
As an aside, I'll note that a lot of the solutions bandied around here to decision theory problems remind me of something from Magic: The Gathering which I took notice of back when I still followed it. When I watched my friends play, one would frequently respond to another's play with "Before you do that, I-" and use some card or ability to counter their opponent's move. The rules of MTG let you do that sort of thing, but I always thought it was pretty silly, because they did not, in fact, have any idea that it would make sense to make that play until after seeing their opponent's move. Once they see their opponent's play, they get to retroactively decide what to do "before" their opponent can do it. In real life, we don't have that sort of privilege. If you're in a Counterfactual Mugging scenario, for instance, you might be inclined to say "I ought to be the sort of person who would pay Omega, because if the coin had come up the other way, I would be making a lot of money now, so being that sort of person would have positive expected utility for this scenario." But this is "Before you do that-" type reasoning. You could just as easily have ended up in a situation where Omega comes and tells you "I decided that if you were the sort of person who would not pay up in a Counterfactual Mugging scenario, I would give you a million dollars, but I've predicted that you would, so you get nothing." When you come up with a solution to an Omega-type problem involving some type of precommitment, it's worth asking "would this precommitment have made sense when I was in a position of not knowing Omega existed, or having any idea what it would do even if it did exist?" In real life, we sometimes have to make decisions dealing with agents who have some degree of predictive power with respect to our thought processes, but their motivations are generally not as arbitrary as those attributed to Omega in most hypotheticals.
0TheOtherDave11y
Can you give a specific example of a bandied-around solution to a decision-theory problem where predictive power is necessary in order to implement that solution? I suspect I disagree with you here; or, rather, I agree with the general principle you've articulated, but I suspect I disagree that it's especially relevant to anything local. It's difficult to be sure without specifics, though. With respect to the Counterfactual Mugging you reference in passing, for example, it seems enough to say "I ought to be the sort of person who would do whatever gets me positive expected utility"; I don't have to specifically commit to pay or not pay. Isn't it? But perhaps I've misunderstood the solution you're rejecting.
2Desrtopa11y
Well, if your decision theory tells you you ought to be the sort of person who would pay up in a Counterfactual Mugging, because that gets you positive utility, then you could end up with Omega coming and saying "I would have given you a million dollars if your decision theory said not to pay out in a counterfactual mugging, but since you would, you don't get anything." When you know nothing about Omega, I don't think there's any positive expected utility in choosing to be the sort of person who would have positive expected utility in a Counterfactual Mugging scenario, because you have no reason to suspect it's more likely than the inverted scenario where being that sort of person will get you negative utility. The probability distribution is flat, so the utilities cancel out. Say Omega comes to you with a Counterfactual Mugging on Day 1. On Day 0, would you want to be the sort of person who pays out in a Counterfactual Mugging? No, because the probabilities of it being useful or harmful cancel out. On Day 1, when given the dilemma, do you want to be the sort of person who pays out in a Counterfactual Mugging? No, because now it only costs you money and you get nothing out of it. So there's no point in time where deciding "I should be the sort of person who pays out in a Counterfactual Mugging" has positive expected utility. Reasoning this way means, of course, that you don't get the money in a situation where Omega would only pay you if it predicted you would pay up, but you do get the money in situations where Omega pays out only if you wouldn't pay out. The latter possibility seems less salient from the "before you do that-" standpoint of a person contemplating a Counterfactual Mugging, but there's no reason to assign it a lower probability before the fact. The best you can do is choose according to whatever has the highest expected utility at any given time. Omega could also come and tell me "I decided that I would steal all your money if you hit the S key...
0TheOtherDave11y
Sure, I agree. What I'm suggesting is that "I should be the sort of person who does the thing that has positive expected utility" causes me to pay out in a Counterfactual Mugging, and causes me to not pay out in a Counterfactual Antimugging, without requiring any prophecy. And that as far as I know, this is representative of the locally bandied-around solutions to decision-theory problems. Is this not true? I agree that this is not something I can sensibly protect against. I'm not actually sure I would call it a decision theory problem at all.
2Desrtopa11y
In the inversion I suggested to the Counterfactual Mugging, your payout is determined on the basis of whether you pay up in the Counterfactual Mugging. In the Counterfactual Mugging, Omega predicts whether you would pay out in the Counterfactual Mugging, and if you would, you get a 50% shot at a million dollars. In the inverted scenario, Omega predicts whether you would pay out in the Counterfactual Mugging scenario, and if you wouldn't, you get a shot at a million dollars. Being the sort of person who would pay out in a Counterfactual Mugging only brings positive expected utility if you expect the Counterfactual Mugging scenario to be more likely than the inverted Counterfactual Mugging scenario. The inverted Counterfactual Mugging scenario, like the case where Omega rewards or punishes you based on your keyboard usage, isn't exactly a decision theory problem, in that once it arises, you don't get to make a decision, but it doesn't need to be. When the question is "should I be the sort of person who pays out in a Counterfactual Mugging?" if the chance of it being helpful is balanced out by an equal chance of it being harmful, then it doesn't matter whether the situations that balance it out require you to make decisions at all, only that the expected utilities balance. If you take as a premise "Omega simply doesn't do that sort of thing, it only provides decision theory dilemmas where the results are dependent on how you would respond in this particular dilemma," then our probability distribution is no longer flat, and being the sort of person who pays out in a Counterfactual Mugging scenario becomes utility maximizing. But this isn't a premise we can take for granted. Omega is already posited as an entity which can judge your decision algorithms perfectly, and imposes dilemmas which are highly arbitrary.
0wedrifid11y
You don't need a precommitment to make the correct choice. You just make it. That does happen to include one boxing on Transparent Newcomb's (and conventional Newcomb's, for the same reason). The 'but what if someone punishes me for being the kind of person who makes this choice' is a fully general excuse to not make rational choices. The reason why it is an invalid fully general excuse is because every scenario that can be contrived to result in 'bad for you' is one in which your rewards are determined by your behavior in an entirely different game to the one in question. For example your "inverted Transparent Newcomb's" gives you a bad outcome, but not because of your choice. It isn't anything to do with a decision because you don't get to make one. It is punishing you for your behavior in a completely different game.
-1Desrtopa11y
Could you describe the Transparent Newcomb's problem to me so I'm sure we're on the same page? "What if I face a scenario that punishes me for being the sort of person who makes this choice?" is not a fully general counterargument, it only applies in cases where the expected utilities of the scenarios cancel out. If you're the sort of person who won't honor promises made under duress, and other people are sufficiently effective judges to recognize this, then you avoid people placing you under duress to extract promises from you. But supposing you're captured by enemies in a war, and they say "We could let you go if you made some promises to help out our cause when you were free, but since we can't trust you to keep them, we're going to keep you locked up and torture you to make your country want to ransom you more." This doesn't make the expected utilities of "Keep promises made under duress" vs. "Do not keep promises made under duress" cancel out, because you have an abundance of information with respect to how relatively likely these situations are.
0wedrifid11y
Take a suitable description of Newcomb's problem (you know, with Omega and boxes). Then make the boxes transparent. That is the extent of the difference. I assert that being able to see the money makes no difference to whether one should one box or two box (and also that one should one box).
-2Desrtopa11y
Well, if you know in advance that Omega is more likely to do this than to impose a dilemma where it will fill both boxes only if you two-box, then I'd agree that this is an appropriate solution. I think that if in advance you have a flat probability distribution over what sort of Omega scenarios might occur (Omega is just as likely to fill both boxes only if you would two-box in the first scenario as it is to fill both boxes only if you would one-box), then this solution doesn't make sense. In the transparent Newcomb's problem, when both boxes are filled, does it benefit you to be the sort of person who would one-box? No, because you get less money that way. If Omega is more likely to impose the transparent Newcomb's problem than its inversion, then prior to Omega foisting the problem on you, it does benefit you to be the sort of person who would one-box (and you can't change what sort of person you are mid-problem). If Omega only presents transparent Newcomb's problems of the first sort, where the box containing more money is filled only if the person presented with the boxes would one-box, then situations where a person is presented with two transparent boxes of money and picks both will never arise. People who would one-box in the transparent Newcomb's problem come out ahead. If Omega is equally likely to present transparent Newcomb's problems of the first sort, or inversions where Omega fills both boxes only for people it predicts would two-box in problems of the first sort, then two-boxers come out ahead, because they're equally likely to get the contents of the box with more money, but always get the box with less money, while the one-boxers never do. You can always contrive scenarios to reward or punish any particular decision theory. The Transparent Newcomb's Problem rewards agents which one-box in the Transparent Newcomb's Problem over agents which two-box, but unless this sort of problem is more likely to arise than ones which reward agents which...
-1wedrifid11y
No, Transparent Newcomb's, Newcomb's and Prisoner's Dilemma with full mutual knowledge don't care what the decision algorithm is. They reward agents that take one box and mutually cooperate for no other reason than they decide to make the decision that benefits them. You have presented a fully general argument for making bad choices. It can be used to reject "look both ways before crossing a road" just as well as it can be used to reject "get a million dollars by taking one box". It should be applied to neither.
0Desrtopa11y
It's not a fully general counterargument; it demands that you weigh the probabilities of potential outcomes. If you look both ways at a crosswalk, you could be hit by a falling object that you would have avoided if you hadn't paused in that location. Does that justify not looking both ways at a crosswalk? No, because the probability of something bad happening to you if you don't look both ways at the crosswalk is higher than if you do. You can always come up with absurd hypotheticals which would punish the behavior that would normally be rational in a particular situation. This doesn't justify being paralyzed with indecision; the probabilities of the absurd hypotheticals materializing are minuscule. But the possibilities of absurd hypotheticals will tend to balance out other absurd hypotheticals. Transparent Newcomb's Problem is a problem that rewards agents which one-box in Transparent Newcomb's Problem, via Omega predicting whether the agent one-boxes in Transparent Newcomb's Problem and filling the boxes accordingly. Inverted Transparent Newcomb's Problem is one that rewards agents that two-box in Transparent Newcomb's Problem, via Omega predicting whether the agent two-boxes in Transparent Newcomb's Problem and filling the boxes accordingly. If one type of situation is more likely than the other, you adjust your expected utilities accordingly, just as you adjust your expected utility of looking both ways before you cross the street because you're less likely to suffer an accident if you do than if you don't.
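This weighing can be sketched concretely, assuming the usual $1,000,000 / $1,000 payoffs (both figures are assumptions here, not from the thread) and a prior q that the scenario faced is standard Transparent Newcomb's rather than the inversion:

```python
# Expected value of a fixed disposition in Transparent Newcomb's,
# given a prior q that you face the standard problem (big box filled
# iff you would one-box) and 1 - q that you face the inversion
# described above (big box filled iff you would two-box).

BIG, SMALL = 1_000_000, 1_000

def ev(disposition, q):
    if disposition == "one-box":
        standard = BIG   # big box filled, and you take only it
        inverted = 0     # big box empty, and you take only it
    else:  # "two-box"
        standard = SMALL        # big box empty, you take both
        inverted = BIG + SMALL  # big box filled, you take both
    return q * standard + (1 - q) * inverted

print(ev("one-box", 0.5))  # 500000.0
print(ev("two-box", 0.5))  # 501000.0, ahead by exactly SMALL

# On a flat prior (q = 0.5) two-boxing wins; one-boxing pulls ahead
# only when q > 0.5 + SMALL / (2 * BIG), i.e. when the standard
# problem is at least slightly more likely than the inversion.
```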
0wedrifid11y
Yes. That isn't an 'inversion' but instead an entirely different problem in which agents are rewarded for things external to the problem.
0Desrtopa11y
There's no reason an agent you interact with in a decision problem can't respond to how it judges you would react to different decision problems. Suppose Andy and Sandy are bitter rivals, and each wants the other to be socially isolated. Andy declares that he will only cooperate in Prisoner's Dilemma type problems with people he predicts would cooperate with him, but not Sandy, while Sandy declares that she will only cooperate in Prisoner's Dilemma type problems with people she predicts would cooperate with her, but not Andy. Both are highly reliable predictors of other people's cooperation patterns. If you end up in a Prisoner's Dilemma type problem with Andy, it benefits you to be the sort of person who would cooperate with Andy but not Sandy, and vice versa if you end up in a Prisoner's Dilemma type problem with Sandy. If you might end up in a Prisoner's Dilemma type problem with either of them, you have higher expected utility if you pick one in advance to cooperate with, because both would defect against an opportunist willing to cooperate with whichever one they ended up in a Prisoner's Dilemma with first. If you want to call it that, you may, but I don't see that it makes a difference. If ending up in Transparent Newcomb's Problem is no more likely than ending up in an entirely different problem which punishes agents for one-boxing in Transparent Newcomb's Problem, then I don't see that it's advantageous to one-box in Transparent Newcomb's Problem. You can draw a line between problems determined by factors external to the problem and problems determined only by factors internal to the problem, but I don't think this is a helpful distinction to apply here. What matters is which problems are more likely to occur, and their utility payoffs. In any case, I would honestly rather not continue this discussion with you, at least if TheOtherDave is still interested in continuing it. I don't have very high expectations of productivity from a discussion...
2Vladimir_Nesov11y
(I haven't followed the discussion, so might be missing the point.) If you are actually in problem A, it's advantageous to be solving problem A, even if there is another problem B in which you could have much more likely ended up. You are in problem A by stipulation. At the point where you've landed in the hypothetical of solving problem A, discussing problem B is a wrong thing to do, it interferes with trying to understand problem A. The difficulty of telling problem A from problem B is a separate issue that's usually ruled out by hypothesis. We might discuss this issue, but that would be a problem C that shouldn't be confused with problems A and B, where by hypothesis you know that you are dealing with problems A and B. Don't fight the hypothetical.
0Desrtopa11y
In the case of Transparent Newcomb's, though, if you're actually in the problem, then you can already see either that both boxes contain money, or that one of them doesn't. If Omega only fills the second box, which contains more money, if you would one-box, then by the time you find yourself in the problem, whether you would one-box or two-box in Transparent Newcomb's has already had its payoff. If I would two-box in a situation where I see two transparent boxes which both contain money, that ensures that I won't find myself in a situation where Omega lets me pick whether to one-box or two-box, but only fills both boxes if I would one-box. On the other hand, a person who one-boxes in that situation could not find themself in a situation where they can pick one or both of two filled boxes, where Omega would only fill both boxes if they would two-box in the original scenario. So it seems to me that if I follow the principle of solving whatever situation I'm in according to maximum expected utility, then unless the Transparent Newcomb's Problem is more probable, I will become the sort of person who can't end up in Transparent Newcomb's problems with a chance to one-box for large amounts of money, but can end up in the inverted situation which rewards two-boxing, for more money. I don't have the choice of being the sort of person who gets rewarded by both scenarios, just as I don't have the choice of being someone whom both Andy and Sandy will cooperate with. I agree that a one-boxer comes out ahead in Transparent Newcomb's, but I don't think it follows that I should one-box in Transparent Newcomb's, because I don't think having a decision theory which results in better payouts in this particular decision theory problem results in higher utility in general. I think that I "should" be a person who one-boxes in Transparent Newcomb's in the same sense that I "should" be someone who doesn't type between 10:00 and 11:00 on a Sunday if I happen to be in a world where Omega has...
2Vladimir_Nesov11y
We are not discussing what to do "in general", or the algorithms of a general "I" that should or shouldn't have the property of behaving a certain way in certain problems, we are discussing what should be done in this particular problem, where we might as well assume that there is no other possible problem, and all utility in the world only comes from this one instance of this problem. The focus is on this problem only, and no role is played by the uncertainty about which problem we are solving, or by the possibility that there might be other problems. If you additionally want to avoid logical impossibility introduced by some of the possible decisions, permit a very low probability that either of the relevant outcomes can occur anyway. If you allow yourself to consider alternative situations, or other applications of the same decision algorithm, you are solving a different problem, a problem that involves tradeoffs between these situations. You need to be clear on which problem you are considering, whether it's a single isolated problem, as is usual for thought experiments, or a bigger problem. If it's a bigger problem, that needs to be prominently stipulated somewhere, or people will assume that it's otherwise and you'll talk past each other. It seems as if you currently believe that the correct solution for isolated Transparent Newcomb's is one-boxing, but the correct solution in the context of the possibility of other problems is two-boxing. Is it so? (You seem to understand "I'm in Transparent Newcomb's problem" incorrectly, which further motivates fighting the hypothetical, suggesting that for the general player that has other problems on its plate two-boxing is better, which is not so, but it's a separate issue, so let's settle the problem statement first.)
0Desrtopa11y
Yes. I don't think that the most advantageous solution for isolated Transparent Newcomb's is likely to be a very useful question, though. I don't think it's possible to have a general-case decision theory which gets the best possible results in every situation (see the Andy and Sandy example, where getting good results in one prisoner's dilemma necessitates getting bad results in the other, so any decision theory wins in at most one of the two). That being the case, I don't think that a goal of winning in Transparent Newcomb's Problem is a very meaningful one for a decision theory. The way I see it, it seems like focusing on coming out ahead in Sandy prisoner's dilemmas while disregarding the relative likelihoods of ending up in a dilemma with Andy or Sandy, and assuming that if you ended up in an Andy prisoner's dilemma you could use the same decision process to come out ahead in that too.
1wedrifid11y
Don't confuse an intuition aid that failed to help you with a personal insult. Apart from making you feel bad it'll ensure you miss the point. Hopefully Vladimir's explanation will be more successful.
-1Desrtopa11y
I didn't take it as a personal insult; I took it as a mistaken interpretation of my own argument, one which would have been very unlikely to come from someone who expected me to have reasoned through my position competently and was making a serious effort to understand it. So while it was not a personal insult, it was certainly insulting. I may be failing to understand your position, and rejecting it only due to a misunderstanding, but from where I stand, your assertion makes it appear tremendously unlikely that you understand mine. If you think that my argument generalizes to justifying any bad decision, including cases like not looking both ways when I cross the street, when I say otherwise, it would help if you would explain why you think it generalizes in this way in spite of the reasons I've given for believing otherwise, rather than simply repeating the assertion without acknowledging them; otherwise it looks like you're either not making much effort to comprehend my position, or don't care much about explaining yours, and are only interested in contradicting someone you think is wrong. Edit: I would prefer you not respond to this comment, and in any case I don't intend to respond to a response, because I don't expect this conversation to be productive, and I hate going to bed wondering how I'm going to continue tomorrow what I expect to be a fruitless conversation.
0findis11y
No, I don't, since you have a time-turner. (To be clear, non-hypothetical-me wouldn't hit non-hypothetical-you either.) I would also one-box if I thought that Omega's predictive power was evidence that it might have a time turner or some other way of affecting the past. I still don't think that's relevant when there's no reverse causality. Back to Newcomb's problem: Say that brown-haired people almost always one-box, and people with other hair colors almost always two-box. Omega predicts on the basis of hair color: both boxes are filled iff you have brown hair. I'd two-box, even though I have brown hair. It would be logically inconsistent for me to find that one of the boxes is empty, since everyone with brown hair has both boxes filled. But this could be true of any attribute Omega uses to predict. I agree that changing my decision conveys information about what is in the boxes and changes my guess of what is in the boxes... but doesn't change the boxes.
2Desrtopa11y
If the agent filling the boxes follows a consistent, predictable pattern you're outside of, you can certainly use that information to do this. In Newcomb's Problem, though, Omega follows a consistent, predictable pattern you're inside of. It's logically inconsistent for you to two-box and find they both contain money, or to pick one box and find it's empty.

Why is whether your decision actually changes the boxes important to you? If you know that picking one box will result in your receiving a million dollars, and picking two boxes will result in getting a thousand dollars, do you have any concern that overrides making the choice that you expect to make you more money? A decision process of "at all times, do whatever I expect to have the best results" will, at worst, reduce to exactly the same behavior as "at all times, do whatever I think will have a causal relationship with the best results." In some cases, such as Newcomb's problem, it has better results. What do you think the concern with causality actually does for you?

We don't always agree here on what decision theories get the best results (as you can see by observing the offshoot of this conversation between wedrifid and myself), but what we do generally agree on here is that the quality of decision theories is determined by their results. If you argue yourself into a decision theory that doesn't serve you well, you've only managed to shoot yourself in the foot.
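To make the expected-results comparison concrete, here is a minimal sketch (not from the thread) under the standard Newcomb payoffs of $1,000,000 and $1,000, with a predictor that is correct with probability p:

```python
# A minimal sketch of the expected-value comparison above, assuming the
# standard Newcomb payoffs: $1,000,000 in the opaque box iff one-boxing
# was predicted, plus a transparent box that always holds $1,000.

def expected_winnings(one_box: bool, p: float) -> float:
    """Expected payoff when the predictor is correct with probability p."""
    if one_box:
        # With probability p the predictor foresaw one-boxing and filled the box.
        return p * 1_000_000
    # With probability (1 - p) the predictor wrongly expected one-boxing,
    # so the two-boxer also collects the full opaque box.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.6, 0.9, 0.99):
    print(p, expected_winnings(True, p), expected_winnings(False, p))
# One-boxing pulls ahead once p > 1_001_000 / 2_000_000, i.e. p > 0.5005:
# any predictor even slightly better than chance favors the one-boxer.
```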
0findis11y
In the absence of my decision affecting the boxes, taking one box and leaving $1000 on the table still looks like shooting myself in the foot. (Of course if I had the ability to precommit to one-box I would -- so, okay, if Omega ever asks me this I will take one box. But if Omega asked me to make a decision after filling the boxes and before I'd made a precommitment... still two boxes.) I think I'm going to back out of this discussion until I understand decision theory a bit better.
4Desrtopa11y
Feel free. You can revisit this conversation any time you feel like it. Discussion threads never really die here; there's no community norm against replying to comments long after they're posted.

I'm Mike Johnson. I'd estimate I come across a reference to LW from trustworthy sources every couple of weeks, and after working my way through the sequences it feels like the good outweighs the bad and it's worth investing time into.

My background is in philosophy, evolution, and neural nets for market prediction; I presently write, consult, and am in an early-stage tech startup. Perhaps my high-water mark in community exposure has been a critique of the word Transhumanist at Accelerating Future. In the following years, my experience has been more mixed, but I appreciate the topics and tools being developed even if the community seems a tad insular. If I had to wear some established thinkers on my sleeve I'd choose Paul Graham, Lawrence Lessig, Steve Sailer, Gregory Cochran, Roy Baumeister, and Peter Thiel. (I originally had a comment here about having an irrational attraction toward humility, but on second thought, that might rule out Gregory "If I have seen farther than others, it's because I'm knee-deep in dwarves" Cochran… Hmm.)

Cards-on-the-table, it's my impression that

(1) Less Wrong and SIAI are doing cool things that aren't being done anywhere else (this is not faint...

5TheOtherDave12y
FWIW, I find your unvarnished thoughts, and the cogency with which you articulate them, refreshing. (The thoughts aren't especially novel, but the cogency is.) In particular, I'm interested in your thoughts on what benefits a greater focus on biologically inspired AGI might provide, and what a distaste for it might keep LW from concluding or achieving.
0johnsonmx12y
Thank you. I'd frame why I think biology matters in FAI research in terms of research applicability and toolbox dividends.

On the first reason, applicability: I think more research focus on biologically-inspired AGI makes a great deal of sense because the first AGI might be a biologically-inspired black box, and axiom-based FAI approaches may not particularly apply to such. I realize I'm (probably annoyingly) retreading old ground here with regard to which method will/should win the AGI race, but SIAI's assumptions seem to run counter to the assumptions of the greater community of AGI researchers, and it's not obvious to me the focus on math and axiology isn't a simple case of SIAI's personnel backgrounds being stacked that way. 'If all you have is a hammer,' etc. (I should reiterate that I don't have any alternatives to offer here and am grateful for all FAI research.)

The second reason I think biology matters in FAI research, toolbox dividends, might take a little bit more unpacking. (Forgive me some imprecision; this is a complex topic.) I think it's probable that anything complex enough to deserve the term AGI would have something akin to qualia/emotions, unless it was specifically designed not to. (Corollary: we don't know enough about what Chalmers calls "psychophysical laws" to design something that lacks qualia/emotions.) I think it's quite possible that an AGI's emotions, if we did not control for their effects, could produce complex feedback which would influence its behavior in unplanned ways (though ways perfectly consistent with / determined by its programming/circuitry). I'm not arguing for a ghost in the machine, just that the assumptions which allow us to ignore what an AGI 'feels' when modeling its behavior may prove to be leaky abstractions in the face of the complexity of real AGI. Axiological approaches to FAI don't seem to concern themselves with psychophysical laws (modeling what an AGI 'feels'), whereas such modeling seems a co
2TheOtherDave12y
(nods) Regarding your first point... as I understand it, SI (it no longer refers to itself as SIAI, incidentally) rejects as too dangerous to pursue any approach (biologically inspired or otherwise) that leads to a black-box AGI, because a black-box AGI will not constrain its subsequent behavior in ways that preserve the things we value except by unlikely chance. The idea is that we can get safety only by designing safety considerations into the system from the ground up; if we give up control of that design, we give up the ability to design a safe system.

Regarding your second point... there isn't any assumption that AGIs won't feel stuff, or that their feelings can be ignored. (Nor even that they are mere "feelings" rather than genuine feelings.) Granted, Yudkowsky talks here about going out of his way to ensure something like that, but he treats this as an additional design constraint that adequate engineering knowledge will enable us to implement, not as some kind of natural default or simplifying assumption. (Also, I haven't seen any indication that this essay has particularly informed SI's subsequent research. Those more closely -- which is to say, at all -- affiliated with SI might choose to correct me here.) And there certainly isn't an expectation that its behavior will be predictable at any kind of granular level.

What there is is the expectation that a FAI will be designed such that its unpredictable behaviors (including feelings, if it has feelings) will never act against its values, and such that its values won't change over time. So, maybe you're right that explicitly modeling what an AGI feels (again, no scare-quotes needed or desired) is critically important to the process of AGI design. Or maybe not. If it turns out to be, I expect that SI is as willing to approach design that way as any other. (Which should not be taken as an expression of confidence in their actual ability to design an AGI, Friendly or otherwise.) Personally, I find it unlikely
0johnsonmx12y
I definitely agree with your first paragraph (and thanks for the tip on SIAI vs SI). The only caveat is that if evolved/brain-based/black-box AGI is several orders of magnitude easier to create than an AGI with a more modular architecture where SI's safety research can apply, that's a big problem.

On the second point, what you say makes sense. Particularly: AGI feelings haven't been completely ignored at LW; if they prove important, SI doesn't have anything against incorporating them into safety research; and AGI feelings may not be material to AGI behavior anyway. However, I still do think that an ability to tell what feelings an AGI is experiencing (or, more generally, to look at any physical process and derive what emotions/qualia are associated with it) will be critical. I call this a "qualia translation function".

Leaving aside the ethical imperatives to create such a function (which I do find significant: the suffering of not-quite-good-enough-to-be-sane AGI prototypes will probably be massive as we move forward, and it behooves us to know when we're causing pain), I'm quite concerned about leaky reward-signal abstractions. I imagine a hugely complex AGI executing some hugely complex decision process. The decision code has been checked by Very Smart People and it looks solid. However, it just so happens that whenever it creates a cat it (internally, privately) feels the equivalent of an orgasm. Will that influence/leak into its behavior? Not if it's coded perfectly. However, if something of its complexity was created by humans, I think the chance of it being coded perfectly is vanishingly small. We might end up with more cats than we bargained for. Our models of the safety and stability dynamics of an AGI should probably take its emotions/qualia into account. So I think all FAI programmes really would benefit from such a "qualia translation function".
2TheOtherDave12y
I agree that, in order for me to behave ethically with respect to the AGI, I need to know whether the AGI is experiencing various morally relevant states, such as pain or fear or joy or what-have-you. And, as you say, this is also true about other physical systems besides AGIs; if monkeys or dolphins or dogs or mice or bacteria or thermostats have morally relevant states, then in order to behave ethically it's important to know that as well. (It may also be relevant for non-physical systems.) I'm a little wary of referring to those morally relevant states as "qualia" because that term gets used by so many different people in so many different ways, but I suppose labels don't matter much... we can call them that for this discussion if you wish, as long as we stay clear about what the label refers to.

Leaving that aside... so, OK. We have a complex AGI with a variety of internal structures that affect its behavior in various ways. One of those structures is such that creating a cat gives the AGI an orgasm, which it finds rewarding. It wants orgasms, and therefore it wants to create cats. Which we didn't expect. So, OK. If the AGI is designed such that it creates more cats in this situation than it ought to (regardless of our expectations), that's a problem. 100% agreed. But it's the same problem whether the root cause lies within the AGI's emotions, or its reasoning, or its qualia, or its ability to predict the results of creating cats, or its perceptions, or any other aspect of its cognition.

You seem to be arguing that it's a special problem if the failure is due to emotions or qualia or feelings? I'm not sure why. I can imagine believing that if I were overgeneralizing from my personal experience. When it comes to my own psyche, my emotions and feelings are a lot more mysterious than my surface-level reasoning, so it's easy for me to infer some kind of intrinsic mysteriousness to emotions and feelings that reasoning lacks. But I reject that overgeneralizatio
0johnsonmx12y
I don't think an AGI failing to behave in the anticipated manner due to its qualia* (orgasms during cat creation, in this case) is a special or mysterious problem, one that must be treated differently from errors in its reasoning, prediction ability, perception, or any other aspect of its cognition. On second thought, I do think it's different: it actually seems less important than errors in any of those systems. (And if an AGI is Provably Safe, it's safe; we need only worry about its qualia from an ethical perspective.)

My original comment here is (I believe) fairly mild: I do think the issue of qualia will involve a practical class of problems for FAI, and knowing how to frame and address them could benefit from more cross-pollination from more biology-focused theorists such as Chalmers and Tononi. And somewhat more boldly, a "qualia translation function" would be of use to all FAI projects.

*I share your qualms about the word, but there really are few alternatives with less baggage, unfortunately.
1TheOtherDave12y
Ah, I see. Yeah, agreed that what we are calling qualia here (not to be confused with its usage elsewhere) underlie a class of practical problems. And what you're calling a qualia translation function (which is related to what EY called a non-person predicate elsewhere, though finer-grained) is potentially useful for a number of reasons.
2Kawoomba12y
If that were the case (and it may very well be), there goes provably friendly AI, for to guarantee a property under all circumstances, it must be upheld from the bottom layer upwards.
0johnsonmx12y
I think it's possible that any leaky abstraction used in designing FAI might doom the enterprise. But if that's not true, we can use this "qualia translation function" to make leaky abstractions in an FAI context a tiny bit safer(?). E.g., if we're designing an AGI with a reward signal, my intuition is we should either (1) align our reward signal with actual pleasurable qualia (so if our abstractions leak it matters less, since the AGI is drawn to maximize what we want it to maximize anyway); or (2) implement the AGI in an architecture/substrate which produces as little emotional qualia as possible, so there's little incentive for behavior to drift. My thoughts here are terribly laden with assumptions and could be complete crap. Just thinking out loud.
0hairyfigment12y
As a layman I don't have a clear picture of how to start doing that. How would it differ from this? Looks like you can find the paper in question here (WARNING: out-of-date 2002 content).
0johnsonmx12y
I'd say nobody does! But a little less glibly, I personally think the most productive strategy in biologically-inspired AGI would be to focus on tools that help quantify the unquantified. There are substantial side-benefits to such a focus on tools: what you make can be of shorter-term practical significance, and you can test your assumptions. Chalmers and Tononi have done some interesting work, and Tononi's work has also had real-world uses. I don't see Tononi's work as immediately applicable to FAI research but I think it'll evolve into something that will apply. It's my hope that the (hypothetical, but clearly possible) "qualia translation function" I mention above could be a tool that FAI researchers could use and benefit from regardless of their particular architecture.

Hello everyone! Like many people, I come to this site via an interest in transhumanism, although it seems unlikely to me that FAI implementing CEV can actually be designed before the singularity (I can explain why, and possibly even what could be done instead, but it suddenly occurred to me that it seems presumptuous of me to criticize a theory put forward by very smart people when I only have 1 karma...).

Oddly enough, I am not interested in improving epistemic rationality right now, partially because I am already quite good at it. But more than that, I am trying to switch it off when talking to other people, for the simple reason (and I'm sure this has already been pointed out before) that if you compare three people, one who estimates the probability of an event at 110%, one who estimates it at 90%, and one who compensates for overconfidence bias and estimates it at 65%, the first two will win friends and influence people, while the third will seem indecisive (unless they are talking to other rationalists). I think I am borderline Asperger's (again, like many people here), and optimizing social skills probably takes precedence over most other things.

I am currently doing a PhD in ...

I am not interested in improving epistemic rationality right now, partially because I am already quite good at it.

But remember that it's not just your own rationality that benefits you.

it seems presumptuous of me to criticize a theory put forward by very smart people when I only have 1 karma

Presume away. Karma doesn't win arguments, arguments win karma.

0skeptical_lurker12y
Are you saying that improving epistemic rationality is important because it benefits others as well as myself? This is true, but there are many other forms of self-improvement that would also have knock-on effects that benefit others. I have actually read most of the relevant sequences; epistemic rationality really isn't low-hanging fruit for me anymore, although I wish I had known about cognitive biases years ago.
1Robert Miles12y
No, I'm saying that improving the epistemic rationality of others benefits everyone, including yourself. It's not just about improving our own rationality as individuals, it's about trying to improve the rationality of people-in-general - 'raising the sanity waterline'.
2skeptical_lurker12y
Ok, I see what you mean now. Yes, this is often true, but again, I am trying to be less preachy (at least IRL) about rationality. If someone believes in astrology, or faith healing, or reincarnation, then:

(a) their beliefs probably bring them comfort;

(b) trying to persuade them is often like banging my head against a brick wall;

(c) even the notion that there can be such a thing as a correct fact, independent of subjective mental states, is very threatening to some people, and I don't want to start pointless arguments.

So unless they are acting irrationally in a way which harms other people, or they seem capable of having a sensible discussion, or I am drunk, I tend to leave them be.
2wedrifid12y
Many here would agree with you. (And, for instance, consider a ~10% chance of success better than near certain extinction.)
0skeptical_lurker12y
I agree that a 10% chance of success is better than near zero, and furthermore I agree that expected utility maximization means that putting in a great deal of effort to achieve a positive outcome is wiser than saying "oh well, we're doomed anyway, might as well party hard and make the most of the time we have left". However, the question is: if FAI has a low probability of success, are other possibilities, e.g. tool AI, a better option to pursue?
0[anonymous]12y
Would you say that many people here (and yourself?) believe that the probable end of our species is within the next century or two?
2Nornagest12y
The last survey reported that Less Wrongers on average believe that humanity has about a 68% chance of surviving the century without a disaster killing >90% of the species. (Median 80%, though, which might be a better measure of the community feeling than the mean in this case.) That's a lower bar than actual extinction, but also a shorter timescale, so I expect the answer to your question would be in the same ballpark.
0wedrifid12y
For myself: Yes! p(extinct within 200 years) > 0.5
1John_Maxwell12y
Welcome! IMO you should definitely do it. Even if LW karma is a good indicator of good ideas, more information rarely hurts, especially on a topic as important as this.
9skeptical_lurker12y
Ok - although maybe I should stick it in its own thread? I realize much of this has been said before.

Part 1: AGI will come before FAI, because:

Complexity of algorithm design: Intuitively, FAI seems orders of magnitude more complex than AGI. If I decided to start trying to program an AGI tomorrow, I would have ideas on how to start, and maybe even make a minuscule amount of progress. Ben Goertzel even has a (somewhat optimistic) roadmap for AGI in a decade. Meanwhile, afaik FAI is still stuck at the stage of Löb's theorem. The fact that EY seems to be focusing on promoting rationality and writing (admittedly awesome) Harry Potter fanfiction seems to indicate that he doesn't currently know how to write FAI (and nor does anyone else), otherwise he would be focusing on that now; instead he is planning for the long term.

Computational complexity: CEV requires modelling (and extrapolating) every human mind on the planet, while avoiding the creation of sentient entities. While modelling might be cheaper than ~10^17 flops per human due to shortcuts, I doubt it's going to come cheap. Randomly sampling a subset of humanity to extrapolate from, at least initially, could make this problem less severe. Furthermore, this can be partially circumvented by saying that the AI follows a specific utility function while bootstrapping to enough computing power to implement CEV, but then you have the problem of allowing it to bootstrap safely. Having to prove friendliness of each step in self-improvement strikes me as something that could also be costly. Finally, I get the impression that people are considering using Solomonoff induction. It's uncomputable, and while I realize that there exist approximations, I would imagine that these would be extremely expensive for calculating anything non-trivial. Is there any reason for using SI for FAI more than AGI, e.g. something to do with provability about the program's actions?

Infeasibility of relinquishment: If you can't convince Ben Goer
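As a back-of-envelope illustration of the computational-complexity worry (a sketch using the ~10^17 flops-per-human figure above and an assumed world population of ~10^10, not figures from the thread):

```python
# Rough arithmetic for the CEV cost worry above; a sketch using the
# comment's own ~1e17 flops-per-human figure and an assumed 1e10 humans.
flops_per_human = 1e17   # flop/s to emulate one brain (figure quoted above)
population = 1e10        # assumed order of magnitude for world population

required = flops_per_human * population   # sustained flop/s for everyone at once
print(f"{required:.0e} flop/s")           # 1e+27

exaflop_machines = required / 1e18        # one exaflop = 1e18 flop/s
print(f"~{exaflop_machines:.0e} exaflop-class machines running in parallel")
# ~1e9 exaflop machines: the 'not going to come cheap' intuition in numbers,
# even before extrapolation, sampling shortcuts, or proof obligations.
```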
1shminux12y
Definitely worth its own Discussion post, once you have min karma, which should not take long.
0beoShaffer12y
They already have it.
0Swimmer963 (Miranda Dixon-Luinenburg) 12y
Welcome! Made me think of this article. Yes, you may be able, in the short run, to win friends and influence people by tricking yourself into being overconfident. But that belief is only in your head and doesn't affect the universe, and thus doesn't affect the probability of Event X happening. Which means that if, realistically, X is 65% likely to happen, then you with your overconfidence, claiming that X is bound to happen, will end up looking like a fool 35% of the time, and will make it hard for yourself to leave a line of retreat. Conclusion: in the long run, it's very good to be honest with yourself about your predictions of the future, and probably preferable to be honest with others, too, if you want to recruit their support.
5skeptical_lurker12y
Excellent points, and of course it is situation-dependent. If one makes erroneous predictions in archived forms of communication, e.g. these posts, then yes, these predictions can come back to haunt you; but often, especially in non-archived communications, people will remember the correct predictions and forget the false ones. It should go without saying that I do not intend to be overconfident on LW; if I were going to be, then the last thing I would do is announce the intention!

In a strange way, I seem to want to hold three different beliefs:

1) An accurate assessment of what will happen, for planning my own actions.

2) A confident, stopping just short of arrogant, belief in my predictions, for impressing non-rationalists.

3) An unshakeable belief in my own invincibility, so that psychosomatic effects keep me healthy.

Unfortunately, this kinda sounds like "I want to have multiple personality disorder".
4Strange712y
If you're going to go that route, at least research it first. For example: http://healthymultiplicity.com/
0skeptical_lurker12y
Thanks for the advice, but I don't actually want to have multiple personality disorder - I was just drawing an analogy.
5TheOtherDave12y
Hm. So, call -C1 the social cost of reporting a .9 confidence of something that turns out false, and -C2 the social cost of reporting a .65 confidence of something that turns out false. Call C3 the benefit of reporting .9 confidence of something true, and C4 the benefit of .65 confidence. How confident are you that (.65C3 - .35C1) < (.65C4 - .35C2)?
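For concreteness, a minimal sketch of that comparison with made-up cost and benefit values (the numbers below are hypothetical placeholders, not from the thread):

```python
# Hypothetical illustration of the inequality above; all values are made up.
# C1, C2: social costs of a wrong call at 90% vs. 65% stated confidence.
# C3, C4: social benefits of a right call at 90% vs. 65% stated confidence.

def social_ev(benefit: float, cost: float, p_true: float = 0.65) -> float:
    """Expected social payoff of a prediction that is true with p_true."""
    return p_true * benefit - (1 - p_true) * cost

C1, C3 = 10.0, 5.0   # bold 90% claim: big payoff when right, big embarrassment when wrong
C2, C4 = 3.0, 2.0    # hedged 65% claim: modest stakes either way

print("bold 90%:  ", social_ev(C3, C1))   # 0.65*5 - 0.35*10 = -0.25
print("hedged 65%:", social_ev(C4, C2))   # 0.65*2 - 0.35*3  =  0.25
# Whether boldness pays hinges entirely on how the audience prices
# confident failure (C1) against confident success (C3).
```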
1skeptical_lurker12y
In certain situations, such as sporting events which do not involve betting, my confidence that (.65C3 - .35C1) < (.65C4 - .35C2) is at most 10%. In these situations confidence is valued far more than epistemic rationality.
1Swimmer963 (Miranda Dixon-Luinenburg) 12y
I would say I'm about 75% confident that (.65C3 -.35C1) < (.65C4-.35C2)... But one of the reasons I don't even want to play that game is that I feel I am completely unqualified to estimate probabilities about that, and most other things. I would have no idea how to go about estimating the probability of, for example, the Singularity occurring before 2050...much less how to compensate for biases in my estimate. I think I also have somewhat of an ick reaction towards the concept of "tricking" people to get what you want, even if in a very subtle form. I just...like...being honest, and it's hard for me to tell if my arguments about honesty being better are rationalizations because I don't want being dishonest to be justifiable.
2Mass_Driver12y
The way to bridge that gap is to only volunteer predictions when you're quite confident, and otherwise stay quiet, change the subject, or murmur a polite assent. You're absolutely right that explicitly declaring a 65% confidence estimate will make you look indecisive -- but people aren't likely to notice that you make predictions less often than other people; they'll be too focused on how, when you do make predictions, you have an uncanny tendency to be correct... and also on how you're pleasantly modest and demure, too.
0TheOtherDave12y
(nods) That makes sense.

Hi! Given how much time I've spent reading this site and its relatives, this post is overdue.

I'm 35, male, British and London-based, with a professional background in IT. I was raised Catholic, but when I was about 12, I had a de-conversion experience while in church. I remember leaving the pew during Mass to go to the toilet, then walking back down the aisle during the Eucharist, watching the priest moving stuff around the altar. It suddenly struck me as weird that so many people had gathered to watch a man in a funny dress pour stuff from one cup to another. So I identified as atheist or humanist for a long time. I can't remember any incident that made me start to identify as a rationalist, but I've been increasingly interested in evidence, biases and knowledge for over ten years now.

I've been lucky, I think, to have some breadth in my education: I studied Physics & Philosophy as an undergrad, Computer Science as a postgrad, and more recently rounded that off with an MBA. This gives me a handy toolset for approaching new problems, I think. I definitely want to learn more statistics though - it feels like there's a big gap in the arsenal.

There are a few stand-out things I have...

[anonymous]11y130

Background:

21-year-old transgender-neither. I spent 13 years enveloped by Mormon culture and ideology, growing up in a sheltered environment. Then, everything changed when the Fire Nation attacked.

Woops. Off-track.

I want my actions to matter, not from others remembering them but from me being alive to remember them. In simpler terms, I want to live for a long time - maybe forever. Death should be a choice, not an unchanging eventuality.

But I don't know where to start; I feel overwhelmed by all the things I need to learn.

So I've come here. I'm reading the sequences and trying to get a better grasp on thinking rationally, etc., but was hoping to get pointers from the more experienced.

What is needed right now? I want to do what I can to help not only myself, but those whose paths I cross.

~Jenna

6Alicorn11y
Is this the same thing as "agender"? <3!!
1[anonymous]11y
Yes, it's the same. Transgender-neither sounds better to me, though, so I used that term. But if I find that agender is more accessible I'll switch. And yep, I'm an Avatar: The Last Airbender junkie. :)
4Nisan11y
Welcome! Have you considered signing up for cryonics?
2[anonymous]11y
Aside from the occasional X-Files episode and science fiction reading, I don't know much about cryonics. I considered it as a possibility but dislike that it means I'm 'in suspense' while the world continues on without me. I want to be an active participant! :D
3shminux11y
Certainly, but when you no longer can be, it's nice to have an option of becoming one again some day.
4EHeller11y
Option might be too strong a word. It's nice to have the vanishingly-small possibility. I think it's important for transhumanists to remind ourselves that cryonics is unlikely to actually work; it's just the only Hail Mary available.
4Error11y
I think it might be important to remind others of that too, when discussing the subject. Especially for people who are signed up but have a skeptical social circle, "this seems like the least-bad of a set of bad options" may be easier for them to swallow than "I believe I'm going to wake up one day."
2Eliezer Yudkowsky11y
Far as I can tell, the basic tech in cryonics should basically work. Storage organizations are uncertain, and so is the survival of the planet. But if we're told that the basic cryonics tech didn't work, we've learned some new fact of neuroscience unknown to present-day knowledge. Don't assign vanishingly small probabilities to things just because they sound weird, or because you'll get fewer funny looks if you can say that it's just a tiny chance. That is not how 'probability' works. Probabilities of basic cryonics tech working are questions of neuroscience, full stop; if you know the basic tech has a tiny probability of working, you must know something about current vitrification solutions or the operation of long-term memory which I do not.
7Kawoomba11y
I'd say full speed ahead, Cap'n. Basic cryonics tech working, while being a sine qua non, isn't the ultimate question for people signing up for cryonics. It's just a term in the probability calculation for the actual goal: "Will I be revived (in some form that would be recognizable to my current self as myself)?" (You've mentioned that in the parent comment, but it deserves more than a passing remark.)

And that most decidedly requires a host of complex assumptions, such as "an agent / a group of agents will have an interest in expending resources on reviving a group of frozen old-version homo sapiens, without any enhancements, me among them", "the future agents' goals cannot be served merely by reading my memory engrams, then using them as a database, without granting personhood", "there won't be so many cryo-patients at a future point (once it catches on with better tech) that thawing all of them would be infeasible, or disallowed", not to mention my favorite, "I won't be instantly integrated into some hivemind in which I lose all traces of my individuality".

What we're all hoping for, of course, is for a benevolent super-current-human agent, e.g. an FAI, to care enough about us to solve all the technical issues and grant us back our agent-hood. By construction, at least in your case, the advent of such an FAI would be after your passing (you wouldn't be frozen otherwise). That means that you (of all people) would also need to qualify the most promising scenario "there will be a friendly AI to do it" with "and it will have been successfully implemented by someone other than me".

Also, with current tech, not only would true x-risks preclude you from ever being revived; even non-x-risk catastrophic events (partial civilizatory collapse due to Malthusian dynamics etc.) could easily destroy the facility you're held in, or take away anyone's incentive to maintain it. (TW: That's not even taking into account Siam the Star Shredder.) I'm trying to avoid motivated co
6EHeller11y
I actually am signed up for cryonics. My issue with the basic tech is that liquid nitrogen, while a cheap storage method, is too cold to avoid fracturing. Experience with imaging systems leads me to believe that fractures will interfere with reconstructions of the brain's geometry, and cryoprotectants obviously destroy chemical information.

Now, it seems likely to me that at some point in the future the fracturing problem can be solved, or at least mitigated, by intermediate-temperature storage and careful cooling processes, but that won't fix the bodies frozen today. So while I don't doubt that (barring large, unquantifiable neuroscience-related uncertainty) cryonics may improve to the point where the tech is likely to work (or be supplanted by plastination methods, etc.), it is not there now, and what matters for people frozen today is the state of cryonics today. Saying there are no fundamental scientific barriers to the tech working is not the same thing as saying the hard work of engineering has been done and the tech currently works.

Edit: I also have a weak prior that the chemical information in the brain is important, but it is weak.
9Eliezer Yudkowsky11y
Since this is the key point of neuroscience, do you want to expand on it? What experience with imaging leads you to believe that fractures (of incompletely vitrified cells) will implement many-to-one mappings of molecular start states onto molecular end states in a way that overlaps between functionally relevant brain states? What chemical information is obviously destroyed and is it a type that could plausibly play a role in long-term memory?
3shminux11y
"many-to-one mappings of molecular start states onto molecular end states in a way that overlaps between functionally relevant brain states" is probably too restrictive. I would use "possibly functionally different, but subjectively acceptably close brain states".
3EHeller11y
The cryoprotectants are toxic; they will damage proteins (misfolds, etc.) and distort relative concentrations throughout the cell. This information is irretrievable once the damage is done. This is what I referred to when I said obviously destroyed chemical information. It is our hope that such information is unimportant, but my (as I said above, fairly uncertain) prior is that the synaptic protein structures are probably important. My prior is so weak because I am not an expert on biochemistry or neuroscience.

As to the physical fracture, very detailed imaging would have to be done on either side of the fracture in order to match the sides back up, and this is related to a problem I do have some experience with. I'm familiar with attempts to use synchrotron radiation to image protein structures, which have a percolation problem: you are damaging what you are trying to image while you image it. If you have lots of copies of what you want to image, this is a solvable problem, but with only one original you are going to lose information.

Edit: in regards to the first point, kalla724 makes the same point with much more relevant expertise in this thread http://lesswrong.com/r/discussion/lw/8f4/neil_degrasse_tyson_on_cryogenics/ His experience working with synapses leads him to a much stronger estimate that cryoprotectants cause irreversible damage. I may strengthen my prior a bit.
7Eliezer Yudkowsky11y
How do you know? I'm not asking for some burden of infinite proof where you have to prove that the info can't be stored elsewhere. I am asking whether you know that widely functionally different start states are being mapped onto an overlapping spread of molecularly identical end states, and if so, how. E.g., "denaturing either conformation A or conformation B will both result in denatured conformation C and the A-vs.-B distinction is just a little twist of this spatially isolated thingy here so you wouldn't expect it to be echoed in any exact nearby positions of blah" or something.
6EHeller11y
So what I'm thinking about is something like this: imagine an enzyme, present at two sites on the membrane and regulated by an inhibitor. Now a toxin comes along and breaks the weak bonds to the inhibitor, stripping them off. Information about which site was inhibited is gone. If the inhibitor has some further chemical involvement with the toxin, or if the toxin pops the enzymes off the membrane altogether, you have more problems. You might not know how many enzymes were inhibited, which sites were occupied, or which were inhibited. I could also imagine more exotic cases where a toxin induces a folding change in one protein, which allows it to accept a regulator molecule meant for a different protein. Now to figure out our system we'd need to scan at significantly smaller scales to try to discern those regulator molecules. I don't have the expertise to estimate whether this is likely. To reiterate, I am not by any means a neuroscientist (my training is physics and my work is statistics), so it's possible this sort of information just isn't that important, but my suspicion is that it is.

Edited to fix an embarrassing except/accept mistake.
7Eliezer Yudkowsky11y
(Scanning at significantly smaller scales should always be assumed to be fine as long as end states are distinguishable up to thermal noise!)

Okay, I agree that if this takes place at a temperature where molecules are still diffusing at a rapid pace, and there's no molecular sign of the broken bond at the bonding site, then it sounds like info could be permanently destroyed in this way. Now why would you think this was likely with vitrification solutions currently used? Is there an intuition here about ranges of chemical interaction so wide that many interactions are likely to occur which break such bonds, and at least one such interaction is likely to destroy functionally critical non-duplicated info? If so, should we toss out vitrification and go back to dropping the head in liquid nitrogen, because shear damage from ice freezing will produce fewer many-to-one mappings than introducing a foreign chemical into the brain?

I express some surprise, because if destructive chemical interactions were that common with each new chemical introduced, then the problem of having a whole cell not self-destruct should be computationally unsolvable for natural selection, unless the chemicals used in vitrification are unusually bad somehow.
3EHeller11y
This has some problems. Fundamentally, the length scale probed is inversely proportional to the energy required, which means increasing the resolution increases the damage done by scanning. You start getting into issues of 'how much of this can I scan before I've totally destroyed it?', which is a sort of percolation problem (how many amino acids can I randomly knock out of a protein before it collapses or rebonds into a different protein?), so scanning at resolutions with energy equivalent above peptide bonds is very problematic. Assuming a peptide bond strength of a couple kJ/mol, I get lower-limit length scales of a few microns (this is rough, and I'd appreciate it if someone would double-check).

The vitrification solutions currently used are known to be toxic, and are used at very high concentrations, so some of this sort of damage will occur. I don't know enough biochemistry to say anything else with any kind of definiteness, but on the previous thread kalla724 seemed to have some domain-specific knowledge and thought the problem would be severe.

No, not at all. The vitrification damage is orders of magnitude less. Destroying a few multi-unit proteins and removing some inhibitors seems much better than totally destroying the cell membrane (which has many of the same "which sites were these guys attached to?" problems). It's my (limited) understanding that the cell membrane exists largely to solve this problem. Also, introducing tiny bits of toxins here and there causes small amounts of damage, but the cell could probably survive. Putting the cell in a toxic environment will inevitably kill it. The concentration matters. But here I'm stepping way outside anything I know about.
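Taking up the invitation to double-check, that arithmetic spelled out (a sketch only, treating the probe as a single photon of energy E = hc/λ and using the couple-of-kJ/mol figure from the comment above):

```python
# Double-checking the length-scale estimate above; a sketch that treats
# the probe as a single photon whose energy must stay below the quoted
# peptide-bond scale of a couple of kJ/mol: lambda = h*c / E.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
N_A = 6.022e23   # Avogadro's number, 1/mol

for E_molar in (2e3, 10e3):              # 2 and 10 kJ/mol, in J/mol
    E_photon = E_molar / N_A             # energy per bond, J
    wavelength = h * c / E_photon        # m
    print(f"{E_molar / 1e3:.0f} kJ/mol -> {wavelength * 1e6:.0f} microns")

# 2 kJ/mol  -> ~60 microns; 10 kJ/mol -> ~12 microns.
# This naive photon-based estimate lands at tens of microns rather than a
# few, but either way it is far coarser than synaptic length scales.
```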
6Eliezer Yudkowsky11y
We seem to have very different assumptions here. I am assuming you can get up to the molecule and gently wave a tiny molecular probe in its direction, if required. I am not assuming that you are trying to use high-energy photons to photograph it.

You also still seem to be using a lot of functional-damage words like "destroying", which is why I don't trust your or kalla724's intuitions relative to the intuitions of other scientists with domain knowledge of neuroscience who use the language of information theory when assessing cryonic feasibility. If somebody is thinking in terms of functional damage (it doesn't restart when you reboot it; oh my gosh, we changed the conformation, look at that damage, it can't play its functional role in the cell anymore!) then their intuitions don't bear very well on the real question of many-to-one mapping. What does the vitrification solution actually do that's supposed to irreversibly map things? Does anyone actually know?

The fact that a cell can survive with a membrane at all, considering the many different molecules inside it, implies that most molecules don't functionally damage most other molecules most of the time, never mind performing irreversible mappings on them. But then this is reasoning over molecules that may be of a different type than vitrificants. At the opposite extreme, I'd expect introducing hydrochloric acid into the brain to be quite destructive.
3EHeller11y
How are you imagining this works? I'm aware of chemistry that would allow you to say there are X whatever proteins and Y such-and-such enzymes, etc., but I don't think such chemical processes are good enough for the sort of geometric reconstruction needed. It's not obvious to me that a molecular probe of the type you imagine can exist. What exactly is it measuring, and how is it sensitive to it? Is it some sort of enzyme? Do we thaw the brain and then introduce these probes in solution? Do we somehow pulp the cell and run the constituents through a nanopore-type thing and try to measure charge?

I would love to be convinced I am overly pessimistic, and pointing me in the direction of biochemists/neuroscientists/biophysicists who disagree with me would be welcome. I only know a few biophysicists, and they are generally more pessimistic than I am. I know ethylene glycol is cytotoxic, and so interacts with membrane proteins, but I don't know the mechanism.

I'll quickly point you at Drexler's Nanosystems and Freitas's Nanomedicine, though they're rather long and technical reads. But we are visualizing molecularly specified machines, and 'hell no' to thawing first or pulping the cell. Seriously, this kind of background assumption is why I have to ask a lot of questions instead of just taking this sort of skeptical intuition at face value.

But rather than having to read through either of those sources, I would ask you to just take as an assumption that two molecularly distinct (up to thermal noise) configurations will somehow be distinguishable by sufficiently advanced technology, and describe what your intuitions (and reasons) would be taking that premise at face value. It's not your job to be a physicist or to try to describe the theoretical limits of future technology, except of course that two systems physically identical up to thermal noise can be assumed to be technologically indistinguishable, and since thermal noise is much larger than exact quark positions it will not be possible to read off any subtle neural info by looking at exact quark positions (now that might be permanently impossible), etc. Aside from that I would encoura...

4EHeller11y
Do you have a page number in Nanosystems for a reference to a sensing probe? Also, this is tangential to the main discussion, so I'll take pointers to any reference you have and let this drop. I was using cytotoxic in the very specific sense of "interacts with and destabilizes the cell membrane," which does the sort of operations we agreed in principle can be irreversible. Estimates as to how important this sort of information actually is are impossible for me to make, as I lack the background. What I would love to see is someone with some domain-specific knowledge explaining why this isn't an issue.
0[anonymous]11y
Boom. http://www.nature.com/news/diamond-defects-shrink-mri-to-the-nanoscale-1.12343
0Eliezer Yudkowsky11y
Sorry, but can you again expand on this? What happens?
6EHeller11y
So I cracked open a biochem book to avoid wandering off a speculative pier, as we were moving beyond what I readily knew. A simple loss of information presented itself. Some proteins can have two states, open and closed, which operate on a hydrophobic/hydrophilic balance. In desiccated cells, or if the proteins denature for some other reason, the open/closed state will be lost. Adding cryoprotectants will change osmotic pressure, the cell will desiccate, and the open/closed state will be lost.
3Eliezer Yudkowsky11y
Do we know about any such proteins related to LTM? Can we make predictions about what it takes to erase C. elegans maze memory this way?

I would strongly predict that such changes erase only information about short-term activity, not long-term memory. Protein conformation in response to electrochemical/osmotic gradients operates on the timescale of individual firings; it's probably too flimsy to encode stable memories. These should be easy for Skynet to recover.

Higher-level patterns of firing might conceivably store information, but experience with anaesthesia, hypothermia, etc. says they do not. Or we've been killing people and replacing them all this time... a possibility which, thanks to this site, I'm prepared to consider.

Oh, and

Do you have a page number in Nanosystems for a references to a sensing probe?

Bam.

http://www.nature.com/news/diamond-defects-shrink-mri-to-the-nanoscale-1.12343

2EHeller11y
Here we have moved far past my ability to even speculate.
1lsparrish11y
Presumably you can use Google and Wikipedia to fill in the gaps just like the rest of us. Wikipedia: Long-term memory. What I worry about being confused on when reading the literature is the distinction between forming memories in the first place and actually encoding for memory. Another critical distinction is that proteins which are needed to prevent degradation of memories over time (which get lots of research and emphasis in the literature due to their role in preventing degenerative diseases) aren't necessarily the ones directly encoding for the memories.
5EHeller11y
So, in subjects I know a lot about, I have dealt with many people who pick up strange notions by filling in the gaps from Google and Wikipedia on a weak foundation. The work required to effectively figure out what specific damage to the specific proteins you mentioned could be done by desiccation of a cell is beyond my knowledge base, so I leave it to someone more knowledgeable than myself (perhaps you?) to step in. What open/closed states does PKMζ have? What regulates those open/closed states? Are the open/closed states important to its role (it looks like yes, given the notion of the inhibitor)?
0lsparrish11y
Yes, it's important to build a strong foundation before establishing firm opinions. Also, in this particular case note that science appears to have recently changed its mind based on further evidence, which goes to show that you have to be careful when reading Wikipedia. Apparently the protein in question is not so likely to underlie LTM after all, as transgenic mice lacking it still have LTM (exhibiting maze memory, LTP, etc.). The erasure of memory is linked to zeta inhibitory peptide (ZIP), which incidentally happens in the transgenic mice as well.

ETA: Apparently PKMzeta can be used to restore faded memories erased with ZIP.
3lsparrish11y
Now you know why I'm so keen on the idea of figuring out a way to get something like trehalose into the cell. Neurons tend to lose water rather than import cryoprotectants because of their myelination. Trehalose protects against dessication by cushioning proteins from hitting each other. Other kinds of solute that can get past the membrane could balance out the osmotic pressure (that's kind of the point of penetrating cryoprotectants) just as well, but I like trehalose because of its low toxicity.
2orthonormal11y
Nanotechnology, not chemical analysis. Drexler's Engines of Creation contains a section on the feasibility of repairing molecular damage in this way. Since (if our current understanding holds) nanobots can be functional on a smaller scale than proteins (which are massive chunks held together Lego-style by van der Waals forces), they can be introduced within a cell membrane to probe, report on, and repair damaged proteins.
0EHeller11y
I have not read Engines of Creation, but I have read his thesis, and I was under the impression most of the proposed systems would only work in vacuum chambers, as they would oxidize extremely rapidly in an environment like the body. Has someone worked around this problem, even in theory? Also, I've seen molecular assembler designs of various types in various speculative papers, but I've never seen a sensing apparatus. Any references?
0orthonormal11y
Later in the thread, Eliezer recommended Drexler's followup Nanosystems and Freitas' Nanomedicine, neither of which I've read, but I'd be surprised if the latter didn't address this issue. Sorry, but I in particular don't think this is a worrisome objection; it's on the same level as saying that electronics could never be helpful in the real world because water makes them malfunction. You start by showing that something works under ideal conditions, and then you find a way to waterproof it. For the convenience of later readers: someone elsewhere in the thread linked an actual physical experimental example.
2EHeller11y
Not that I have seen, but I'm only partially through it. And it's an awesome example from just a few months ago! Pushing NMR from mm resolutions down to nm resolutions is a truly incredible feat!
0Strange711y
The end states don't need to be identical, just indistinguishable.
3Eliezer Yudkowsky11y
To presume that states non-identical up to thermal noise are indistinguishable seems to presume either lower technology than the sort of thing I have in mind, or that you know something I don't about how two physical states can be non-identical up to thermal noise and yet indistinguishable.
6Nisan11y
Do you think it's at all likely that the connectome can be recovered after fracturing by "matching up" the structure on either side of the fracture?
0shminux11y
Just to be a cryo advocate here for a moment: if the information of interest is distributed rather than localized, like in a hologram (or any other Fourier-type storage), there is a chance that one can be recovered as a reasonable facsimile of the frozen person, with maybe some hazy memories (corresponding to the lowered resolution of a partial hologram). I'd still rather be revived but have trouble remembering someone's face, or how to drive a car, or how to solve the Schrödinger equation, than not be revived at all. Even some drastic personality changes would probably be acceptable, given the alternative.
2EHeller11y
Oh, sure. Or if the sort of information that gets destroyed relates to what-I-am-currently-thinking, or something similar. If I wake up and don't remember the last X minutes, or hours, big deal. But when we have to postulate certain types of storage for something to work, it should lower our probability estimates.
0TheOtherDave11y
Do you have a sense of how drastic a personality change has to be before there's someone else you'd rather be resurrected instead of drastically-changed-shminux?
0shminux11y
Not really. This would require solving the personal identity problem, which is often purported to have been solved or even dissolved, but isn't. I'm guessing that there is no actual threshold, but a fuzzy fractal boundary which heavily depends on the person in question. While one may say that if they are unable to remember the faces and names of their children and no longer able to feel the love that they felt for them, it's no longer them, and they do not want this new person to replace them, others would be reasonably OK with that. The same applies to the multitude of other memories, feelings, personality traits, mental and physical skills and whatever else you (generic you) consider essential for your identity.
0TheOtherDave11y
Yeah, I share your sense that there is no actual threshold. It's also not clear to me that individuals have any sort of specifiable boundary around what is or isn't "them", however fuzzy or fractal, so much as they have the habit of describing themselves in various ways.
5shminux11y
Is this your true objection? What potential discovery in neuroscience would cause you to abandon cryonics and actively look for other ways to preserve your identity beyond the natural human lifespan? (This is a standard question one asks a believer to determine whether the belief in question is rational -- what evidence would make you stop believing?)

Anders Sandberg, who does get the concept of sufficiently advanced technology, posts saying: "Shit, turns out LTM seems to depend really heavily on whether protein blah has conformation A or B, and the vitrification solution denatures it to C, and it's spatially isolated so there's no way we're getting the info back; it's possible something unknown embodies redundant information, but this seems really ubiquitous and basic, so the default assumption is that everyone vitrified is dead." Although, hm, in this case I'd just be like, "Okay, back to chopping off the head and dropping it in a bucket of liquid nitrogen; don't use that particular vitrification solution."

I can't think offhand of a simple discovery which would imply literally giving up on cryonics in the sense of "Just give up, you can't figure out how to freeze people, ever." I can certainly think of bad news for particular techniques, though.

2shminux11y
OK. More instrumentally, then. What evidence would make you stop paying the cryo insurance premiums with CI as the beneficiary and start looking for alternatives?
5Eliezer Yudkowsky11y
Anders publishes that, CI announces they intend to go on vitrifying patients anyway, Alcor offers a chop-off-your-head-and-dunk-in-liquid-nitro solution. Not super plausible but it's off the top of my head.
6shminux11y
No pun intended?
-2Kawoomba11y
Can you name currently available alternatives to cryonics which accomplish a similar goal? Apologies, misinterpreted the question.
6shminux11y
Not really, but yours is an uncharitable interpretation of my question, which is to evaluate the utility of spending some $100/mo on cryo vs spending it on something (anything) else, not "I have this dedicated $100/mo lying around which I can only spend toward my personal future revival".

Personally, I would be very impressed if anyone could demonstrate memory loss in a cryopreserved and then revived organism, like a bunch of C. elegans losing their maze-running memories. They're very simple, robust organisms; it's a large, crude memory; the vitrification process ought to work far better on them than on a human brain; and if their memories can't survive, that'd be huge evidence against anything sensible coming out of vitrified human brains no matter how much nanotech scanning is done (and needless to say, such scanning or emulation methods can and will be tested on a tiny worm with a small fixed set of neurons long before they can be used on anything approaching a human brain). It says a lot about how poorly funded cryonics research is that no one has done this or something similar, as far as I know.

2shminux11y
Hmm, I wonder how much has been done on figuring out the memory storage in this organism. Like, if you knock out a few neurons or maybe synapses, how much does it forget?
2gwern11y
Since it's C. elegans, I assume the answer is 'a ton of work has been done', but I'm too tired right now to go look or read more medical/biological papers.
0Eliezer Yudkowsky11y
I'm not totally sure I'd call this sufficient evidence, since functional damage != many-to-one mapping, but it would shave some points off the probability for existing tech and be a pointer to look for the exact mode of functional memory loss.
4wedrifid11y
He's kind of been working on that for a while now. (I suppose that works either as "subvert the natural human lifespan entirely through creating FAI" or "preserve his identity for time immemorial in the form of 'Harry-Stu' fanfiction" depending on how cynical one is feeling.)
2orthonormal11y
In my case, to name one contingency: if the NEMALOAD Project finds that analysis of relatively large cellular structures doesn't suffice to predict neuronal activity, and concludes that the activity of individual molecules is essential to the process, then I'd become significantly more worried about EHeller's objection and redo the cost-benefit calculation I did before signing up for cryonics. (It came out in favor, using my best-guess probability of success between 1 and 5 percent; but it wouldn't have trumped the cost at, say, 0.1%.) To name another: if the BPF shows that cryopreservation makes a hash of synaptic connections, I'd explicitly re-do the cost-benefit calculation as well.
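A minimal sketch of the kind of break-even check described above, with purely hypothetical dollar figures; only the probability thresholds (0.1%, 1%, 5%) come from the comment itself.

```python
# Back-of-envelope cost-benefit check for cryonics sign-up.
# All dollar figures are hypothetical placeholders; only the
# probabilities (0.1%, 1%, 5%) come from the comment above.

def expected_net_benefit(p_success, value_of_revival, total_cost):
    return p_success * value_of_revival - total_cost

total_cost = 100 * 12 * 40       # e.g. $100/month in premiums over 40 years
value_of_revival = 10_000_000    # assumed dollar-equivalent value of revival

for p in (0.001, 0.01, 0.05):
    ev = expected_net_benefit(p, value_of_revival, total_cost)
    print(f"p = {p:.2%}: expected net benefit = ${ev:+,.0f}")
```

With these particular placeholders the sign flips between 0.1% and 1%, matching the shape of the calculation described, though the real inputs (value of revival, discounting, organizational risk) are of course the contested part.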
3Dreaded_Anomaly11y
Have you seen the comments by kalla724 in this thread? Edit: There's some further discussion here.
2Error11y
It seems to me that they're also questions of engineering feasibility. A thing can be provably possible and yet unfeasibly difficult to implement in reality. Consider the difference between, say, adding salt to water and getting it out again. What if the difference in cost and engineering difficulty between vitrifying and successfully de-vitrifying is similar? What if it turns out to be ten orders of magnitude greater? I think the most likely failure condition for cryonics tech (as opposed to cryonics organizations) isn't going to be that revival turns out to be impossible, but that revival turns out to be so unbelievably hard or expensive that it's never feasible to actually do. If it's physically and information-theoretically allowed to revive a person, but technologically impractical (even with Sufficiently Advanced Science), then its theoretical possibility doesn't help the dead much. I have the same concern about unbounded life extension, actually; but I find success in that area more probable for some reason. (Personal disclosure: I'm not signed up for cryonics, but I don't give funny looks to people who are. Their screws seem a bit loose but they're threaded in the right direction. That's more than one can say for most of the world.)
0Izeinwinter11y
Getting aging to stop looks positively trivial in comparison - the average lifespan of different animals already varies /way/ too much for there to be any biological law underlying it. So turning senescence off altogether should be possible. I suspect evolution has not already done so because overly long-lived creatures in the wild were on average bad news for their bloodlines - banging their granddaughters and occupying turf with the cunning of the old. Uhm. Now I have an itch to set up a simulation and run it. Just-so stories are not proof. Math is proof.
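Since the comment invites it, here is what the skeleton of such a simulation might look like. Every parameter is invented, and as written (with no kin-level cost to longevity) the non-senescent type simply takes over; the interesting work would be adding the hypothesized "bad for their bloodlines" effects and seeing whether they reverse the outcome.

```python
# Toy model: senescent vs. non-senescent individuals competing for a
# fixed number of territories. Purely illustrative; not real biology.

import random

K = 200             # territories (carrying capacity)
P_DEATH = 0.10      # per-generation extrinsic mortality (predation etc.)
GENERATIONS = 300

# 'S' = senescent (dies after one breeding season);
# 'I' = immortal (subject only to extrinsic mortality).
population = ['S'] * (K // 2) + ['I'] * (K // 2)

for _ in range(GENERATIONS):
    offspring_pool = population[:]   # every territory holder breeds
    # Senescents die after breeding; immortals die only by accident.
    survivors = [x for x in population
                 if x == 'I' and random.random() > P_DEATH]
    # Vacant territories are recolonized by random offspring.
    population = survivors + random.choices(offspring_pool,
                                            k=K - len(survivors))

print("Non-senescent fraction after", GENERATIONS, "generations:",
      population.count('I') / K)
```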

My name is Itai Bar-Natan. I have been lurking here for a long time; more recently I started posting some things, but only now do I formally introduce myself.

I am in grade 11, and I began reading Less Wrong in grade 8 (introduced by Scott Aaronson's blog). I am a former math prodigy, and am currently taking one graduate-level course in math. This is the first time I am learning math under the school system (although it is not the first time I have attended math classes under the school system). Before that, I would learn from my parents, who are both mathematicians, or (later on) from books and internet articles.

Heedless of Feynman, I believe I understand quantum mechanics.

One weakness I am working to improve is my inability to write in large quantities.

I have a blog here: http://itaibn.wordpress.com/

I consider Less Wrong a fun time-waster and a community which is relatively sane.

5BerryPick611y
Are you, by any chance, related to Dror?
7itaibn011y
Yes, I am his son.
0BerryPick611y
To my eternal embarrassment, I was, as a youth, quite taken in by "The Bible Code." Very taken in, actually. That ended suddenly when someone directed me to the material written by your father and McKay (I think?). Small world, I guess? :)
4wedrifid11y
Give her to Headless Feyn-man!
0itaibn011y
Typo fixed.

I'm Robby Oliphant. I started a few months ago reading HP:MoR, which led me to the Sequences, which led me here about two weeks ago. So far I have read comments and discussions solely as a spectator. But finally, after developing my understanding and beginning on the path set forth by the sequences, I remain silent no more.

I am fresh out of high school, excited about life, and plan to become a teacher, eventually. My short-term plans involve going out and doing missionary work for my church for the next two years. When I came head-on against the problem of being a rationalist and a missionary for a theology, I took a step back and had a crisis of belief, not for the first time, but this time I followed the prescribed method and came to a modified conclusion, though I still find it rational and advantageous to serve my two-year mission.

I find some of this difficult, some of it intuitive, and some of it neither difficult nor intuitive, which is extremely frustrating: how can something appear simple yet defy my efforts to work it intuitively? I will continue to work at it because rationality seems to be praiseworthy and useful. I hope to find the best evidence about theology here. I don't mean evidence for or against, just the evidence about the subject.

3olibain11y
Hahaha! I find it heartening that that is your response to me wanting to be a teacher. I am quite aware that the system is broken. My personal way of explaining it: the school system works for what it was made to work for; avoiding responsibility for a failed product.

* The parents are not responsible; the school taught their kids.
* The students are not socially responsible; everything was compulsory, they had no choice to make.
* Teachers are not to blame; they teach what they are told to teach and have the autonomy of a pre-AI computer intelligence.
* The administrators are not to blame; they are not the students' parents or teachers.
* The faceless, nameless committees that set the curriculum are not responsible; they formed and then separated after setting forth the unavoidably terrible standards for all students of an arbitrary age everywhere.

So the product fails, but everyone did their best. No nails stick out, no one gets hammered. I have high dreams of being the educator that takes down public education. If a teacher comes up with a new way of teaching or an important thing to teach, he can go to class the next day and test it. I have a hope of professional teachers: either trusted with the autonomy of being professionals, or actual professionals in their subject, teaching only those that want to learn. Also, I am thankful for the literature on Mormons from Desrtopa, Ford and Nisan. I enjoyed the Mormonism organizational post because I have also noticed how well the church runs. It is one reason I stay a Latter-Day Saint in this time of atheism mainstreaming. The church is winning: it is well organized, service- and family-oriented, and supports me as I study rationality and education. I can give examples, but I will leave my deeper insights for my future posts; I feel I am well introduced for now.
2Bugmaster11y
I would be quite interested to see a more detailed post regarding that last part. Of course, I am just some random guy on the Internet, but still :-)
0[anonymous]11y
I'd like to know how they [=consequentialist deists stuck in religions with financial obligations] justify tithing so much of their income to an ineffective charity.
0whowhowho11y
The Education system in the US, or the education system everywhere?
-2MugaSofer11y
Can't speak for everywhere, but it's certainly not just the US. Ireland has much the same problem, although I think it's not quite as bad here.
0A1987dM11y
In Italy it's also very bad, but public opinion does have a culprit in mind (namely, politics).
-2OrphanWilde11y
I love Mormonism. Possibly because I love Thus Spoke Zarathustra, and Mormonism seems to be at least partially inspired by it.
6gwern11y
That seems rather unlikely, inasmuch as the first English translation was in 1896, by which point Smith had preached and died, and the Mormons had evacuated to Utah, begun proselytizing overseas and baptizing the dead, set up a successful state, disavowed polygamy, etc.
0OrphanWilde11y
There's also the fact that it wasn't even written until after Joseph Smith had died, translation not even being an issue. (In point of fact, Nietzsche was born the same year that Joseph Smith died.) Nonetheless! I am convinced a time traveler gave Joseph Smith the book.
2Desrtopa11y
I don't think you'll find much discussion of theology here, since in these parts religion is generally treated as an open-and-shut case. The archives of Luke Muehlhauser's blog, Common Sense Atheism, are probably a much more abundant resource for rational analysis of theology; it documents his (fairly extensive) research into theological matters stemming from his own crisis of faith, starting before he became an atheist. Obviously, the name of the site is rather a giveaway as to the ultimate conclusion that he drew (I would have named it differently in his place), and the foregone conclusion might be a bit mindkilling, but I think the contents will probably be a fair approximation of the position of most of the community here on theological matters, made more explicit than they generally are on Less Wrong.
2shminux11y
I would love to hear more details, both about the process and about the conclusion, if you are brave/foolish enough to share.
1Epiphany11y
I appreciate your altruistic spirit and your goal of gathering objective evidence regarding your religion. I'm glad to see you beginning on the path of improving your rationality! If you haven't encountered the term "effective altruist" yet, or have not yet investigated the effective altruist organizations, I very much encourage you to investigate them! As a fellow altruistic rationalist, I can say that they've been inspiring to me, and I hope they're inspiring to you as well.

I feel it necessary to inform you of something important yet unfortunate about your goal of becoming a teacher. I'm not happy to have to tell you this, but I am quite glad that somebody told you about it at the beginning of your adulthood: the school system is broken in a serious way. The problem is with the fundamental system, so it's not something teachers can compensate for.

If you wish to investigate alternatives to becoming a standard school teacher, I would highly recommend considering becoming involved with effective altruists. An organization like THINK or 80,000 Hours may be very helpful to you in determining what sorts of effective and altruistic things you might do with your skills. THINK does training for effective altruists and helps them figure out what to do with themselves. 80,000 Hours helps people figure out how to make the most altruistic contribution with careers they already have.

For information regarding religion, I recommend the blog of a former Christian (Luke Muehlhauser) as an addition to your reading list. That is here: Common Sense Atheism. I recommend this in particular because he completed the process you've started, the process of reviewing Christian beliefs, so Luke's writing may be able to save you significant time and provide you with information you may not encounter in other sources. Also, because he began as a Christian, I'm guessing that his reasoning was not unnecessarily harsh toward Christian ideas, as it might have been otherwise. Th
0Bugmaster11y
See also Lockhart's Lament (PDF link). That said, in my own case, competent teachers (such as Lockhart appears to be) did indeed make a difference. Though my IQ is much closer to the population average than that of the average LWer, so maybe my anecdotal evidence does not apply (not that it ever does, what with being anecdotal and all).
0Epiphany11y
I can't fathom that you'd say that if you had read Gatto's speech. I am very interested in the reaction you have to the speech (it's called "The Seven-Lesson Schoolteacher", and it's at the beginning of chapter 1). Would you indulge me? Also: failing to teach reasoning skills in school is a crime against humanity.
8Bugmaster11y
I have, in fact, read the Speech before, quite some time ago. My point is that outstanding teachers can make a big positive difference in the students' lives (at least, that was the case for me), largely by deliberately avoiding some or all of the anti-patterns that Gatto lists in his Speech. We were also taught the basics of critical thinking in an English class (of all places), though this could've been a fluke (or, once again, a teacher's personal initiative). I should also point out that these anti-patterns are not ubiquitous. I was lucky enough to attend a school in another country for a few of my teenage years (a long, long time ago). During a typical week, we'd learn how to solve equations in Math class, apply these skills to exercises in Statistics, stage an experiment and record the results in Physics, then program in the statistics formulae and run them on our experimental results in Informatics (a.k.a. Computer Science). Ideas tend to make more sense when connections between them are revealed. I haven't seen anything like this in US-ian education, but I wouldn't be surprised to find out that some school somewhere in the US is employing such an approach. Edited to add: I share your frustration, but there's no need to overdramatize.
7Bugmaster11y
I should also point out that, while Gatto makes some good points, his overall thesis is hopelessly lost in all the hyperbole, melodrama, and outright conspiracy theorizing. He does his own ideas a disservice by presenting them the way he does. For example, I highly doubt that mental illnesses, television broadcasts, and restaurants would all magically disappear (as Gatto claims on pg. 8) if only we could teach our children some critical thinking skills.
-1Epiphany11y
Connection between education and sanity Check out Ed DeBono's CORT thinking system. His research (I haven't thoroughly reviewed it, just reciting from memory) shows that by increasing people's lateral thinking / creativity, it decreases things like their suicide rate. If you have been taught to see more options, you're less likely to choose to behave desperately and destructively. If you're able to reason things out, you're less likely to feel stuck and need help. If you're able to analyze, you're less likely to believe something batty. Would mental illness completely disappear? I don't think so. Sometimes conditions are mostly due to genes or health issues. But there are connections, definitely, between one's ability to think and one's sanity. If you don't agree with this, then do you also criticize Eliezer's method of raising the sanity waterline by encouraging people to refine their rationality? Connection between education and indulging in passive entertainment As for television, I think he's got a point. When I was 17, I realized that I was spending most of my free time watching someone else's life. I wasn't spending my time making my own life. If the school system makes you dependent like he says (and I believe it does) then you'll be a heck of a lot less likely to take initiative and do something. If your self-confidence depends on other expert's approval, it becomes hard to take a risk and go do your own project. If your creativity and analytical abilities are reduced, so too will be your ability to imagine projects for yourself to do and guide yourself while doing them. If your love for learning and working is destroyed, why would you want to do self-directed projects in the first place? And if you aren't doing your own projects your own way, that sucks a lot of the life and pleasure out of them. Fortunately, for me, a significant amount of my creativity, analytical abilities, and a significant amount of my passion for learning and working survived scho
6Bugmaster11y
I mostly agree with the things you say, but these are not the things that Gatto says. Your position is a great deal milder than his. In a single sentence, he claims that if only we could set up our schools the way he wants them to be set up, then social services would utterly disappear, the number of "psychic invalids" would drop to zero, "commercial entertainment of all sorts" would "vanish", and restaurants would be "drastically down-sized". This is going beyound hyperbole; this borders on drastic ignorance. For example, not all mental illnesses are caused by a lack of gumption. Many, such as clinical depression and schizophrenia, are genetic in nature, and will strike their victims regardless of how awesomely rational they are. Others, such as PTSD, are caused by psychological trauma and would fell even the mighty Gatto, should he be unfortunate enough to experience it. While it's true that most of the "commercial entertainment of all sorts" is junk, some of it is art; we know this because a lot of it has survived since ancient times, despite the proclamations of people who thought just like Gatto (only referring to oil paintings, phonograph records, and plain old-fashioned writing instead of electronic media). As an English teacher, it seems like Gatto should know this. And what's his beef with restaurants, anyway ? That's just... weird. Do you feel the same way about fiction books, out of curiosity ? If Eliezer claimed that raising the sanity waterline is the one magic bullet that would usher us into a new Golden Age, as we reclaim the faded glory of our ancestors, then yes, I would disagree with him too. But, AFAIK, he doesn't claim this -- unlike Gatto.
9wedrifid11y
I'm afraid this account has swung to the opposite extreme, to the extent that it is quite possibly further from the truth and more misleading than Gatto's obvious hyperbole. Schizophrenia is one of the most genetically determined of the well-known mental health problems, but even it is heavily dependent on life experiences. In particular, long-term exposure to stressful environments or social adversity dramatically increases the risk that someone at risk for developing the condition will in fact do so. As for clinical depression, the implication that being 'genetic in nature' means that the environment in which an individual spends decades of growth and development is somehow not important is utterly absurd. Genetics is again relevant in determining how vulnerable the individual is, but the social environment is again critical for determining whether problems will arise.
2Bugmaster11y
That's a good point; I did not mean to imply that these mental illnesses are completely unaffected by environmental factors. In addition, in the case of some illnesses such as depression, there are in fact many different causes that can lead to similar symptoms, so the true picture is a lot more complex (and is still not entirely well understood). However, this is very different from saying something like "schizophrenia is completely environmental", or even "if only people had some basic critical thinking skills, they'd never become depressed", which is how I interpreted Gatto's claims. For example, even with a relatively low heritability rate, millions of people would still develop schizophrenia every year worldwide -- especially since many of the adverse life experiences that can trigger it are unavoidable. No amount of critical thinking will reduce the number of victims to zero. And that's just one specific disease among many, and we're not even getting into more severe cases such as Down's Syndrome. If Gatto thinks otherwise, then he's being hopelessly naive.
1Epiphany11y
I agree that saying "all these problems will disappear" is not the same as saying "these problems will be reduced". I felt the need to explain why the problems would be reduced because I wasn't sure you saw the connections. I have to wonder if having a really well-developed intellect might offer some amount of protection against this. Whether Gatto's intellect is sufficiently well-developed for this is another topic. I don't know. I love not cooking. Actually, yes. When I am fully motivated, I can spend all my evenings doing altruistic work for years, reading absolutely no fiction and watching absolutely no TV shows. That level of motivation is where I'm happiest, so I prefer to live that way. I do occasionally watch movies during those periods, perhaps once a month, because rest is important (and because movies take less time to watch than a book takes to read, but are higher quality than television, assuming you choose them well).
4Bugmaster11y
I see the connections, but I do not believe that some of the problems Gatto wants to fix -- f.ex. the existence of television and restaurants -- are even problems at all. Sure, TV has a lot of terrible content, and some restaurants have terrible food, but that's not the same thing as saying that the very concept of these services is hopelessly broken. It probably would, but not to any great extent. I'm not a psychiatrist or a neurobiologist though, so I could be wildly off the mark. In general, however, I think that Gatto is falling prey to the Dunning–Kruger effect when he talks about mental illness, economics, and many other things for that matter. For example, the biggest tool in his school-fixing toolbox is the free market; he believes that if only schools could compete against each other with little to no government regulation, their quality would soar. In practice, such scenarios tend to work out... poorly. That's fair, and your preferences are consistent. However, many other people see a great deal of value in fiction; some even choose to use it as a vehicle for transmitting their ideas (f.ex. HPMOR). I do admit that, in terms of raw productivity, I cannot justify spending one's time on reading fiction; if a person wanted to live a maximally efficient life, he would probably avoid any kind of entertainment altogether, fiction literature included. That said, many people find the act of reading fiction literature immensely useful (scientists and engineers included), and the same is true for other forms of entertainment such as music. I am fairly convinced that any person who says "entertainment is a waste of time" is committing a fallacy of false generalization.
0Epiphany11y
The existence of television technology isn't, in my opinion, a problem. Nor is the fact that some shows are low quality. Even if all of them were low quality, I wouldn't necessarily see that as a problem - it would still be a way of relaxing. The problem I see with television is that the average person spends 4 hours a day watching it. (Can't remember where I got that study, sorry.) My problem with that is not that they aren't exercising (they'd still have an hour a day, which is plenty of exercise if they want it) or that they aren't being productive (you can only be so productive before you run out of mental stamina anyway, and the 40-hour work week was designed to use the entirety of the average person's stamina) but that they aren't living.

It could be argued that people need to spend hours every day imagining a fantasy. I was told by an elderly person once that before television, people would sit on a hill and daydream. I've also read that imagining doing a task correctly is more effective at making you better at it than practice. If that's true, daydreaming might be a necessity for maximum effectiveness, and television might provide some kind of similar benefit. So it's possible that putting one's brain into fantasy mode for a few hours a day really is that beneficial. Spending four hours a day in fantasy mode is not possible for me (I'm too motivated to DO something) and I don't seem to need anywhere near that much daydreaming. I would find it very hard to deal with if I had spent that much of my free time in fantasy. I imagine that if asked whether they would have preferred to watch x number of shows, or spent all of that free time on getting out there and living, most people would probably choose the latter - and that's sad.

I think that people would also have to have read the seven lessons speech for the problems he sees to be solved. Maybe eventually things would evolve to the point where schools would not behave this way anymore without them reading i
6wedrifid11y
I wonder if the author would agree that that is the most important information. I suspect he would not. (So naturally, if your learning goals are different from the teaching goals of the author, then their material will not be optimized for your intentions.)
-2Epiphany11y
It seems to me that the problem is what intention one has when one begins learning, and whether one can deal with accepting the fact that they're biased, not how they go about learning. Though maybe Eliezer has put various protections in that get people questioning their intention and sell them on learning with the right intention. I would agree that if it did not occur to a person to use their knowledge of biases to look for their own mistakes, learning them could be really bad, but I do not think that learning a list of biases will all by itself turn me into an argument-wielding brain-dead zombie. If it makes you feel any better to know this, I've been seeking a checklist of errors against which I can test my ideas.
0olibain11y
Whoo! My post got the most recursion. Do I get a reward? If I get a few more layers it will be more siding than post.
0Bugmaster11y
That is one big reason behind my statement, yes. Currently, it looks like many, if not most, people -- in the Southern states, at least -- want their schools to engage in cultural indoctrination as opposed to any kind of rationality training. The voucher programs, which were designed specifically to introduce some free market into the education system, are being used to teach things like Creationism and historical revisionism. Which is not to say that public education in states like Louisiana and Texas is any better, seeing as they are implementing the same kinds of curricula by popular vote.

In fact, most private schools are religious in nature. According to this advocacy site (hardly an unbiased source, I know), around 50% are Catholic. On the plus side, student performance tends to be somewhat better (though not drastically so) in private schools, according to CAPE as well as other sources. However, private schools are also quite a bit more expensive than public schools, with tuition levels somewhere around $10K (and often higher). This means that the students who attend them have much wealthier parents, and this fact alone can account for their higher performance.

This leads me to my second point: I believe that Gatto is mistaken when he yearns for earlier, simpler times, where education was unencumbered by any regulation whatsoever, and students were free to learn (or to avoid learning) whatever they wanted. We do not live in such times anymore. Instead, we live in a world that is saturated by technology. Literacy, along with basic numeracy, is no longer a mark of high status, but an absolute requirement for daily life. Most well-paying jobs, creative pursuits, as well as even basic social interactions all rely on some form of information technology. Basic education is not a luxury, but an essential service. Are public schools adequately providing this essential service? No. However, we simply cannot afford to live in a world where access to it is gated by
0Bugmaster11y
What does "living" mean, exactly ? I understand that you find your personal creative projects highly enjoyable, and that's great. But you aren't merely saying, "I enjoy X", you're saying, "enjoying Y instead of X is objectively wrong" (if I understand you correctly). I address this point below, but I'd like to also point out that some people people's goals are different from yours. They consume entertainment because it is enjoyable, or because it facilitates social contact (which they in turn find enjoyable), not because they believe it will make them more efficient (though see below). Many people -- yourself not among them, admittedly -- find that they are able to internalize new ideas much more thoroughly if these ideas are tied into a narrative. Similarly, other people find it easier to communicate their ideas in the form of narratives; this is why Eliezer writes things like Three Worlds Collide and HPMOR instead of simply writing out the equations. This is also why he employs several tropes from fiction even in his non-fiction writing. I'm not saying that this is the "right" way to learn, or anything; I am merely describing the situation that, as I believe, exists. I am just not convinced that this statement applies to anything like a majority of "person+idea" combinations.
2Epiphany11y
"Living" the way I used it means "living to the fullest" or, a little more specifically "feeling really engaged in life" or "feeling fulfilled". I used "living" to refer to a subjective state. There's nothing objective about it, and IMO, there's nothing objectively right or wrong about having a subjective state that is (even in your own opinion) not as good as the ideal. I feel like your real challenge here is more similar to Kawoomba's concern. Am I right? Do you find it more enjoyable to passively watch entertainment than to do your own projects? Do you think most people do? If so, might that be because the fun was taken out of learning, or people's creativity was reduced to the point where doing your own project is too challenging, or people's self-confidence was made too dependent on others such that they don't feel comfortable pursuing that fulfilling sense of having done something on their own? I puzzle at how you classify watching something together as "social contact". To me, being in the same room is not a social life. Watching the same entertainment is not quality time. The social contact I yearn for involves emotional intimacy - contact with the actual person inside, not just a sense of being in the same room watching the same thing. I don't understand how that can be called social contact. I've been thinking about this and I think what might be happening is that I make my own narratives. This, I can believe about Eliezer. There are places where he could have been more incisive but is instead gets wordy to compensate. That's an interesting point. Okay, so to clarify, your position is that entertainment is a more efficient way to learn?
2Bugmaster11y
I understand that you do not feel fulfilled when watching TV, but other people might. I would agree with your reply on Kawoomba's sub-thread. For better or for worse, passive entertainment such as movies, books, TV shows, music, etc., is a large part of our popular culture. You say that being in the same room watching the same thing is not social contact. Strictly speaking this is true, but people usually discuss the things they watch (or read, or listen to, etc.) with their friends or, with the advent of the Internet, even with random strangers. The shared narratives thus facilitate the "emotional intimacy" you speak about. Furthermore, some specific works of passive entertainment, as well as generalized common tropes, make up a huge chunk of the cultural context without which it would be difficult to communicate with anyone in our culture on an emotional level (as opposed to, say, presenting mathematical proofs or engineering schematics to each other). For example, if you take a close look at various posts on this very site, you will find references to the genres of science fiction and fantasy, as well as media such as movies or anime, which the posters simply take for granted (sometimes too much so, IMO; f.ex., not everyone knows what "tsuyoku naritai" means right off the bat). A person who did not share this common social context would find it difficult to communicate with anyone here. Note, though, that once again I am describing a situation that exists, not prescribing a behavior. In terms of raw productivity per unit of time, I cannot justify any kind of entertainment at all. While it is true that entertainment has been with us since the dawn of civilization, so has cancer; just because something is old doesn't mean that it's good. No, this phrasing is too strong. I meant what I said before: many people find it easier to internalize new ideas when they are presented as part of a narrative. This does not mean that entertainment is a more efficient way to learn all things for all people, or that it is objectively the best tec
4Desrtopa11y
Why try to justify entertainment in terms of productivity per time? Is there any reason this makes more sense than, say, justifying productivity in terms of how much entertainment it allows for?
2Bugmaster11y
Presumably, if your goal is to optimize the world, or to affect any part of it besides yourself in a non-trivial way, you should strive to do so as efficiently as possible. This means that spending time on any activities that do not contribute to this goal is irrational. A paperclip maximizer, for example, wouldn't spend any time on watching soap operas or reading romance novels -- unless doing so would lead to more paperclips (which is unlikely). Of course, one could argue that consumption of passive entertainment does contribute to the average human's goals, since humans are unable to function properly without some downtime. But I don't know if I'd go so far as to claim that this is a feature, and not a bug, just like cancer or aging or whatever else evolution has saddled us with.
6Richard_Kennaway11y
A decision theory that leads to the conclusion that we should all work like slaves for a future paradise, the slightest lapse incurring a cost equivalent to untold numbers of dead babies, and the enormity of the task meaning that we shall never experience it ourselves, is prima facie a broken decision theory. I'd even call it the sort of toxic mindwaste that RationalWiki loves to mock. Once you've built that optimised world, who gets to slack off and just live in it, and how will they spend their time?
5Viliam_Bur11y
Why exactly? I mean, my intuition also tells me it's wrong... but my intuition has a few assumptions that disagree with the proposed scenario. Let's make sure the intuition does not react to a strawman. For example, when in real life people "work like slaves for a future paradise", the paradise often does not happen. Typically, the people have a wrong model of the world. (The wrong model is often provided by their leader, and their work in fact results in building their leader's personal paradise, nothing more.) And even if their model is right, their actions are more optimized for signalling effort than for real efficiency. (Working very hard signals more virtue than thinking and coming up with a smart plan to make a lot of money and pay someone else to do more work than we could.) Even with smart and honest people, there will typically be something they ignored or could not influence, such as someone powerful coming and taking the results of their work, or a conflict starting and destroying their seeds of the paradise. Or simply their internal conflicts, or lack of willpower to finish what they started. The lesson we should take from this is that even if we have a plan to work like slaves for a future paradise, there is a very high prior probability that we missed something important. Which means that in fact we do not work for a future paradise; we only mistakenly think so. I agree that the prior probability is so high that even the most convincing reasoning and plans are unlikely to outweigh it. However, for the sake of experiment, imagine that Omega comes and tells you that if you work like a slave for the next 20 or 50 years, the future paradise will happen with probability almost 1. You don't have to worry about mistakes in your plans, because either Omega verified their correctness, or is going to provide you corrections when needed and predicts that you will be able to follow those corrections successfully. Omega also predicts that if you commit t
2Richard_Kennaway11y
When Omega enters a discussion, my interest in it leaves.
1wedrifid11y
To the extent that someone is unable to use established tools of thought to focus attention on the important aspects of the problem, their contribution to a conversation is likely to be negative. This is particularly the case when it comes to decision theory, where it correlates strongly with pointless fighting of the counterfactual and muddled thinking.
1Richard_Kennaway11y
Omega has its uses and its misuses. I observe the latter on LW more often than the former. The present example is one such. And in future, if you wish to address a comment to me, I would appreciate being addressed directly, rather than with this pseudo-impersonal pomposity.
4wedrifid11y
I intended the general claim as stated. I don't know you well enough for it to be personal. I will continue to support the use of Omega (and simplified decision theory problems in general) as a useful way to think. For practical purposes pronouncements like this are best interpreted as indications that the speaker has nothing of value to say on the subject, not as indications that the speaker is too sophisticated for such childish considerations.
-2Richard_Kennaway11y
For practical purposes pronouncements like this are best interpreted as saying exactly what they say. You are, of course, free to make up whatever self-serving story you like around it.
0[anonymous]11y
This is evidently not a behavior you practice.
0Peterdjones11y
It is counterintuitive that you should slave for people you don't know, perhaps because you can't be sure you are serving their needs effectively. Even if that objection is removed by bringing in an omniscient oracle, there still seems to be a problem, because the prospect of one generation slaving to create paradise for another isn't fair. The simple version of utilitarianism being addressed here only sums individual utilities, and is blind to things that can only be defined at the group level, like justice and equality.
0A1987dM11y
For the sake of experiment, imagine that air has zero viscosity. In this scenario, would a feather and a cannon ball fall in the same time?
0Bugmaster11y
I believe the answer is "yes", but I had to think about that for a moment. I'm not sure how that's relevant to the current discussion, though. I think your real point might be closer to something like, "thought experiments are useless at best, and should thus be avoided", but I don't want to put words into anyone's mouth.
0A1987dM11y
My point was something like, “of course if you assume away all the things that cause slave labour to be bad then slave labour is no longer bad, but that observation doesn't yield much of an insight about the real world”.
0Bugmaster11y
That makes sense, but I don't think it's what Viliam_Bur was talking about. His point, as far as I could tell, was that the problem with slave labor is the coercion, not the labor itself.
4Jack11y
"Decision theory" doesn't mean the same thing as "value system" and we shouldn't conflate them.
2Peterdjones11y
Yep. A morality that leads to the conclusion that we should all work like slaves for a future paradise, the slightest lapse incurring a cost equivalent to untold numbers of dead babies, and the enormity of the task meaning that we shall never experience it ourselves, is prima facie a broken morality.
1Bugmaster11y
Why? I mean, I do agree with you personally, but I don't see why such a decision theory is objectively bad. You ask who gets to slack off and just live in the optimised world, but the answer depends entirely on your goals. These can be as relatively modest as "the world will be just like it is today, but everyone wears a party hat", or as ambitious as "the world contains as many paperclips as physically possible". In the latter case, if you asked the paperclip maximizer "who gets to slack off?", it wouldn't find the question relevant in the least. It doesn't matter who gets to do what; all that matters are the paperclips. You might argue that a paperclip-filled world would be a terrible place, and I agree, but that's just because you and I don't value paperclips as much as Clippy does. Clippy thinks your ideal world is terrible too, because it contains a bunch of useless things like "happy people in party hats", and not nearly enough paperclips. However, imagine if we ran two copies of Clippy in a grand paperclipping race: one that consumed entertainment by preference, and one that did not. The non-entertainment version would win every time. Similarly, if you want to make the world a better place (whatever that means for you), every minute you spend on doing other things is a minute wasted (unless they are explicitly included in your goals). This includes watching TV, eating, sleeping, and being dead. Some (if not all) of such activities are unavoidable, but as I said, I'm not sure whether it's a bug or a feature.
5Richard_Kennaway11y
This is proving the conclusion by assuming it. The words make a perfectly logical pattern, but I find that the picture they make is absurd. The ontology has gone wrong. Some businessman wrote a book of advice called "Never Eat Alone", the title of which means that every meal is an opportunity to have a meal with someone to network with. That is what the saying "he who would be Pope must think of nothing else" looks like in practice. Not wearing oneself out like Superman in the SMBC cartoon, driven into self-imposed slavery by memetic immune disorder. BTW, for what it's worth, I do not watch TV. And now I am imagining a chapter of that book entitled "Never Sleep Alone".

> Some businessman wrote a book of advice called "Never Eat Alone", the title of which means that every meal is an opportunity to have a meal with someone to network with. That is what the saying "he who would be Pope must think of nothing else" looks like in practice. Not wearing oneself out like Superman in the SMBC cartoon, driven into self-imposed slavery by memetic immune disorder.

Actually, I think that the world described in that SMBC cartoon is far preferable to the standard DC comics world with Superman. I do not think that doing what Superman did there is a memetic immune disorder, but rather a (successful) attempt to make the world a better place.

2Richard_Kennaway11y
You would, then, not walk away from Omelas?

I definitely wouldn't. A single tormented child seems to me like an incredibly good tradeoff for the number of very high quality lives that Omelas supports, much better than we get with real cities.

It sucks to actually be the person whose well-being is being sacrificed for everyone else, but if you're deciding from behind a veil of ignorance which society to be a part of, your expected well-being is going to be higher in Omelas.

Back when I was eleven or so, I contemplated this, and made a precommitment that if I were ever in a situation where I'm offered a chance to improve total welfare for everyone at the cost of personal torment, I should take it immediately, without giving myself any time to contemplate what I'd be getting myself into; so in that sense I've effectively volunteered myself to be the tormented child.

I don't disagree with maximally efficient altruism, just with the idea that it's sensible to judge entertainment only as an instrumental value in service of productivity.

2drnickbone11y
You're assuming here that the "veil of ignorance" gives you an exactly equal chance of being each citizen of Omelas, so that a decision under the veil reduces to average utilitarianism. However, in Rawls's formulation, you're not supposed to assume that; the veil means you're also entirely ignorant about the mechanism used to incarnate you as one of the citizens, and so must consider all probability distributions over the citizens when choosing your society. In particular, you must assign some weight to a distribution picked by a devil (or mischievous Omega) who will find the person with the very lowest utility in your choice of society and incarnate you as that person. So you wouldn't choose Omelas. This seems to be why Rawls preferred maximin decision theory under the veil of ignorance rather than expected utility decision theory.
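A toy worked comparison of the two decision rules, with entirely made-up utilities: behind a uniform-lottery veil an expected-utility chooser picks the Omelas-like society, while a maximin chooser rejects it.

```python
# Toy comparison of the two decision rules contrasted above.
# All utility numbers are made up for illustration.

N = 10_000
omelas = [-1000] + [90] * (N - 1)   # one tormented citizen, N-1 happy ones
mediocre = [50] * N                 # a uniformly mediocre society

def expected_utility(u):
    """Veil of ignorance as a uniform lottery over citizens."""
    return sum(u) / len(u)

def maximin(u):
    """Rawls: judge a society by its worst-off member."""
    return min(u)

for name, u in (("Omelas", omelas), ("Mediocre", mediocre)):
    print(f"{name:9s} expected utility = {expected_utility(u):7.2f}, "
          f"maximin = {maximin(u):6d}")
```

Expected utility favors Omelas (about 89.9 versus 50), while maximin favors the mediocre society (50 versus -1000), which is exactly the divergence described above.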
6Desrtopa11y
In that case, don't use a Rawlsian veil of ignorance; it's not the best mechanism for addressing the decision. A veil where you have an equal chance of your own child being the victim as anyone else's (assuming you're already too old to be the victim) is more the sort of situation anyone actually deciding whether or not to live in Omelas would face. Of course, I would pick Omelas even under the Rawlsian veil, since, as I've said, I'm willing to be the one who takes the hit.
0drnickbone11y
Ah, so you are considering the question "If Omelas already exists, should I choose to live there or walk away?" rather than the Rawlsian question "Should we create a society like Omelas in the first place?" The "veil of ignorance" meme nearly always refers to the Rawlsian concept, so I misunderstood you there. Incidentally, I reread the story and there seems to be no description of how the child was selected in the first place or how he/she is replaced. So it's not clear that your own child does have the same chance of being the victim as anyone else's.
6Desrtopa11y
Well, as I mentioned in another comment some time ago (not in this thread), I support both not walking away from Omelas, and also creating Omelases, unless an even more utility-efficient method of creating happy and functional societies is forthcoming. Our society rests on a lot more suffering than Omelas, not just in an incidental way (such as people within our cities who don't have housing or medical care) but directly, through channels such as economic slavery, where companies rely on workers, mainly abroad, whom they keep locked in debt and who could not leave to seek employment elsewhere even if they wanted to and other opportunities were forthcoming. I can respect a moral code that would lead people to walk out on Omelas as a form of protest that would also lead people to walk out on modern society to live on a self-sufficient seasteading colony, but I reject the notion that Omelas is worse than, or as bad as, our own society, in a morally relevant way.
0shminux11y
I cannot fathom why a comment like that would be upvoted by anyone but an unfeeling robot. This is not even the dust-specks-vs-torture case, given that Omelas is not a very large city. Imagine that it is not you, but your child you must sacrifice. Would you shrug and say "sorry, my precious girl, you must suffer until you die so that your mommy/daddy can live a happy life"? I know what I would do.
8Desrtopa11y
I hope I would have the strength to say "sorry, my precious girl, you must suffer until you die so that everyone in the city can live a happy life." Doing it just for myself and my own social circle wouldn't be a good tradeoff, but those aren't the terms of the scenario. Considering how many of our basic commodities rely on sweatshop or otherwise extremely miserable labor, we're already living off the backs of quite a lot of tormented children.
-8shminux11y
7drethelin11y
The real problem with Omelas: it totally ignores the fact that there are children suffering literally as we speak in every city on the planet. Omelas somehow managed to get it down to one child. How many other children would you sacrifice for your own?
1shminux11y
Unlike in the fictional Omelas, there is no direct dependence or direct sacrifice. Certainly it is possible to at least temporarily alleviate the suffering of others in this non-hypothetical world by sacrificing some of your fortune, but that's the difference between an active and a passive approach; there is a large gap there.
0satt11y
Related. Nornagest put their finger on this being a conflict between the consequentially compelling (optimizing for general welfare) and the psychologically compelling (not being confronted with knowledge of an individual child suffering torture because of you). I think Nornagest's also right that a fully specified Omelas scenario would almost certainly feel less compelling, which is one reason I'm not much impressed by Le Guin's story.
3Bugmaster11y
The situation is not analogous, since sacrificing one's child would presumably make most parents miserable for the rest of their days. In Omelas, however, the sacrifice makes people happy, instead.
0[anonymous]11y
And I thought that the Babyeaters only existed in Eliezer's fiction...
0Bugmaster11y
As I said in previous comments, I am genuinely not sure whether entertainment is a good terminal goal to have. By analogy, I absolutely require sleep in order to be productive at all in any capacity; but if I could swallow a magic pill that removed my need for sleep (with no other side-effects), I'd do so in a heartbeat. Sleep is an instrumental goal for me, not a terminal one. But I don't know if entertainment is like that or not. Thus, I'm really interested in hearing more about your thoughts on the topic.
0Desrtopa11y
I'm not sure that I would regard entertainment as a terminal goal, but I'm very sure I wouldn't regard productivity as one. As an instrumental goal, it's an intermediary between a lot of things that I care about, but optimizing for productivity seems like about as worthy a goal to me as paperclipping.
0Bugmaster11y
Right, agreed, but "productivity" is just a rough estimate of how quickly you're moving towards your actual goals. If entertainment is not one of them, then either it enhances your productivity in some way, or it reduces it, or it has no effect (which is unlikely, IMO). Productivity and fun aren't orthogonal; for example, it is entirely possible that if your goal is "experience as much pleasure as possible", then some amount of entertainment would directly contribute to the goal, and would thus be productive. That said, though, I can't claim that such a goal would be a good goal to have in the first place.
0Bugmaster11y
How so? Imagine that you have two identical paperclip maximizers; for simplicity's sake, let's assume that they are not capable of radical self-modification (though the results would be similar if they were). Each agent is capable of converting raw titanium to paperclips at the same rate. Agent A spends 100% of its time on making paperclips. Agent B spends 80% of its time on paperclips, and 20% of its time on watching TV. If we gave A and B two identical blocks of titanium, which agent would finish converting all of it to paperclips first? FeepingCreature addressed this better than I could in this comment. I understand that you find the idea of making paperclips (or political movements, or software, or whatever) all day every day with no breaks abhorrent, and so do I. But then, some people find polyamory abhorrent as well, and then they "polyhack" themselves and grow to enjoy it. Is entertainment your terminal value, or a mental bias? And if it is a terminal value, is it the best terminal value that you could possibly have?
1Richard_Kennaway11y
WARNING: This comment contains explicit discussion of an information hazard.

I decline to do so. What imaginary creatures would choose whose choice has been written into their definition is of no significance. (This is also a reply to the comment of FeepingCreature you referenced.) I'm more interested in the practical question of how actual human beings, which this discussion began with, can avoid the pitfall of being taken over by a utility monster they've created in their own heads.

This is a basilisk problem. Unlike Roko's, which depends on exotic decision theory, this one involves nothing more than plain utilitarianism. Unlike the standard Utility Monster scenario, this one involves no imaginary entities or hypothetical situations. You just have to look at the actual world around you through the eyes of utilitarianism. It's a very short road from the innocent-sounding "the greatest good for the greatest number" to this: There are seven billion people on this planet. How can the good you could do them possibly be outweighed by any amount of your own happiness? Just by sitting there reading LessWrong you're killing babies! Having a beer? You're drinking dead babies. Own a car? You're driving on a carpet of dead babies! Murderer! Murderer!

Add a dash of transhumanism and you can up the stakes to an obligation to bring about billions of billions of future humans throughout the universe living lives billions of times better than ours. But even Peter Singer doesn't go that far, continuing to be an academic professor and paying his utilitarian obligations by preaching utilitarianism and donating twenty percent of his salary to charity.

This is such an obvious failure mode for utilitarianism, a philosophy at least two centuries old, that surely philosophers must have addressed it. But I don't know what their responses are. Christianity has the same problem, and handles it in practice by testing the vocation of those who come to it seeking to devote their whole l
5TheOtherDave11y
Consider two humans, H1 and H2, both utilitarians. H1 looks at the world the way you describe Peter Singer here. H2 looks at the world "through the eyes of utilitarianism" as you describe it here. My expectation is that H1 will do more good in their lifetime than H2. What's your expectation?
0A1987dM11y
And then you have people like H0, who notices H2 is crazy, decides that that means that they shouldn't even try to be altruistic, and accuses H1 of hypocrisy because she's not like H2. (Exhibit A)
0Richard_Kennaway11y
That is my expectation also. However, persuading H2 of that ("but dead babies!") is likely to be a work of counselling or spiritual guidance rather than reason.
4TheOtherDave11y
Well... so, if we both expect H1 to do more good than H2, it seems that if we were to look at them through the eyes of utilitarianism, we would endorse being H1 over being H2. But you seem to be saying that H2, looking through the eyes of utilitarianism, endorses being H2 over being H1. I am therefore deeply confused by your model of what's going on here.
0Richard_Kennaway11y
Oh yes, H1 is more effective, heathier, saner, more rational, etc. than H2. H2 is experiencing existential panic and cannot relinquish his death-grip on the idea.
4TheOtherDave11y
You confuse me further with every post. Do you think being a utilitarian makes someone less effective, healthy, sane, rational etc.? Or do you think H2 has these various traits independent of them being a utilitarian?
2whowhowho11y
There's a lot of different kinds of utilitarian.
0Richard_Kennaway11y
WARNING: More discussion of a basilisk, with a link to a real-world example.

It's a possible failure mode of utilitarianism. Some people succumb to it (see George Price for an actual example of a similar failure) and some don't. I don't understand your confusion, and this pair of questions just seems misconceived.
2TheOtherDave11y
(shrug) OK. I certainly agree with you that some utilitarians suffer from the existential panic and inability to relinquish their death-grips on unhealthy ideas, while others don't. I'm tapping out here.
2whowhowho11y
One could reason that one is better placed to do good effectively when focussing on oneself, one's family, one's community, etc., simply because one understands them better.
0A1987dM11y
(Warning: replying to discussion of a potential information hazard.) Gung'f na rknttrengvba (tvira gung ng gung cbvag lbh unqa'g nqqrq zragvbarq genafuhznavfz lrg) -- nf bs abj, vg'f rfgvzngrq gb gnxr zber guna gjb gubhfnaq qbyynef gb fnir bar puvyq'f yvsr jvgu Tvirjryy'f gbc-engrq punevgl. (Be vf ryrpgevpvgl naq orre zhpu zber rkcrafvir jurer lbh'er sebz?)
0Eliezer Yudkowsky11y
Infohazard reference with no warning sign. Edit and reply to this so I can restore.
2Richard_Kennaway11y
Done. Sorry this took so long, I've been taken mostly offline by a biohazard for the last week.
0Bugmaster11y
Are you saying that human choices are not "written into their definition" in some measure? Also, keep in mind that a goal like "make more paperclips" does leave a lot of room for other choices. The agent could spend its time studying metallurgy, or buying existing paperclip factories, or experimenting with alloys, or attempting to invent nanotechnology, or some combination of these and many more activities. It's not constrained to just a single path. On the one hand, I do agree with you, and I can't wait to see your proposed solution. On the other hand, I'm not sure what this has to do with the topic. I wasn't talking about billions of future humans or anything of the sort, merely about a single (semi-hypothetical) human and his goals; whether entertainment is a terminal or instrumental goal; and whether it is a good goal to have. Let me put it in a different way: if you could take a magic pill which would remove (or, at the very least, greatly reduce) your desire for passive entertainment, would you do it? People with extremely low preferences for passive entertainment do exist, after all, so this scenario isn't entirely fantastic (other than for the magic pill part, of course).
0whowhowho11y
What is written into humans by evolution is hardly relevant. The point is that you can't prove anything about humans by drawing a comparison with imaginary creatures that have had something potentially quite different written into them by their creator.
0Richard_Kennaway11y
I have no idea what that even means. My only solution is "don't do that then". It's a broken thought process, and my interest in it ends with that recognition. Am I a soul doctor? I am not. I seem to be naturally resistant to that failure, but I don't know how to fix anyone who isn't.

What desire for passive entertainment? For that matter, what is this "passive entertainment"? I am not getting a clear idea of what we are talking about. At any rate, I can't imagine "entertainment" in the ordinary meaning of that word being a terminal goal. FWIW, I do not watch television, and have never attended spectator sports.

Quite.
0Bugmaster11y
To rephrase: do you believe that all choices made by humans are completely under the humans' conscious control? If not, what proportion of our choices is under our control, and what proportion is written into our genes and is thus difficult, if not impossible, to change (given our present level of technology)?

You objected to my using Clippy as an analogy to human behaviour, on the grounds that Clippy's choices are "written into its definition". My point is that (a) Clippy is free to make whatever choices it wants, as long as it believes (correctly or erroneously) such choices would lead to more paperclips; (b) we humans operate in a similar way, only we care about things other than paperclips; and therefore (c) Clippy is a valid analogy.

Don't do what? Do you have a moral theory which works better than utilitarianism/consequentialism?

You don't watch TV or attend sports, but do you read any fiction books? Listen to music? Look at paintings or sculptures (on your own initiative, that is, and not as part of a job)? Enjoy listening to some small subclass of jokes? Watch any movies? Play video games? Stare at a fire at night? I'm just trying to pinpoint your general level of interest in entertainment.

Just because you personally can't imagine something, doesn't mean it's not true. For example, art and music -- both of which are forms of passive entertainment -- have been a part of human history ever since the caveman days, and continue to flourish today. There may be something hardcoded in our genes (maybe not yours personally, but on average) that makes us enjoy art and music. On the other hand, there are lots of things hardcoded in our genes that we'd be better off without...
0Richard_Kennaway11y
The whole language is wrong here. What does it mean to talk about a choice being "completely under the humans' conscious control"? Obviously, the causal connections wind through and through all manner of things that are outside consciousness as well as inside. When could you ever say that a decision is "completely under conscious control"? Then you talk as if a decision not "completely under conscious control" must be "written into the genes". Where does that come from?

Why do you specify fiction? Is fiction "passive entertainment" but non-fiction something else?

What is this "us" that is separate from and acted upon by our genes? Mentalistic dualism? Don't crash and burn.

I have no moral theory and am not impressed by anything on offer from the philosophers.

To sum up, there's a large and complex set of assumptions behind everything you're saying here that I don't think I share, but I can only guess at from glimpsing the shadowy outlines. I doubt further discussion will get anywhere useful.
0whowhowho11y
I think Bugmaster is equating being "written in" in the sense of a stipulation in a thought experiment with being "written in" in the sense of being the outcome of an evolutionary process.
-1Richard_Kennaway11y
If he is, he shouldn't. These are completely different concepts.
0whowhowho11y
That has no relevance to morality. Morality is not winning, is not efficiently fulfilling an arbitrary UF.
2IlyaShpitser11y
This decision theory is bad because it fails the "Scientology test."
5FeepingCreature11y
That's hardly objective. The challenge is to formalize that test. Btw: the problem you're having is not due to any decision theory but due to the goal system. You want there to be entertainment and fun and the like. However, the postulated agent had a primary goal that did not include entertainment and fun. This seems alien to us, but for the mindset of such an agent "eschew entertainment and fun" is the correct and sane behavior.
0Bugmaster11y
Exactly, though see my comment on a sibling thread. Out of curiosity though, what is the "Scientology test"? Is that some commonly-accepted term from the Less Wrong jargon? Presumably it doesn't involve poorly calibrated galvanic skin response meters... :-/
4FeepingCreature11y
Not the commenter, but I think it's just "it makes you do crazy things, like Scientologists". It's not a standard LW thing.
0A1987dM11y
Optimize it for what?
2Bugmaster11y
That is kind of up to you. That's the problem with terminal goals...
0A1987dM11y
Music is only passive entertainment if you just listen to it, not if you sing it, play it, or dance to it.

I agree that people spend lots of time talking about these kinds of things, and that the more shared topics of conversation you have with someone the easier it is to socialize with them, but I disagree that there are few non-technical things one can talk about other than what you get from passive entertainment. I seldom watch TV/films/sports, but I have plenty of non-technical things I can talk about with people -- parties we've been to, people we know, places we've visited, our tastes in food and drinks, unusual stuff that happened to us, what we've been doing lately, our plans for the near future, ranting about politics, conspiracy theories, the freakin' weather, whatever -- and I'd consider talking about some of these topics to build more ‘emotional intimacy’ than talking about some Hollywood movie or the Champions League or similar. (Also, I take exception to the apparent implication of the parenthetical at the end of the paragraph -- it is possible to entertain people by talking about STEM topics, if you're sufficiently Feynman-esque about that.)

I have read very little of that kind of fiction, and still I haven't felt excluded by that in the slightest (well, except that one time when the latest HPMOR thread clogged up the top Discussion comments of the week when I hadn't read HPMOR yet, and the occasional Discussion threads about MLP -- but that's a small minority of the time).
0Bugmaster11y
This article, courtesy of the recent Seq Rerun, seems serendipitous: http://lesswrong.com/lw/yf/moral_truth_in_fiction/
-1Kawoomba11y
What's wrong with live and let live (for their notion of 'living')? You can value "DO"ing something (apparently not counting daydreaming) over other activities for yourself, that's your prerogative, but why do you get to say who is and isn't "living"?
4Epiphany11y
That was addressed here: It's not that I want to tell them whether they're "really living", it's that I think they don't think spending so much of their free time on TV is "really living". Now, if you want to disagree with me on whether they think they are "really living", that might be really interesting. I acknowledge that mind projection fallacy might be causing me to think they want what I want.
4taelor11y
I suspect that many people who enjoy television, if asked, would claim that socializing with friends or other things are somehow better or more pure, but only because TV is a low status medium, and so saying that watching TV isn't "real living" has become somewhat of a cached thought within our culture; I suspect you'd have a much harder time finding people who will claim that spending time enjoying art or reading classic literature or other higher status fictional media doesn't count as "real living".
2Nornagest11y
I think I might actually expect people to endorse different activities in this context at different levels of abstraction. That is, if you asked J. Random TV Consumer to rank (say) TV and socialization, or study, or some other venue for self-improvement, I wouldn't be too surprised if they consistently picked the latter. But if you broke down these categories into specific tasks, I'd expect individual shows to rate more highly -- in some cases much more highly -- than implied by the category rating. I'm not sure what this implies about true preferences.
0Epiphany11y
I think I need an example of this to understand your point here.
2Nornagest11y
Well, for example, I wouldn't be too surprised to find the same person saying both "I'd rather socialize than watch TV" and "I'd rather watch Game of Thrones [or other popular TV show] than call my friend for dinner tonight". Of course that's just one specialization, and the plausibility of a particular scenario depends on personality and relative appeal.
0MugaSofer11y
Offtopic: Does anyone know where you can find that speech in regular HTML format? I definitely read it in that format, but I can't find it again.

Ontopic: While I appreciate (and agree with) the point he's making, overall, he uses a lot of exaggeration and hyperbole, at best. It seems pretty clear that specific teachers can make a difference to individuals, even if they can't enact structural change. Also: What do you mean by "crime against humanity"?
0Bugmaster11y
I could've sworn that I saw his entire book in HTML format somewhere, a long time ago, but now I can't find it. Perhaps I only imagined it. From what I recall, in the later chapters he claims that our current educational system was deliberately designed in meticulous detail by a shadowy conspiracy of statists bent on world (or, at the very least, national) domination. Again, my recollection could be wildly off the mark, but I do seem to remember staring at my screen and thinking, "Really, Gatto? Really?"
2Nornagest11y
I read Dumbing Us Down, which might not be the book you're thinking of -- if memory serves, he's written a few -- but I don't remember him ever quite going with the conspiracy theory angle. He skirts the edges of it pretty closely, granted. In the context of history of education, his thesis is basically that the American educational system is an offshoot of the Prussian system and that that system was picked because it prioritizes obedience to authority. Even if we take that all at face value, though, it doesn't require a conspiracy -- just a bunch of 19th- and early 20th-century social reformers with a fondness for one of the more authoritarian regimes of the day, openly doing their jobs.

Now, while it's pretty well documented that Horace Mann and some of his intellectual heirs had the Prussian system in mind, I've never seen historical documentation giving exactly those reasons for choosing it. And in any case the systems diverged in the mid-1800s and we'd need to account for subsequent changes before stringing up the present-day American school system on those charges. But at its core it's a pretty plausible hypothesis -- many of the features that after two World Wars make the Prussians look kind of questionable to us were, at the time, being held up as models of national organization, and a lot of that did have to do with regimentation of various kinds.
-2MugaSofer11y
Speaking as a rationalist and a Christian, I've always found that a bit too propaganda-ish for my tastes. And I wouldn't call Luke's journey "completed", exactly. Still, it can be valuable to see what others have thought in similar positions to you, in a shoulders-of-giants sort of way. I think it would be better to focus on improving your rationality, rather than seeking out tracts that disagree with you. There's nothing wrong with reading such tracts, as long as you're rational enough not to internalize mistakes from them (on either side), but I wouldn't make it your main goal.
0Bugmaster11y
What does "evidence about X" mean, as opposed to "evidence for X" ?

My interpretation is "evidence that was not obtained in the service of a particular bottom line."

4Desrtopa11y
I'd interpret it as "evidence which bears on the question X" as opposed to "Evidence which supports answer Y to question X." For instance, if you wanted to know whether anthropogenic climate change was occurring, you would want to search for "evidence about anthropogenic climate change" rather than "evidence for anthropogenic climate change."
0Bugmaster11y
Fair enough, that makes sense. I guess I just wasn't used to seeing this verbal construct before.
0[anonymous]11y
The former means that |log(P(E|X)/P(E|~X))| is non-negligible; the latter means that the log-likelihood ratio is positive.
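A minimal sketch of that distinction in code; the probabilities are purely hypothetical, for illustration:

```typescript
// Log-likelihood ratio log(P(E|X) / P(E|~X)) of evidence E for hypothesis X.
// "Evidence about X": the magnitude is non-negligible (E moves your belief).
// "Evidence for X": the sign is positive (E moves your belief toward X).
function logLikelihoodRatio(pEGivenX: number, pEGivenNotX: number): number {
  return Math.log(pEGivenX / pEGivenNotX);
}

// Hypothetical numbers: an observation three times likelier under X than
// under ~X is evidence for X; the reverse is still evidence *about* X.
console.log(logLikelihoodRatio(0.6, 0.2)); // ~ +1.10: evidence for X
console.log(logLikelihoodRatio(0.2, 0.6)); // ~ -1.10: evidence about X, against it
```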
0Ford11y
You may find this story (a scientist dealing with evidence that conflicts with his religion) interesting. http://www.exmormonscholarstestify.org/simon-southerton.html
-1Nisan11y
Sam Bhagwat has served a mission and has posted here about how to emulate the Latter-day Saints' approach to community-building.

I'm here to make one public prediction that I want to be as widely-read as possible. I'm here to predict publicly that the apparent increase in autism prevalence is over. It's important to predict it because it distinguishes between the position that autism is increasing unstoppably for no known reason (or because of vaccines) and the position that autism has not increased in prevalence, but diagnosis has increased in accuracy and a greater percentage of people with autism spectrum disorders are being diagnosed.

It's important that this be as widely-read as possible as soon as possible because the next time prevalence estimates come out, I will be shown right or wrong. I want my theory and prediction out there now so that I can show that I predicted a surprising result before it happened. While many people are too irrational to be surprised when they see this result even though they have predicted the opposite, I hope that rationalists will come to believe my position when it is proven right. I hope that everyone disinterested will come to believe this.

The reason I hope this is that I want them to be more likely to listen to me when I make statements about human rights as they apply to people with autism spectrum disorders. It is important that society change its attitudes toward such individuals.

Please help me by upvoting me to two karma so I can post in the discussion section.

7AdeleneDawner11y
I'm not sure you're right that we won't see any increase in autism prevalence - there are still some groups (girls, racial minorities, poor people) that are "underserved" when it comes to diagnosis, so we could see an increase if that changes, even if your underlying theory is correct. Still upvoted, tho.
0OneLonePrediction11y
Thank you. Yes, this is possible, but the increase in those groups would end up exactly matching the decrease in adult rates from learning coping skills so well as to be undiagnosable and that seems unlikely to me. Why shouldn't one be vastly more or less? Anyway, I'm going to make the article now. If you want to continue this, we can do it there.

I saw this site on evand's computer one day, so of course I then had to look it up for myself. In my free time, I pester him with LW-y questions.

By way of background, I graduated from a trying-to-be-progressive-but-sort-of-hung-up-on-orthodoxy quasi-Protestant seminary in spring 2010. Primary discernible effects of this schooling (i.e., I would assign these a high probability of relevance on LW) include:

  • deeply suspicious of pretty much everything

  • a predisposition to enter a Hulk-smash rage at the faintest whiff of systematic injustice or oppression

  • high value on beauty, imagination*, and inclusivity

* Part of my motivation to involve myself in rationalism is a hope that I can learn ways to imagine better (more usefully, maybe).

I like learning more about how brains work (/don't work). Also about communities. Also about things like why people say and do what they say and do, both in terms of conditioning/unconscious motivation and conscious decision. And and and. I will start keeping track on a wiki page perhaps.

I cherish ambitions of being able to contribute to a discussion one day! (If anyone has any ideas/relevant information about getting over not wanting to look stupid, please do share ...)

Hi!

2[anonymous]11y
Welcome! You sound like just our type. Glad to have you with us. Lurk, read the archives, brazenly post things you are quite sure of. Remember that downvotes don't mean we hate you. I dunno. I only get the fear after I post so it's not a problem for me.
1Epiphany11y
Don't worry, you can't possibly look worse than I did. I wanted to be around people who can point out my flaws and argue with me effectively and tell me things I didn't know. I wanted to be held to higher standards, to actually have to work hard to earn respect. I'm not getting that in other areas of my life. Here, I get it. (: I am so grateful that I found this. People will challenge you and make you work, and find your flaws, but that's a blessing. Embrace it.

Hello.

I was raised by a rationalist economist. At some point I got the idea that I wanted to be a statistical outlier, and also that irrationality was the outlier. After starting to pay attention to current events and polls, I'm now pretty sure that the second premise is incorrect.

I still have many thought patterns from that period that I find difficult to overcome. I try to counter them in the more important decisions by assigning WAG numerical values and working through equations to find a weighted output. I read more non-fiction than fiction now, and I am working with a mental health professional to overcome some of those patterns. I suppose I consider myself to have a good rationalist grounding while being used to completely ignoring it in my everyday life.

I found Less Wrong through FreethoughtBlogs and "Harry Potter and the Methods of Rationality." I added it to my feed reader and have been forcing my economist to help me work through some of the more science-of-choice oriented posts.

2A1987dM12y
??? The only expansion of that I can find with Google (Wifes And Girlfriends [of footballers]) doesn't seem too relevant.

Wild Ass Guess.

4DaFranker12y
Was that just meta, or did you already know it? In what fields would the saying be more common, out of curiosity?
7evand12y
It's reasonably common among engineers in my experience. Along with SWAG -- scientific wild-assed guess, intended to denote something that has minimal support -- an estimate that is the output of combining WAGs and actual data, for example.
3Davidmanheim12y
He may not have known it, but it's used. I worked in Catastrophe Risk modeling, and it was a term that applied to what our clients and competitors did; not ourselves, we had rigorous methodologies that were not discussed because they were "trade secrets," or, as I came to understand, what is referred to below as SWAG. I have heard engineers use it as well.

Hi. 18 years old. Typical demographics. 26.5-month lurker and well-read of the Sequences. Highly motivated/ambitious procrastinator/perfectionist with task-completion problems and analysis paralysis that has caused me to put off this comment for a long time. Quite non-optimal to do so, but... must fight that nasty sunk cost of time and stop being intimidated and fearing criticism. Brevity to assure it is completed - small steps on a longer journey. Hopefully writing this is enough of an anchor. Will write more in future time of course.

Finally. It is written. So many choices... so many thoughts, ideas, plans to express... No! It is done! Another time you silly brain! We must choose futures! We will improve, brain, I promise.

I look forward to at last becoming an active member of this community, and LEVELING UP! Tsuyoku naritai!

I’m Taylor Smith. I’ve been lurking since early 2011. I recently finished a bachelor’s in philosophy but got sort of fed up with it near the end. Discovering the article on belief in belief is what first hooked me on LessWrong, as I’d already had to independently invent this idea to explain a lot of the silly things people around me seemed to be espousing without it actually affecting their behavior. I then devoured the Sequences. Finding LessWrong was like finding all the students and teachers I had hoped to have in the course of a philosophy degree, all in one place. It was like a light switching on. And it made me realize how little I’d actually learned thus far. I’m so grateful for this place.

Now I’m an artist – a writer and a musician.

A frequently-confirmed observation of mine is that art – be it a great sci-fi novel, a protest song, an anti-war film – works as a hack to help change the minds of people who are resistant or unaccustomed to pure rational argument. This is true especially of ethical issues; works which go for the emotional gut-punch somehow make people change their minds. (I think there are a lot of overlapping reasons for this phenomenon, but one certainly is that... (read more)

0John_Maxwell11y
I think art that spreads the "politics is the mind-killer" meme (which actually seems to be fairly novel outside LW: 1 2) could be a good use of art. Some existential risks, like nuclear weapons, seem likely to be controlled by world governments. The other day it occurred to me that world leaders are people too and are likely susceptible to the same biases as typical folk. If world leaders were less "Go us!" and more "Go humanity!", that could be Really Good. Welcome to LW, by the way!

Hello all, My name is Benjamin Martens, a 19-year-old student from Newcastle, Australia. Michael Anissimov, director of Humanity+, added me to the Less Wrong Facebook group. I don’t know his reasons for adding me, but regardless I am glad that he did.

My interest in rational thinking, and in conscious thinking in general, stems, first, from the consequences of my apostasy from Christianity, which is my family’s faith; second, from my combative approach to my major depression, which I have (mostly) successfully beaten into submission through an analysis of some of the possible states of the mind and of the world— Less Wrong and the study of cognitive biases will, I hope, further aid me in revealing my depressive worldview as groundless; or, if not as groundless, then at least as something which is not by nature aberrant and which is, to some degree, justified; third, and in connection to my vegan lifestyle, I aim to understand the psychology which might lead a person to cause another being to suffer; and last, and in connection to all aforementioned, it is my hope that an understanding of cognitive biases will allow not merely myself to edge nearer to the true state of things, but al... (read more)

[-][anonymous]11y120

I'm Nancy Hua. I was MIT 2007 and worked in NYC and Chicago in automated trading for 5 years after graduating with BS's in Math with CS (18C) and in Writing (21W).

Currently I am working on a startup in the technology space. We have funding and I am considering hiring someone.

I started reading Eliezer's posts on Overcoming Bias. In 2011, I met Eliezer, Robin Hanson, and a bunch of the NYC Lesswrongers. After years of passive consumption, very recently I started posting on lesswrong after meeting some lesswrongers at the 2012 Singularity Summit and events leading up to it, and after reading HPMOR and wanting to talk about it. I tried getting my normal friends to read it but found that making new friends who have already read it is more efficient.

Many of the writings regarding overcoming our biases and asking more questions appeal to me because I see many places where we could make better decisions. It's amazing how far we've come without being all that intelligent or deliberate, but I wonder how much more slack we have before our bad decisions prevent us from reaching the stars. I want to make more optimal decisions in my own life because I need every edge I can get to achieve some of my goals! Plus I believe understanding and accepting reality is important to our success, as individuals and as a species.

Poll: how old are you?

Newcomers only, please.

How polls work: the comments to this post are the possible answers. Upvote the one that describes your age. Then downvote the "Karma sink" comment (if you don't see it, it is the collapsed one), so that I don't get undeserved karma. Do not make comments to this post, as it would make the poll options hard to find; use the "Discussion" comment instead.

5AllanGering12y
30-44
3AllanGering12y
<18
2AllanGering12y
45 or older
0AllanGering12y
Discussion
2VNKKET12y
Upvoted for explaining how polls work.
-33AllanGering12y
[-][anonymous]11y110

Hi. I discovered LessWrong recently, but not that recently. I enjoy Yudkowsky's writings and the discussions here. I hope to contribute something useful to LessWrong, someday, but as of right now my insights are a few levels below those of others in this community. I plan on regularly visiting the LessWrong Study Hall.

Also, is it "LessWrong" or "Less Wrong"?

7Kawoomba11y
You'll fit in great.
3TheOtherDave11y
I endorse "Less Wrong" as a standalone phrase but "LessWrong" as an affixed phrase (e.g., "LessWrongian").
1A1987dM11y
Good question... :-)
1A1987dM11y
The front page and the About page consistently use the one with the space... except in the logo. Therefore I'm going to conclude that the change in typeface colour in the logo counts as a space and the ‘official’ name is the spaced one.
2[anonymous]11y
I went through the same reasoning pattern as you right before reading this comment. So I think I'll stick with "Less Wrong", for the time being.
0beoShaffer11y
Either is acceptable, though I'd say "Less Wrong" is slightly better.

I am Pinyaka. I've been lurking a bit around this site for several months. I don't remember how I found it (probably a linked comment from Reddit), but stuck around for the main sequences. I've worked my way through two of them thanks to the epub compilations and am currently struggling to figure out how to prioritize and better put into practice the things that I learn from the site and related readings.

I hope to have some positive social interactions with the people here. I find that I become fairly unhappy without some kind of regular socialization in a largish group, but it's difficult to find groups whose core values are similar to mine. In fact, after leaving a quasi-religious group last year it occurred to me that I've always just fallen in with whatever group was most convenient and not too immediately repellant. This marks the first time I've tried to think about what I value and then seek out a group of like minded individuals.

I also hope to find a consistent stream of ideas for improving myself that are backed by reason and science. I recognize that removing (or at least learning to account for) my own biases will help me build a more accurate picture of the universe that I live in and how I function within that framework. Along with that, I hope to develop the ability to formulate and pursue goals to maximize my enjoyment of life (I've been reading a bunch of lukeprog's anti-akrasia posts recently, so following through on goals is on my mind currently).

I am excited to be here.

3beoShaffer11y
Hi Pinyaka! Semi-seriously, have you considered moving?
0pinyaka11y
I'm sort of averse to moving at the moment, since I'm in the middle of getting my doctorate, but I'll likely have to move once I finish that. Do you have specific suggestions? I have always picked where I live based on employment availability and how much I like the city from preliminary visits.
0beoShaffer11y
In that case it's going to strongly depend on your field, and if you're going into academia specifically you likely won't have much of a choice. That said, NY and the Bay Area are both good places for finding rationality support.
2Nisan11y
Welcome! You might enjoy it if you show up to a meetup as well.
0pinyaka11y
Thank you. I haven't seen one in Iowa yet, but I do keep an eye out for them.
0John_Maxwell11y
Welcome!

I'm Shai Horowitz. I'm currently a dual physics and mathematics major at Rutgers University. I first learned of the concept of "Bayesian" or "rationality" through HPMOR, and from there I took it upon myself to read the Overcoming Bias posts, an extremely long endeavor which I have almost but not yet accomplished. Through conversation with others in my dorm at Rutgers I have realized simply how much this learning has done to my thought process: it allowed me to home in on my own thoughts that I could see were still biased and go about fixing them. Through this same reasoning it became apparent to me that it would be largely beneficial to become an active part in the lesswrong community, to sharpen my own skills as a rationalist while helping others along the way. I embrace rationality for the very specific reason that I wish to be a physicist, and realize that in trying to do so I could (as Eliezer puts it) "shoot off my own foot" while doing things that conventional science allows. In the process of learning this I did stall out for months at a time and even became depressed for a while as I was stabbing my weakest points with the met... (read more)

6Qiaochu_Yuan11y
Welcome! I am really curious what you mean by
0shaih11y
My thoughts on its implications are along the lines of: even if cryonics works, or the human race finds some other way of indefinitely increasing the length of the human life span, the second law of thermodynamics would eventually force this prolonged life to be unsustainable. That, combined with the adjusting of my probability estimates of an afterlife, made me have to face the unthinkable fact that there will be a day on which I cease to exist regardless of what I do, and I am helpless to stop it. While I was getting over the shock of this I would have sleepless nights, which turned into days that I was too tired to be coherent, which turned into missing classes, which turned into missed grades. In summation, I allowed a truth which would not come to pass for an unthinkable amount of time to change how I acted in the present in a way which it did not warrant (being depressed or happy or any action now would not change that future).

Hi! I was wondering where to start on this website. I started reading the sequence "How to actually change your mind", but there's a lot of lingo and stuff I still don't understand. Is there a sequence here that's like, Rationality for Beginners, or something? Thanks.

3Kindly11y
Probably the best thing you can do, for yourself and for others, is to post comments on the posts you've read, asking questions where you don't understand something. The sequences ought to be as easy to understand as possible, but the reality may not always approach the ideal. But if the jargon is the problem, the LW wiki has a dictionary.
2beoShaffer11y
I found the order presented in the wiki's guide to the sequences to be quite helpful.
0Dorikka11y
This may be a decent starting post.
0TimS11y
Welcome. As intro pieces, I really like Making Beliefs Pay Rent and Belief in Belief. The rest of the Mysterious Answers sequence consists of attempts to illuminate or elaborate on the points made in those two essays. I was less impressed with "A Human's Guide to Words," but that might be because my legal training forced me to think about those issues long before I ever wandered here. As a brief heuristic, if the use-mention distinction seems really insightful to you, try it out. If you've already thought about similar issues, you could pass on it. I think the other Sequences are far less interestingly novel, but some of that is my (rudimentary but still above average for here) background in philosophy. And some of it is that I don't care about some of the topics that are central to the discussion in this community. As always with advice like this, take what I say with a substantial grain of salt. Feel free to look at our wiki page on the Sequences to see all of what's out there.

Hi, my name is Briony Keir, I'm from the UK. I stumbled on this site after getting into an argument with someone on the internet and wondering why they ended up failing to refute my arguments and instead resorted to insults. I've had a read-around before posting and it's great to see an environment where rational thought is promoted and valued; I have a form of autism called Asperger Syndrome which, among many things, allows me to rely on rationality and logic more than other people seem to be able to - I too often get told I'm 'too analytical' and I 'shouldn't poke holes in other peoples' beliefs' when, the way I see it, any belief is there to be challenged and, indeed, having one's beliefs challenged can only make them stronger (or serve as an indicator that one should find a more sensible viewpoint). I'm really looking forward to reading what people have to say; my environment (both educational and domestic) has so far served more to enforce a 'we know better than you do so stop talking back' rule rather than one which allows for disagreement and resolution on a logical basis, and so this has led to me feeling both frustrated and unchallenged intellectually for quite some time. I hope I prove worthy of debate over the coming weeks and months :)

1kodos9611y
This is not at all unusual here at LessWrong... I can't seem to find a link, but I seem to recall that a fairly large portion of LessWrong-ers (at least relative to the general population) have Aspergers (or at least are somewhat Asperger-ish), myself included. I'm not entirely sure though that I agree with the statement that Aspergers is "a form of autism"... I realize that that has been the general consensus for a while now, but I've read some articles (again, can't find a link at the moment, sorry) suggesting that Aspergers is not actually related to Autism at all... personally, my feeling on the matter is that "Aspergers" isn't an actual "disease" per se, but rather just a cluster of personality traits that happen to be considered socially unacceptable by modern mainstream culture, and have therefore been arbitrarily designated as a "disease". In any case, welcome to LessWrong - I look forward to your contributions in the future!
2anansi13311y
If anything, I'd be tempted to say that autism is a more pronounced degree of Asperger's. I certainly catch myself in the spectrum that includes ADD as well. The whole idea of neurodiversity is kind of exciting, actually. If there can be more than one way to appropriately interact with society, everyone gets richer.
0kodos9611y
That seems to me to be basically equivalent to saying that Asperger's is a lesser form of autism. Again, sorry I can't find the links at the moment, but I recall reading several articles suggesting that the two might actually not be related at all, neurologically. I agree. Unfortunately, modern culture and institutions (like the public education system, for one notable example) don't seem to be set up based on this premise.

Hello everyone,

I found Less Wrong through "Harry Potter and the Methods of Rationality" like many others. I started reading more of Eliezer Yudkowsky's work a few months ago and was completely floored. I now recommend his writing to other people at the slightest provocation, which is new for me. Like others, I'm a bit scared by how thoroughly I agree with almost everything he says, and I make a conscious effort not to agree with things just because he's said them. I decided to go ahead and join in hopes that it would motivate me to start doing more active thinking of my own.

[-][anonymous]11y110

Hello rationalists-in-training of the internet. My name is Joseph Gnehm, I am 15 and I live in Montreal. Discovering LessWrong had a profound effect on me, shedding light on the way I study thought processes and helping me with a more rational approach.

[This comment is no longer endorsed by its author]

I'm a 20-year-old physics student from Finland whose hobbies include tabletop roleplaying games and Natalie Reed-Zinnia Jones-style intersection of rationality and social justice.

I've been sporadically lurking on LessWrong for the last 2-3 years and have read most of the sequences. My primary goal is to contribute useful research to either SI or FHI, or, failing that, to donate a significant part of my income. I've contacted the X-risks Reduction Career Network as well.

I consider this an achievable goal as my general intelligence is extremely high and I won a national-level mathematics competition 7 years ago despite receiving effectively no training in a small backwards town. With dedication and training I believe I could reach the level of the greats.

However, my biggest challenge currently is Getting Things Done; apart from fun distractions, committing any significant effort to something is nigh impossible. This could probably be caused by clinical depression (without the mood effects) and I'm currently on venlafaxine as an attempt to improve my capability to actually do something useful but so far (about 3 months) it hasn't had the desired effect. Assistance/advice would be appreciated.

Hi everyone! Another longtime lurker here. I found LW through Yvain's blog (Emily and Control FTW!). I'm not really into cryonics or FAI, but the sequences are awesome, and I enjoy the occasional instrumental rationality post. I decided to become slightly more active here, and this thread seemed like a good place to start, even if a bit old.

Hi.

My name is Roberto and I'm a Brazilian physicist working in the UK. Even working in an academic environment obviously does not guarantee an environment where rational/unbiased/critical discussions can happen. Science production in universities is not always carried out by thinking critically about a subject, as many papers can be purely technical in their nature. Also, free thinking is as regulated in academia as it is everywhere else in many aspects.

That said, I have been reading and browsing Less Wrong for some time and think that this can indeed be done here. In addition, given recent developments all around the world in many aspects and how people react to them, I felt the urge to discuss them in a way which is not censored, especially by the other persons in the discussion. It promises to be relaxing anyway.

I'm sure I'm gonna have a nice time.

0Risto_Saarelma12y
Do you get to hear about the Richard Feynman story often when you introduce yourself as a Brazilian physicist?
8robertoalamino12y
It's actually the first time I read it. I would be very happy to say that the situation improved over there, but that might not be true in general. Unfortunately, the way I see it, it's completely the opposite: the situation became worse everywhere else. Apparently, science education all around the world is becoming more distant from what Feynman would like. Someone once told me that "Science is not about knowledge anymore, it's about production". Feynman's description of his experience seems to be all about that. I refuse to believe in that, but as the world embraces this philosophy, science education becomes less and less related to really thinking about any subject.
2Risto_Saarelma12y
At least nowadays, unlike in 1950s Brazil, Feynman's stuff is a Google search away for just about any undergraduate student. Now they just need to somehow figure out they might want to search for him...
0A1987dM12y
I've found that theoretical physicists usually give me the vibe EY describes here, but experimental physicists usually don't.
1robertoalamino12y
That's more a question of taste, and there is nothing wrong with that. I also prefer theoretical physics, although I must admit that it's very exciting to be in a lab, as long as it is not me collecting the data or fixing the equipment. My point in the sentence you quoted is that you can perfectly well carry on with some "tasks" without thinking too deeply about them, even in physics -- be it theoretical, experimental, or computational. That is something I think is really missing in the whole spectrum of education, not only in science and not only in the universities.

Please add a few words about the "Open Thread". Something like: if you want to post just a simple question or a single paragraph of text, don't create a new article; just add it as a comment to the latest discussion article called "Open Thread".

0AllanGering12y
In the same line of thought, it may be worth revising the following.

Hello everyone. My name is Vadim Kosoy, and you can find some LW-relevant stuff about me in my Google+ stream: http://plus.google.com/107405523347298524518/about

I am an all-time geek, with knowledge / interest in math, physics, chemistry, molecular biology, computer science, software engineering, algorithm engineering and history. Some areas in which I'm comparatively more knowledgeable: quantum field theory, differential geometry, algebraic geometry, and algorithm engineering (especially computer vision).

In my day job I'm a technical + product manager of a small software group in Mantis Vision (http://www.mantis-vision.com/) a company developing 3D video cameras. My previous job was in VisionMap (http://www.visionmap.com/) which develops airborne photography / mapping systems, where I led a team of software and algorithm engineers.

I knew about Eliezer Yudkowsky and his friendly AI thesis (which I don't fully accept) for some time, but discovered this community only relatively recently. For me this community is interesting because of several reasons. One reason is that many discussions are related to the topics of transhumanism / technological singularity / artificial intelligence which... (read more)

1lukeprog11y
Welcome! You should probably join the MAGIC list. Orseau and others hang out there, and Orseau will probably comment on your two posts if you ask for feedback on that list. Also, if you ever visit California then you should visit MIRI and do some math with us.
1Kawoomba11y
Welcome! We're all 29.9 years old, here. I look forward to your comments, hopefully you'll find the time for that post on your Orseau-Ring variant. Regarding your redefinition of god, allow me just a small comment: Calling an unknowable reason "god" - without believing in such a reason's personhood, or volition, or having a mind - invites a lot of unneeded baggage and historical connotations that muddle the discussion, and your self-identification, because what you apparently mean by that term is so different from the usual definitions of "god" that you could just as well call yourself a spiritual atheist (or related).

Welcome! We're all 29.9 years old, here.

Speak for yourself, youngster! Why, back in my day, we didn't have these "internets" you whippersnappers are always going on about, what with the cats and the memes and the facetubes and the whatnot. We had to make our own networks, by hand, out of floppies and acoustic modems, and we liked it. Why, there's nothing like an invigorating morning hike with a box of 640K floppies (formatted to 800K) in your backpack, uphill in the snow both ways. Builds character, it does. Mumble mumble mumble get off my lawn!

0Squark11y
Maybe from a consequentialist point-of-view, it's best to use the word "God" when arguing my philosophy with theists and use some other word when arguing my philosophy with atheists :) I'm thinking of "The Source".

However, there is a closely related construct which has a sort-of personhood. I named it "The Asymptote": I think that the universe (in the broadest possible sense of the word) contains a sequence of intelligences of unboundedly increasing power, and "The Asymptote" is a formal limit of this sequence. Loosely speaking, "The Asymptote" is just any intelligence vastly more powerful than our own.

This idea comes from the observation that the known history of the universe can be regarded as a process of forming more and more elaborate forms of existence (cosmological structure formation -> geological structure formation -> biological evolution -> sentient life -> evolution of civilization), and therefore my guess is that there is something about "The Source" which guarantees an indefinite process of this kind. Some sort of a fundamental Law of Evolution which should be complementary, in a way, to the Second Law of Thermodynamics.
0CCC11y
I disagree that they are necessarily more elaborate. I don't think we (as humanity) fully appreciate the complexity of cosmological structures yet (and I don't think we will until we get out there and take a closer look at them; we can only see coarse features from several lightyears away). And civilisation seems less elaborate than sentience, to me.
1Squark11y
Well, civilization is a superstructure of sentience and is more elaborate in this sense (i.e. sentience + civilization is more elaborate than "wild" sentience).
1CCC11y
I take your point. However, I can turn it about and point out that cosmological structures (a category that includes the planet Earth) must by the same token be more elaborate than geological structures.
0Squark11y
Sure. Perhaps I chose careless wording, but when I said "cosmological structure formation -> geological structure formation" my intent was the process whereby a universe initially filled with homogeneous gas develops inhomogeneities which condense to form galaxies, stars and planets, which undergo further processes (galaxy collisions, supernova explosions, collisions within stellar systems, geologic / atmospheric processes within planets) that produce more and more complex structure over time.
0CCC11y
I see. Doesn't that whole chain require the entropy of the universe to decrease? Or am I missing something?
0Squark11y
You mean that this process has the appearance of decreasing entropy? In truth it doesn't. For example, gravitational collapse (the basic mechanism of galaxy and star formation) decreases entropy by reducing the spatial spread of matter but increases entropy by heating matter up. Thus we end up with a total entropy gain. On the cosmic scale, I think the process is exploiting a sort-of temperature difference between gravity and matter, namely that initially the temperature of matter was much higher than the Unruh temperature associated with the cosmological constant. Thus even though the initial state had little structure, it was very far from equilibrium and thus very low entropy compared to the final equilibrium it will reach.
0CCC11y
Huh. I don't think that I know enough physics to argue this point any further.
0Bugmaster11y
I strongly doubt the existence of any truly unbounded entity. Even a self-modifying transhuman AI would eventually run out of atoms to convert into computronium, and out of energy to power itself. Even if our Universe were infinite, the AI would be limited by the speed of light. Wait, so is it bounded or isn't it? I'm not sure what you mean.

There are plenty of planets where biological evolution has not happened, and most likely never will -- take Mercury, for example, or Pluto (yes yes I know it's not technically a planet). As far as we can tell, most if not all exoplanets we have detected so far are lifeless. What leads you to believe that biological evolution is inevitable?
2Squark11y
In an infinite universe, the speed-of-light limit is not a problem. Surely it limits the speed of computing, but any computation can be performed eventually. Of course you might argue that our universe is asymptotically de Sitter. This is true, but it is also probably metastable and can collapse into a universe with other properties. In http://arxiv.org/abs/1105.3796 the authors present the following line of reasoning: there must be a way to perform an infinite sequence of measurements, since otherwise the probabilities of quantum mechanics would be meaningless. In a similar vein, I speculate it must be possible to perform an infinite number of computations (or even all possible computations). The authors then go on to explore cosmological explanations of how that might be feasible.

The sequence is unbounded in the sense that any possible intelligence is eventually superseded. The Asymptote is something akin to infinity. The Asymptote is "like an intelligence but not quite" in the same way infinity is "like a number but not quite".

Good point. Indeed it seems that life formation is a rare event. So I'm not sure whether there really is a "Law of Evolution" or we're just seeing the anthropic principle at work. It would be interesting to understand how to distinguish these scenarios.
2wedrifid11y
Does this hold in a universe that is also expanding (like ours)? Such a scenario makes the 'infinite' property largely moot given that any point within has an 'observable universe' that is not infinite. That would seem to rule out computations of anything more complicated than what can be represented within the Hubble volume.
0Squark11y
Yes, this was exactly my point regarding the universe being asymptotically de Sitter. The problem is that the universe is not merely expanding, it's expanding with acceleration. But there are possible solutions to this like escaping to an asymptotic region with a non-positive cosmological constant via false vacuum collapse.
0Bugmaster11y
wedrifid already replied better than I could; but I'd still like to add that "eventually" is a long time. For example, if the problem that you are computing is NP-complete, then you won't be able to grow your hardware quickly enough to make any practical difference. In addition, if our universe is not eternal (which it most likely is not), then it makes no sense to talk about an "infinite series of computations".

Sorry, but I literally have no idea what this means. I don't think that infinity is "like a number but not quite" at all, so the analogy doesn't work for me.

Well, so far, we have observed one instance of "evolution", and thousands of instances of "no evolution". I'd say the evidence is against the "Law of Evolution" so far...
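To make the "can't grow hardware quickly enough" point above concrete, a back-of-the-envelope sketch (the exponential cost model and the hardware doubling time τ are illustrative assumptions, not from the thread):

```latex
% Exponential-cost problem vs. hardware that doubles every \tau years:
\[
  \mathrm{cost}(n) = c \cdot 2^{n}, \qquad
  H(t) = H_0 \cdot 2^{t/\tau}
  \quad\Longrightarrow\quad
  n_{\max}(t) = n_{\max}(0) + \frac{t}{\tau},
\]
% so the largest solvable instance size grows only linearly in time.
```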
2Squark11y
For algorithms with exponential complexity, you will have to wait for exponential time, yes. But eternity is enough time for everything. I think the universe is eternal. Even an asymptotically de Sitter region is eternal (but useless since it reaches thermodynamic equilibrium); however, the universe contains other asymptotic regions. See http://arxiv.org/abs/1105.3796

A more formal definition is given in my comment http://lesswrong.com/lw/do9/welcome_to_less_wrong_july_2012/8kt7 . Less formally, infinity is "like a number but not quite" because many predicates into which a number can be meaningfully plugged in also work for infinity. For example:

infinity > 5
infinity + 7 = infinity
infinity + infinity = infinity
infinity * 2 = infinity

However, not all such expressions make sense:

infinity - infinity = ?
infinity * 0 = ?

Formally, adding infinity to the field of real numbers doesn't yield a field (or even a ring).

There is clearly at least one Great Filter somewhere between life creation (probably there is one exactly there) and the appearance of civilization with moderately supermodern technology: it follows from Fermi's paradox. However, it feels as though there is a small number of such Great Filters, with nearly inevitable evolution between them. The real question is what is the expected number of instances of passing these Filters within the volume of a cosmological horizon. If this number is greater than 1, then the universe is more pro-evolution than what is anticipated from the anthropic principle alone. Fermi's paradox puts an upper bound on this number, but I think this bound is much greater than 1.
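The "not even a ring" claim can be stated precisely; a standard sketch for the extended reals (textbook conventions, not from the original comment):

```latex
\[
  \overline{\mathbb{R}} = \mathbb{R} \cup \{-\infty, +\infty\}:
  \qquad \infty + 7 = \infty, \quad \infty + \infty = \infty, \quad 2 \cdot \infty = \infty,
\]
\[
  \text{but } \infty - \infty \text{ and } 0 \cdot \infty \text{ remain undefined, so } +\infty
  \text{ has no additive inverse and } (\overline{\mathbb{R}}, +) \text{ is not even a group.}
\]
```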
0shminux11y
Why postulate that such a limit exists?
-2Squark11y
To really explain what I mean by the Asymptote, I need to explain another construct which I call "the Hypermind" (Kawoomba's comment motivated me to invest in the terminology :) ).

What is identity? What makes you today the same person as you yesterday? My conviction is that the essential relationship between the two is that the "you of today" shares the memories of the "you of yesterday" and fully understands them. In a similar manner, if a hypothetical superintelligence Omega would learn all of your memories and understand them (you) on the same level you understand yourself, Omega should be deemed a continuation of you, i.e. it has assimilated your identity into its own.

Thus in the space of "moments of consciousness" in the universe we have a partial order where A < B means "B is a continuation of A", i.e. "B shares A's memories and understands them". The Hypermind hypothesis is that for any A and B in this space there is C s.t. C > A and C > B. This seems to me a likely hypothesis if you take into account that the Omega in the example above doesn't have to exist in your physical vicinity but may exist anywhere in the (multi/)universe and have a simulation of you running on its laptop.

The Asymptote is then a formal limit of the Hypermind. That is, the semantics of "The Asymptote has property P" is "For any A there is B > A s.t. for any C > B, C has property P". It is then an interesting problem to find non-trivial properties of the Asymptote. In particular, I suspect (without strong evidence yet) that the opposite of the Orthogonality Thesis is true, namely that the Asymptote has a well-defined preference / utility function.
3shminux11y
This seems like a rather simplistic view; see counter-examples below.

"Conviction" might not be a great term; maybe what you mean is a careful conclusion based on something.

Except that we forget most of them, and our memories of the same event change in time, and often are completely fictional. Not sure what you mean by understanding here; feel free to define it better. For example, we often "understand" our memories differently at different times in our lives. So, if you forgot what you had for breakfast the other day, you today are no longer a continuation of you from yesterday?

That's a rather non-standard definition. If anything, it's closer to monotonicity than to accumulation. If you mean the limit point, then you ought to define what you mean by a neighborhood.

To sum up, your notion of Asymptote needs a lot more fleshing out before it starts making sense.
-2Squark11y
Good point. The description I gave so far is just a first approximation. In truth, memory is far from ideal. However, if we assign weight to memories by their potential impact on our thinking and decision making, then I think we would get that most of the memories are preserved, at least on short time scales. So, from my point of view, the "you of today" is only a partial continuation of the "you of yesterday". However, it doesn't essentially change the construction of the Hypermind. It is possible to refine the hypothesis by stating that for every two "pieces of knowledge" a and b, there exists a "moment of consciousness" C s.t. C contains a and b.

Actually, I overcomplicated the definition. The definition should read "There exists A s.t. for any B > A, B has property P". The neighbourhoods are sets of the form {B | B > A}. This form of the definition implies the previous form, using the assumption that for any A, B there is C > A, B.
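Written out in symbols (a sketch; M is taken here to be the set of "moments of consciousness" with the continuation order from the grandparent comment):

```latex
% Hypermind hypothesis: the continuation order on M is directed.
\[
  \forall A, B \in M \;\; \exists C \in M : \; C > A \,\wedge\, C > B
\]
% Semantics of "the Asymptote has property P", with
% neighbourhoods U_A = \{ B \in M \mid B > A \}:
\[
  \exists A \in M \;\; \forall B > A : \; P(B)
\]
```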
0shminux11y
Hmm, it seems like your definition of Asymptote is nearly that of a limit ordinal.

Hello,

I'm Ben. I'm here mainly because I'm interested in effective altruism. I think that tracing through the consequences of one's actions is a complex task and I'm interested in setting out some ideas here in the hope that people can improve my reasoning. For example, I've a post on whether ethical investment is effective, which I'd like to put up once I've got a couple of points of karma.

I studied philosophy and theology, and worked for a while in finance. Now, I'm trying to work out how to increase the positive impact I have, which obviously demands answers about both what 'positive impact' means, and what the consequences are of the choices I make. I think these are far from simple to work out; I hope just to establish a few points with which I'm satisfied enough. I think that exposing ideas and arguments to thoughtful people who might want to criticise or expand them could help me a lot. And this seems a good place for doing that!

Hi, I'm Alex.

Every once in a while I come to LessWrong because I want to read more interesting things and have more interesting discussions on the Internet. I've found it a lot easier to spend time on Reddit (having removed all the drivel) and dredging through Quora to find actually insightful content (seriously, do they have any sort of actual organization system for me to find reading material?) in the past. LessWrong's discussions have seemed slightly inaccessible, so maybe posting an introduction like I'm supposed to will set in motion my figuring out how this community works.

I'm interested in a lot of things here, but especially physics and mathematics. I would use the word "metaphysics", but it's been appropriated for a lot of things that aren't actually meta-physics in the sense I mean. Maybe I want "meta-mathematics"? Anyway, I'm really keen on the theory behind physical laws and on attempts at reformulating math and physics into more lucid and intuitive systems. Some of my reading material (I won't say research, but ... maybe I should say research) recently has been on geometric algebra, re-axiomatizing set theory, foundations and interpretations of quantum mechan... (read more)

5[anonymous]11y
Be very careful thinking you are done. I was in pretty much exactly the same position as you about a year ago. ("Yep, I'm pretty rational. Lol @ god; I wonder what it's like to have delusional beliefs.") After a year and a half here, having read pretty much everything in the Sequences and most of the other archives, running a meetup, etc., I now know that I suck at rationality. You will find that you are nowhere near the limits, or even the middle, of possible human rationality. Further, I now know what it's like to have delusional beliefs that are so ingrained you don't even recognize them as beliefs, because I had some big ones. I probably have more. They're not easy to spot from the inside. On the subject of atheism... I used to be an atheist, too. The rabbit hole you've fallen into here is deep. The Seattle guys are pretty cool, from those I've met. Go hang out with them.
7Kawoomba11y
Don't be mysterious, Morpheus, please elaborate.
2shev11y
Okay, sure. Rather I mean: I feel like I'm past the introductory material. Like I'm coming in as a sophomore, say. But - I could be totally wrong! We'll see. I've definitely got counter-rational behaviors ingrained; I'm constantly fighting my brain. And, if we're being pedantic about things, my position is pretty similar to atheism, but I might not be an atheist. I'm not up to speed on all the terms. What do you call the position I described? I was calling that atheism.
0[anonymous]11y
In that sense, then, I'm an atheist. My test was whether my gods-related beliefs would get me flamed on r/atheism. I don't think my beliefs would pass the ideological Turing test for atheism. I used to think the god hypothesis was not just wrong, but incoherent. How could there be a being above and outside physics? How could god break the laws of physics? Of course, now I take the simulation argument much more seriously, and even superintelligences within the universe can probably do pretty neat things. I still think non-reductionism is incoherent; "a level above ours" makes sense, "supernatural" does not. This isn't really a major update, though. I'm just not going to refer to myself as an atheist any more, because my beliefs permit a lot more.
0shminux11y
Seems like agnosticism to me, or atheism in a broader sense. Narrow atheism is the belief in zero gods.
4shminux11y
From your blog: This is amazing, yet seems so obvious in retrospect. So many of us have turned into blue-minimizing robots without realizing it. Hopefully breaking the reward feedback loop with your extension would force people to try to examine their true reasons for clicking.
1shev11y
I was pretty pleased with myself for discovering that. It - sorta works. I still find myself going to Reddit, but so far it still "feels" less addictive (which is really hard to quantify or describe). Now I find myself just clicking over to websites looking for something, rather than specifically clicking links. I've been sleeping badly lately, though, and I find that my brain is a lot more vulnerable to my Internet addiction when I haven't slept well - so it's not a good comparison to my norm. Incidentally, if anyone wanted me to, I could certainly make the extension work on other browsers. It's the simplest thing ever; it just injects 7 clauses of CSS into Reddit pages. I thought about making it mess with other websites I use (Hacker News, mostly), but I decided they weren't as much of a problem and it was better to keep it single-purpose for now.
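For anyone curious what "injects a few clauses of CSS" amounts to, here is a minimal sketch of that kind of browser-extension content script. This is illustrative code, not shev's actual extension; the Reddit class selectors are assumptions about the old site markup.

```typescript
// content-script.ts -- declared in the extension manifest to run on reddit.com.
// Injects a <style> element that hides voting arrows and score counters,
// breaking the karma feedback loop. Selectors below are assumed, not verified.

const css = `
  .arrow.up, .arrow.down,
  .arrow.upmod, .arrow.downmod { display: none !important; }
  .score, .score.likes, .score.dislikes,
  .karma { visibility: hidden !important; }
`;

const style = document.createElement("style");
style.textContent = css;
document.head.appendChild(style);
```

The whole trick is that the page otherwise works normally; only the reward signal is hidden.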
2itaibn011y
Now I'm tempted to spread a meme. Have you heard of Martin-Löf type theory? In my opinion, it's a much better foundation for mathematics than ZFC.
0Nisan11y
Welcome. There are some e-reader format and pdf versions of the Sequences that may be easier to navigate.

Hello, newbie here. I'm intrigued by the premise of this forum.

About me: I think a lot - mostly by myself. That's trained me in some really lazy habits that I am looking to change now.

In the last few weeks, I noticed what I think are some elemental breakdowns in human politics. When things go bad between people, I think it can be attributed to one of three causes: immaturity, addiction, or insanity. I would love to discuss this further, hoping someone's interested.

I wasn't going to mention theism, but it's here in the main post, and suddenly I'm interested: I trend toward the atheistic - I'm really unimpressed with my grandmother's deity, and "supernatural" doesn't seem a useful or interesting category of phenomena. But I like being agnostic more than atheist, just on a few tiny little wiggle-words that seem powerfully interesting to me, and I notice that other people seem to find survival value in it. So that's probably something I will want to talk about.

Many of my more intellectual friends and neighbors can seem like bullies a lot of the time. So I like the word "rationality" in the title of this place, much more than I like "science" or "logic"... (read more)

1simplicio11y
Yes, I know the feeling. Welcome out of the echo chamber! Do you mean that it's literally the words you find interesting? Which ones?
0anansi13311y
That's not actually what I meant, but the challenge seems interesting. Lemme see...

* Reciprocity? (I'm looking for a word to describe what happens when Islam holds Jesus up as a prophet worth listening to, but Christians afford no such courtesy to Muhammad.)
* Faith (Firefly's Book asks Mal, "When I ask you to have faith, why do you think I'm talking about God?")
* Ethics vs. Morals (few people I know seem to recognize a difference, let alone agree on it)
* Moral Class (If we were to encounter a powerful extraterrestrial, how would we know they weren't God? How would they understand the question if we asked them?)

I guess the words weren't so small after all...

Hello, I'm Ben Kidwell. I'm a middle-aged classical pianist and lifelong student of science, philosophy, and rational thought. I've been reading posts here for years and I'm excited to join the discussion. I'm somewhat skeptical of some things that are part of the conventional wisdom around here, but even when I think the proposed answers are wrong - the questions are right. The topics that are discussed here are the topics that I find interesting and significant.

I am only formally and professionally trained in music, but I have tried to self-study physics, math, computer science, and philosophy in a focused way. I confess that I do have one serious weakness as a rationalist, which is that I can read and understand a lot of math symbology, but I can't actually DO math past the level of simple calculus with a few exceptions. (Some computer programming work with algorithms has helped with a few things.) It's frustrating because higher math is The Key that unlocks a lot of deep understanding of the universe.

I have a particular interest in entropy, information theory, cosmology, and their relation to the human experience of temporality. I think the discovery that information-theoretic... (read more)

[-][anonymous]11y100

I'm Rev. PhD in mathematics, disabled shut-in crank. I spend a lot of time arguing with LW people on Twitter.

1drethelin11y
Noooooo don't get sucked in
0[anonymous]11y
I think it is unlikely.

I am a 43-year-old man who loves to read, and stumbling across HPMOR was an eye-opener for me; it resonated profoundly within. My wife is not only the Queen of Critical Thinking and logic, she is also the breadwinner. Me? I raise the children (three girls), take care of the house, and function as housewife/gourmet chef/personal trainer/massage therapist for my wife, on top of being my daughters' personal servant. This is largely due to my wife's towering intellect, overwhelming competence, my struggles with ADHD, and the fact that she makes huge amounts of money. Me, I just age almost supernaturally slowly (at 43, I still pass for thirty, possibly due to an obsession with fitness), am above-average handsome, passingly charming, have a good singing voice, and am incapable of winning a logical argument, as the more stressed I grow, the faster my IQ shrinks. I am taken about as seriously by my wife as Harry probably was by his father as a four-year-old. I am looking to change that. I am hoping that if I learn enough about Less Wrong, I just might learn how to put all the books I compulsively read to good use, and maybe learn how to... change.

2MileyCyrus11y
I'm actually incredibly interested in your story, if you don't mind. What is it like dating a woman who is smarter than you are? What do you think attracted her to you? (I would love to pair-bond with a genius woman, but most of them only want to pair-bond with other geniuses.)
0Alicorn11y
"House spouse" works as a gender neutral term, and it rhymes!
1MugaSofer11y
This is not a good thing.

I am Alexander Baruta, a high-school student currently in the 11th grade (taking grade 12 math and biology). I originally found the site through Eliezer's blog. I am (technically) part of the school's robotics team (someone has to stop them from creating unworkable plans), undergoing Microsoft IT certification, and going through all of the psychology courses in as little time as possible (I'm currently enrolled in a self-directed learning school) so I can get to the stuff I don't already know. My mind is fact-oriented (I can remember the weirdest things with perfect clarity after only hearing them once), but I have trouble combining that recall with my English classes, and I have trouble remembering names. I am informally studying formal logic, programming, game theory, and probability theory (don't you hate it when the curriculum changes?). (I also have an unusual fondness for brackets, if you couldn't tell by now.)

I also feel that any discussion about me that fails to mention my love of SF/fantasy should be shot dead. I caught onto reading at a very, very early age, and by the time I was in 5th grade I was reading at a 12th-grade comprehension level, tackling Asimov, Niven, Pohl, Piers Anthony, Stephen R. Donaldson, Roger Zelazny, and most good authors.

8Kawoomba11y
Lisp ith a theriouth condition, once you go full Lisp, you'll never (((((((((((((... come back)?n).
0Baruta0711y
I was laughing so hard when I saw this.
0beoShaffer11y
How do you feel about Heinlein?
2Baruta0711y
He's a decent author, but I am having trouble finding anything of significance by him in Calgary.
0beoShaffer11y
Too bad.

Hi,

I was introduced to LW by a friend of mine but I will admit I dismissed it fairly quickly as internet philosophy. I came out to a meetup on a recent trip to visit him and I really enjoyed the caliber of people I met there. It has given me reason to come back and be impressed by this community.

I studied Math and a little bit of Philosophy in undergrad. I'm here mostly to learn, and hopefully to meet some interesting people. I enjoy a good discussion and I especially enjoy having someone change my mind but I lose interest quickly when I realize that the other party has too much ego involved to even consider changing his or her mind.

I look forward to learning from you all!

Matt

Hello LW community. I'm an HS math teacher most interested in geometry and number theory. I have long been attracted to mathematics and philosophy because they both embody the search for truth that has driven me all my life. I believe reason and logic are profoundly important, both as useful tools in this search and for their apparently unique development within our species.

Humans aren't particularly fast, or strong, or resistant to damage as compared with many other creatures on the planet, but we seem to be the only ones with a reasonably well develo... (read more)

Aaron's blog brought me here. Sad that he's no longer with us.

I have been thinking for a long time about overcoming biases, and about putting that into action in life. I work as an orthopaedic surgeon, and in the daytime all I see around me is an infinite amount of bias. I can't take it on unless I can understand the biases and apply that understanding to my own life!

[-][anonymous]11y90

Hey everyone, I'm Sean Nolan. I found Less Wrong via tvtropes.org, but I made sure to lurk sufficiently long before joining. I've been finding a lot of interesting stuff on LessWrong (most of which was posted by Eliezer), some of which I've applied to real life (such as how procrastination vs. doing something is the equivalent of defect vs. cooperate in a prisoner's dilemma against your future self). I'm 99.5% certain I'm a rationalist, the other 0.5% being doubt cast upon me by noticing I've somehow attained negative karma.

Hello, I'm a physics student from Croatia, though I attended a combined physics and computer science program (study programs here are very specific) for a couple of years at a previous university that I left, and my high-school specialization is in economics. I am currently working towards my bachelor's degree in physics.

I have no idea how I learned of this site, though it was probably through some transhumanist channels (there are a lot of half-forgotten bits and pieces of information floating in my mind, so I can't be sure). Lately I've started reading th... (read more)

Hi! I am Robert Pearson: Political professional of the éminence grise variety. Catholic rationalist of the Aquinas variety. Avid chess player, pistol shooter. Admirer of the writings of Ayn Rand and Robert Heinlein. Liberal Arts BA from a small state university campus. I read Overcoming Bias occasionally some years ago, but heard of LessWrong from Leah Libresco.

My real avocation is learning how to be a smarter, better, more efficient, happier human being. Browsing the site for a while convinced me it was a good means to those ends.

I write a column on Thursdays for Grandmaster Nigel Davies' The Chess Improver.

Hey there! I'm a 19-year old Canadian girl with a love for science, science fiction, cartoons, RPGs, Wayne Rowley, learning, reading, music, humour, and a few thousand other things.

Like many, I found this site via HPMOR. As a long-time fan of both science and Harry Potter, I was ultimately addicted from chapter one. It's hard to apply scientific analysis to a fictional universe while still keeping a sense of humour, and HPMOR executes this brilliantly. My only complaint (all apologies to Mr. Yudkowsky, though I doubt he'll ever read this) is that Harry co... (read more)

Hi, I'm Jess. I've just graduated from Oxford with a master's degree in Mathematics and Philosophy. I'm trying to decide what to do next with my life, and graduate study in cognitive science is currently at the top of my list. What I'm really interested in is the application of research on human rationality, decision making, and its limitations to wider issues in society, public policy, etc.

I'm taking some time to challenge my intuition that I want to go into research, though, as I'm slightly concerned that I'm taking the most obvious option not knowing what else to... (read more)

1beoShaffer11y
I don't have a full summary on hand, but if you just want to jumpstart your own search, you might want to read Lukeprog's article on efficient scholarship and look into the keyword "debiasing".

Hi everyone,

I'm currently caught up on HPMOR, and I've read many of the sequences, so I figured it was time to introduce myself here.

I'm a 24 year old Cognitive Psychology graduate student. I was raised as a fairly conservative Christian who attempted to avoid any arguments that would seriously challenge my belief structure. When I was in undergrad, I took an intro to philosophy course which helped me realize that I needed to fully examine all of my beliefs. This helped me to move toward becoming a theistic evolutionist and finally an atheist. Now I strive... (read more)

[-][anonymous]12y90

Hi everyone,

I'm Leisha. I originally came across this site quite a while ago when I read the Explain/Worship/Ignore analogy here. I was looking for insight into my own cognitive processes; to skip the unimportant details, I ended up reading a whole lot about the concept of infinity once I realized that contemplating the idea gave me the same feeling of Worship that religion used to. It still does, to some extent, but at least I'm better-informed and can Explain the sheer scale of what I'm thinking of a little better.

I didn't return here until yesterday, wh... (read more)

1[anonymous]12y
Hmm... Explain/Worship/Ignore is one of the first articles I remember reading too. I wish you the warmest welcome. Make sure to at least read the Core Sequences (Map and Territory, Mysterious Answers to Mysterious Questions, Reductionism), as there is a tendency in discussions on this site to be harsh toward debaters who have not familiarized themselves with the basics.
0[anonymous]12y
It's a good article! Thank you for the kind welcome and for the advice. I don't intend to jump into discussion without having done the relevant reading (and acquired at least a small understanding of community norms) so hopefully I'll avoid too many mistakes. I'm working through Mysterious Answers to Mysterious Questions now, and what strikes me is how much of it I knew, in a sense, already, but never could have put forward in such a coherent and cohesive way. So far, what I've read confirms my worldview. Being wary of confirmation bias and other such fun things, I'll be curious to see how I react when I read an article here that challenges it, as I'm near-certain will happen in due course. (And even typing that makes me wonder what exactly I mean by I there in each case, but that's off-topic for this thread)

Hello,

I am Jay Swartz, no relation to Aaron. I have arrived here via the Singularity Institute and interactions with Louie Helm and Malo Bourgon. Look me up on Quora to read some of my posts and get some insight into my approach to the world. I live near Boulder, Colorado, and have recently started a MeetUp, The Singularity Salon; look me up if you're ever in the area.

I have an extensive background in high tech, roughly split between Software Development/IT and Marketing. In both disciplines I have spent innumerable hours researching human behavior and tho... (read more)

1gwern11y
I don't see anything. I assume you mean you put it in the LW edit box and then saved it as a draft? Drafts are private.

Hi I’m Bojidar (also known as Bobby). I was introduced to LW by Luke Muehlhauser’s blog “Common Sense Atheism” and I've been reading LW ever since he first started writing about it. I am a 25 year old laboratory technician (and soon to be PhD student) at a major cancer research hospital in Buffalo, NY. I've been reading LW for a while and recently I've been really wishing that Buffalo had a LW group (I've been considering starting one, but I’m a bit concerned that I don’t have much experience in running groups nor have I been very active in the online comm... (read more)

Hi, I'm Rixie, and I read this fan fic called Harry Potter and the Methods of Rationality, by lesswrong, so I decided to check out Lesswrong.com. It is totally different from what I thought it would be, but it's interesting and I like it. And right now I'm reading the post below mine, and wow, my comment sounds all shallow now . . .

1Strange711y
What did you think it would be like?
-2Rixie11y
I thought it would be more like hpmor.com, but for the author. Little did I know . . .
1daenerys11y
Hi Rixie! Don't worry! Lots of people came to LessWrong after reading HPMoR (myself included). I know it can be intimidating here at first, but well worth the effort, I think. You might also be interested in Three Worlds Collide. It's another fiction by the same guy who wrote HPMoR, and a bunch of the Sequence posts here. If you have any questions about anything, feel free to PM me!
-1Rixie11y
And, question: What does 0 children mean? It's on the comments which were down-voted a lot and not shown.
0Slackson11y
It means it has 0 replies. The way the comments work is that the one above is the "parent" and the ones below are "children". Sometimes you see people using terminology such as "grandparent" and "great-grandparent" to refer to posts further up the chain.
0Nornagest11y
Means no one replied to the comment. Normally this is implicit in the number of comments nested under it, but since those aren't shown when comments are downvoted below the threshold, the site provides the number of child comments as a convenience.
0Nisan11y
If the downvoted comment had, e.g. 5 total replies to it, it would say "5 children".

I'm Rachel Haywire and I love to hate culture. I've been in "the community" for almost 2 years but just registered an account today. I need to read more of the required texts here before saying much but wanted to pop my head out from lurking. I've been having some great conversations on Twitter with a lot of the regulars here.

I organize the annual transhumanist/alt-culture event Extreme Futurist Festival (http://extremefuturistfest.info) and should have my new website up soon. I like to write, argue, and write about arguing. I've also done silly ... (read more)

Hi, my name is Wes(ley), and I'm a lurkaholic.

First, I'd like to thank this community. I think it is responsible in a large way for my transformation (perceived transformation of course) from a cynical high schooler who truly was only motivated enough to use his natural (not worked hard for) above average reasoning skills to troll his peers, to a college kid currently making large positive lifestyle changes, and dreaming of making significant positive changes in the world.

I think I have observed significant changes in my thinking patterns since reading th... (read more)

[-][anonymous]11y80

I'm new on Less Wrong and I want to solve P vs. NP.

[This comment is no longer endorsed by its author]
8shminux11y
Consider partitioning this goal into smaller steps. For example, getting a PhD in math or theoretical comp sci is a must before you can hope to tackle something like that. Well, actually, before you can even evaluate whether you really want to. While you seem to be on your way there, you clearly under-appreciate how deep this problem is. Maybe consider asking for a chat with someone like Scott Aaronson.
2[anonymous]11y
Yes, I do.
8shminux11y
Do the math yourself to calculate your odds. Only one of the 7 Millennium Prize Problems has been solved so far, and that by a person widely considered a math genius since his high-school days at one of the best math-oriented schools in Russia, and possibly the world, at the time. And he was lucky that most of the scaffolding for the Poincaré conjecture happened to be in place already. So your odds are pretty bad, and if you don't set a smaller sub-goal, you will likely end up burned out and disappointed. Or worse, you'll come up with a broken proof and bitterly defend it against others "who don't understand the math as well as you do" till your dying days. It's been known to happen. Sorry to rain on your parade.
4TimS11y
My sense is that you are underestimating the number of extremely smart mathematicians who have been attacking P vs. NP. And further, you are not yet in a position to accurately estimate your chances. For example, PhDs in math OR comp. sci. != PhDs in math AND comp. sci. The latter is more impressive because it is much, much harder. If you find theoretical math interesting, by all means pursue it as far as you can - but I wouldn't advise a person to attend law school unless they wanted to be a lawyer. And I wouldn't advise you to enroll in a graduate mathematics program if you wouldn't be happy in that career unless you worked on P vs. NP.
1[anonymous]11y
I was definitely engaging in motivated cognition.
0TimS11y
If your father has a PhD in comp. sci., he's more likely to know than a lawyer like myself. That said, the Wikipedia article has 38 footnotes (~3/4 appear to be research papers) and 7 further readings. I estimate that at least 10x as many papers could have been cited. Conservatively, that's 300 papers. With multiple authors, that's at least 500 mathematicians who have written something relevant to P vs. NP. Adjust downward because relevant != proof; adjust upward because the estimate was deliberately conservative - but how much to move in each direction is not clear.
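Spelled out, the estimate above runs roughly as follows (the authors-per-paper figure is my assumption, chosen to reproduce TimS's stated 500):

```latex
38 \times \tfrac{3}{4} \approx 29 \text{ research papers in the footnotes}
29 + 7 \text{ further readings} = 36 \text{ works cited}
36 \times 10 = 360 \;\Rightarrow\; \text{conservatively, } 300 \text{ papers}
300 \times 1.7 \text{ (assumed authors per paper)} \approx 500 \text{ mathematicians}
```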
0[anonymous]11y
The Millennium Prize would be a nice way to simultaneously fund my cryopreservation and increase my prestige. Will I get it? No.

The Millennium Prize would be a nice way to simultaneously fund my cryopreservation and increase my prestige. I clearly need a backup plan, though, and I don't have one. Will someone with a BS in mathematics and computer science be able to find a good job? Where should I look?

Sorry to put it bluntly, but this sounds incredibly naive. One cannot plan on winning the Millennium Prize any more than one can plan on winning a lottery. So it's not an instrumentally useful approach to funding your cryo. The latter only requires a modest monthly income, something that you will in all likelihood have regardless of your job description.

As for the jobs for CS graduates, there are tons and tons of those in the industry. For example, the computer security job market is very hot and requires the best and the brightest (on both sides of the fence).

2TimS11y
In addition to what shminux said (and which I fully endorse), I think you sell your father short. He doesn't just teach, he does research. Even if he's stopped doing that because he has tenure, he still helps peer-review papers. Even if he's at a community college and does no research or peer review, he still probably knows what was cutting-edge 10 to 15 years ago (which is much more than you or I). Regarding actual career advice, I think there are three relevant skills:

* Math skill
* Writing skill
* Social skill

Having all three at good levels is much better than having only one at an excellent level. Developing them requires lots of practice - but that's true of all skills. At college, I recommend taking as much statistics as you can tolerate. Also, take enough history that you can identify something specific that was taught to you as fact in high school but was false or insufficiently nuanced - though not something that you currently think is false. In terms of picking majors, it's probably too early to tell - if you pick a school with a strong science program, you'll figure out the rest later. Pick courses by balancing your interest against your perception of how useful the course will be (keeping in mind that most courses are useless in real life). Topic is much less important than the quality of the professor. In fact, forming good relationships with specific professors is more valuable than just about any "facts" you get from particular classes - you'll have to figure out who is a good potential mentor, but a good mentor can answer the very important questions you are asking much more effectively than a bunch of random strangers on the Internet. Good luck.
4Mitchell_Porter11y
Mulmuley's geometric complexity theory is still where I would start. It's based on continuum mathematics, but extending it to boolean objects is the ultimate goal. A statement of P!=NP in GCT language can be seen as Conjecture 7.10 here. (Also downloadable from Mulmuley's homepage, see "Geometric complexity theory I".)
1EvelynM11y
Welcome! A fresh perspective on hard problems is always valuable. Getting the skills to be able to solve hard problems is even more valuable.
0beoShaffer11y
Hi, Jimmy. Welcome to Less Wrong. Unfortunately, I don't have much advice on P vs. NP. On Doing the Impossible is kinda related, but not too close.
0[anonymous]11y
Do you mean this guy? That's not me. I'm the anonymous one.

Hi everyone!

I'm 19 years old and a rising sophomore at an American university. I first came across Less Wrong five months ago, when one of my friends posted the "Twelve Virtues of Rationality" on Facebook. I thought little of it, but soon afterward, when reading Leah Libresco's blog on atheism (she's since converted to Catholicism), I saw a reference to Less Wrong and figured I would check it out. I've been reading the Sequences sporadically for a few months, and just got up to date on HPMOR, so I thought I would join the community and perhaps b... (read more)

1Zaine12y
Are you referring to Humean rationalists? Before Hume used empiricism to show how by mere empiricism one can never certainly identify the cause of an effect, empirical thought was lauded by Cartesian rationalists. Hume's objection to an overreliance on empiricism also (partially) helped galvanize the Romantic movement, bringing an end to the Enlightenment. Future individuals throughout history who considered themselves rationalists were of the Cartesian tradition, not 'all is uncertain' Humean rationalism (see Albert from Goethe's The Sufferings of Young Werther for one example). Those who embraced Hume's insight, though it should be mentioned that Hume himself thought that fully embracing same would be quite foolish, did not call themselves rationalists, but were divers members of myriad movements across history. Hume's point remained an open problem until it was later considered solved by Einstein's theory of special relativity. Welcome, by the way.
6A1987dM12y
What?
0Zaine12y
I may be misremembering, but if I recall correctly, with Einstein's theory of special relativity it was at the time considered finally possible to accurately and precisely predict the movements of bodies in our universe. While Newton proved what laws the universe is bound by, he never figured out how these rules operated beyond what was plainly observable. When Einstein's theory of special relativity became accepted, that ball X caused the effect of ball Y's movement became mathematically provable at such a level of precision that Hume's insight - that what causes the effect of ball Y's movement is not empirically discernible - was no longer sound. I admit the above is a bit vague, and perhaps dangerously so. If it doesn't clear up your question, let me know, and I'll check over my notes when I get the chance.
1Vaniver12y
This is incorrect. MHD is correct about the right response to "all is uncertain," which is "right, but there are shades of uncertainty from 0 to 1, and we can measure them."
0Zaine12y
Thank you, both of you. I changed the text to reflect only STR's historical significance in regard to Hume's insight.
0iDante12y
Newton's theory of gravitation is a very close approximation to Einstein's general relativity, but it is measurably different in some cases (precession of Mercury, gravitational lensing, and more). Einstein showed that gravity can be neatly explained by the curvature of spacetime, that mass distorts the "fabric" of space (I use quotes because that's not the mathematical term for it, but it conjures a nice image that isn't too far off of reality). Objects move in straight lines along curved spacetime, but to us it looks like they go in loops around stars and such. Special relativity has to do with the relation of space and time for objects sufficiently far away from each other that gravity doesn't affect them. Causality is enforced by this theory since nothing can go faster than light, and so all spacetime intervals we run into are time-like (That's just a fancy way of saying we only see wot's in our light cone).
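The light-cone condition iDante invokes can be stated compactly; this is the standard textbook formula, added here for reference:

```latex
% Spacetime interval between two events (flat spacetime, signature -+++):
\Delta s^2 = -c^2 \Delta t^2 + \Delta x^2 + \Delta y^2 + \Delta z^2

% \Delta s^2 < 0 : time-like separation (causal influence possible)
% \Delta s^2 = 0 : light-like separation (on the light cone)
% \Delta s^2 > 0 : space-like separation (no causal influence can connect them)
```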
0A1987dM12y
(I think it was general relativity, not special relativity.) I can see where whoever said that is coming from, but I'm not sure I 100% agree. (I will elaborate on this when I have more time.)
0Zaine12y
Special relativity was formalised around ten years earlier than general relativity (around 1905), which better fits my mental timeline of the fin de siècle. Do you mean whoever asserted that Einstein's theory had resolved Hume's insight, or whoever said that, at the time, the educated generally considered Einstein's theory to have resolved Hume's insight? If the former, I think it was more a widespread idea that the majority of the educated shared than one person's assertion. Regardless of whom you were referring to, I look forward to your elaboration!
2A1987dM12y
I can't see what special relativity would have to do with Hume. It just extended the principle of relativity, which was already introduced by Galileo, to the propagation of light at a finite speed, though with all kinds of counter-intuitive results such as the relativity of simultaneity. By itself, it still doesn't predict (say) gravitation. (It does predict conservation of energy, momentum and angular momentum if you assume space-time is homogeneous and isotropic and use Noether's theorem, but so does Galilean relativity, for that matter.) On the other hand, general relativity, from a small number of very simple assumptions, predicts quite a lot of things (pretty much any non-quantum phenomenon which had been observed back then, except electromagnetism). Indeed, Einstein said he was completely certain his theory would prove to be true before it was even tested. EDIT: you actually need more data than I remembered to get to GR: see http://lesswrong.com/lw/jo/einsteins_arrogance/757x (Wow, now that I'm trying to explain it, I realize that the differences between SR and GR in these respects are nowhere near as important as I was thinking.) Anyway, there's still no logical reason why those very simple assumptions have to be true; you still need experience to tell you they are. The comments at http://lesswrong.com/lw/jo/einsteins_arrogance/ go into more detail about this. Can you give me some pointers? I can't recall ever hearing about that before.
0Zaine12y
Thank you for the review! It makes a lot in the two wikipedia articles on special and general relativity easier to digest. I intend on thoroughly going over my notes this weekend so I can separate historical fact from interpretation, which are currently grouped together in my memory. I'll be able to do your response justice then.
0shminux12y
I'm not an expert in philosophy, but if we are talking physics, relativity, special or general, did not do anything of the sort you claim: "with Einstein's theory of special relativity it was at the time considered finally possible to accurately and precisely predict the movements of bodies in our universe." If anything, Newtonian mechanics had a better claim to determinism, at least until the 19th century, when it became clear that electromagnetism comes with a host of paradoxes, not cleared up until both SR and QM were developed. Of course, this immediately caused more trouble than it solved, and I recall no serious physicist who claimed that it was "finally possible to accurately and precisely predict the movements of bodies", given that QM is inherently non-deterministic, SR showed that Newtonian gravity is incomplete, and GR was not shown to be well-posed until much later.
0Zaine12y
Thank you for your input. I also do not know of any serious physicist who asserted that causality had been finally and definitively solved by SR; from what I was taught, it was, as I said, more a widespread idea that the majority of the educated shared than one person's assertion. Indeed, Hume's insight is more of a philosophical problem than a mathematical one. Hume showed that empiricism alone could never determine causality. Einstein's STR showed that causality can be determined empirically when aided by maths, a tool of the empiricist. It can be argued that STR does not definitively prove causality itself (perhaps very rightly so - again, I am not aware); however, the salient point is that STR gave rise to the conception that Hume's insight had finally been resolved. To be clear, in order to resolve Hume's insight one only needed to demonstrate that through empiricism it is possible to establish causality.
5[anonymous]12y
The notion of cause and effect was captured mathematically, statistically, and succinctly by Judea Pearl; empiricism is defined by Bayes' Theorem.
2sakranut12y
I was referring to the dispute in the 17th and 18th centuries, with Hume, Berkeley, and Locke on the empiricist side, and Descartes, Leibniz, and Spinoza on the rationalist side, as described in this paper. Out of curiosity, what is the connection between atoms and causality?
0Zaine12y
Enlightening! Thank you for the paper. Sorry, it was Einstein's theory of special relativity that resolved Hume's insight, not atomic theory. Basically, Hume argued that if you see a ball X hit a ball Y, and subsequently ball Y begins rolling at the same speed of ball X, all one has really experienced is the perception of ball X moving next to ball Y and the subsequent spontaneous acceleration of ball Y. Infinity out of infinity times you may experience the exact same perception whenever ball X bumps into ball Y, but in Hume's time there was no empirical way to prove that the collision of ball X into ball Y caused the effect of the latter's acceleration. With this, you can. I'm afraid I can't answer in any more depth than that, as I myself don't understand the mathematics behind it. Anyone else?

Hi, LessWrong,

I used to entertain myself by reading psychology and philosophy articles on Wikipedia and following the subsequent links. When I was really interested in a topic, though, I used Google to find websites that would provide me more information on said topics. Around late 2010, I found that some of my search results led to this very website. Less Wrong proved to be a little too dense for me to enjoy; I needed to fully utilize my cognitive capabilities to even begin to comprehend some of the articles posted here.

Since I was looking for enterta... (read more)

So I recently found LessWrong after seeing a link to the Harry Potter fanfiction, and I have been enthralled with the concept of rationalism ever since. The concepts are not foreign to me, as I am a chemist by training, but the systematization and the focus on psychology keep me interested. I am working my way through the Sequences now.

As for my biography, I am a 29-year-old laboratory manager trained as a chemist. My lab develops and tests antimicrobial materials and drugs based on selenium's oxygen-radical-producing catalysis. It is rewarding work if you can g... (read more)

Greetings!

I'll start with how I made my way here. Unsurprisingly, it was HPMOR. Perhaps even less surprisingly, said fanfic was recommended on Tumblr. After reading that excellent story and a couple of follow up fanfics, I decided that rational fics are the thing for me, and also that, as someone who desperately wants to write a good story, the underlying rationality is something that I needed to get a handle on. (Also, for a large portion of my life I've been obsessed with logic.)

I've acquired Rationality: From AI to Zombies, and am slowly working my way ... (read more)

Hi, Charlie here.

I'm a middle-aged high-school dropout, married with several kids. Also a self-taught computer programmer working in industry for many years.

I have been reading Eliezer's posts since before the split from Overcoming Bias, but until recently only lurked the internet -- I'm shy.

I broke cover recently by joining a barbell forum to solve some technical problems with my low-bar back squat, then stayed to argue about random stuff. Few on the barbell forum argue well -- it's unsatisfying. Setting my sights higher, I now join this forum.

I'll probably start by trying some of the self-improvement schemes and reporting results. Any recommendations re: where to start?

1CharlieDavies11y
Never mind, I found the Group rationality diary which is exactly the right aggregation point for self-improvement schemes.

Apologies in advance for the novella. And any spelling errors that I don't catch (I'm typing in notepad, among other excuses).
It's always very nice when I come across something that reminds me that there are not only people in the world who can actually think rationally, but that many of them are way better at it than me.
I don't like mentioning this so early in any introduction, but my vision is terrible to the point of uselessness; I mostly just avoid calling myself "blind" because it internally feels like that would be giving up on the tiny pow... (read more)

After having read all of the Sequences, I suppose it's time I actually registered. I did the most recent (Nov 2012) survey. I'm doing my PhD in the genetics of epilepsy (so a neurogenetics background is implied). I'm really interested in branching out into the field of biases and heuristics, especially from a functional imaging and genetics perspective (my training includes EEG, MRI/fMRI, surgical tissue analysis, and all the usual molecular stuff/microarrays).

Experience with grant writing makes me lean more toward starting my own biotech or research firm and going from there, but academia is an acceptable backup plan.

Hi, I’m Cinnia, the name I go by on the net these days. I found my way here by way of both HPMOR and Luminosity about 8 months ago, but never registered an account until the survey.

Like Alan, I’m also in my final year of secondary school, though I’m on the other side of the pond. I love science and math and plan to have a career in neuroscience and/or psychiatry after I graduate. This year I finally decided to branch out my interests a bit and joined the local robotics club (a part of FIRST, if anyone’s curious), and it’s possibly the best extracurricular... (read more)

0Bugmaster11y
What are "Riso and Hudson’s Enneagram and Spiral Dynamics", out of curiosity ? I Googled the terms, but didn't see anything that I could immediately relate to Less Wrong, hence my curiosity.
2Cinnia11y
My apologies for not making it clearer. The Enneagram and Spiral Dynamics are two entirely separate subjects, though both are related to psychology. At least one other user here knows about the Enneagram (Mercurial, I think), though I'm not sure if anyone knows about the Spiral. The Enneagram is a model of human personality types, and the Spiral is a theory of evolutionary psychology. Personally, the way I've learned the Enneagram is from this book, with help from another person who is far more knowledgeable than I am. That same person helped me to understand the Spiral and didn't teach me with books, so I'm afraid I can't refer you to any particular resources, though I assure you there's plenty out there. Don Beck, who wrote a book on it in the late nineties, is the name that usually comes up whenever people talk about it, though.
0Bugmaster11y
Thanks for the info!
0Alicorn11y
Welcome! I like it when people come here by way of my stuff :)
0Cinnia11y
Thanks! Reading Luminosity and Radiance helped me move on from most of the disgust and anger I harbored toward the original series, and after reading the other posts on luminosity, I'm starting to observe and monitor my thoughts and actions more often.

Hi, I'm Alan, a student in my final year of secondary school in London, England. For some reason I'm finding it hard to remember how and when I stumbled upon Less Wrong. It was probably in March or April this year, and I think it was because Julia Galef mentioned it at some point, though I may be misremembering.

Anyway, I've now read large chunks of the Sequences (though I can never remember which bits exactly) and HPMOR, and enjoy reading all the discussion that goes on here. I've never registered as a user before as I've never felt the burning need to c... (read more)

Hello everyone, I'm Luc, better known on the web as lucb1e. (I prefer not to advertise my last name for privacy reasons.) I'm currently a 19 year old student, doing application development in Eindhoven, The Netherlands.

Like Aaron Swartz, I meant to post in discussion but don't have enough karma. I've been reading articles from time to time for years now, so I think I have an okay idea what fits on this site.

I think I ended up on LessWrong originally via Eliezer's NPC story. After reading that I looked around on the site, read about the AIBox experiment (wh... (read more)

[-][anonymous]11y70

Well, I haven't really figured out what you all need to know about me, but I suppose there must be something relevant. Let's start with why I'm here.

I can remember being introduced to Less Wrong in two ways, though I don't know in what order. One was through HPMoR, and the other through a post about Newcomb's problem. Neither of those really brought me here in a direct way, though. I guess I am here based on the cumulative sum of recommendations and mentions of LW made by people in my social circle, combined with a desire for new reading material that i... (read more)

Hi everyone!

I'm a theoretical physicist from Germany. My work is mostly about the foundations of quantum theory, but also information theory and non-commutative geometry. Currently I'm working as head of research in a private company.

As a physicist I have been confronted with all sorts of (semi-) esoteric views about quantum theory and its interpretation, and my own lack of a better understanding got me started to explore the fundamental questions related to understanding quantum theory on a rational basis. I believe that all mainstream interpretations h... (read more)

2NancyLebovitz11y
Welcome to Less Wrong! I'm interested in your idea that quantum theory doesn't have to be interpreted.
0aotell11y
Thanks, Nancy! Have you checked out the posts at my blog? I don't know your background, but maybe you will find them helpful. If you would like a more accessible breakdown, I can write something here too. In any case, thank you for your interest; it's highly appreciated!
0Mitchell_Porter11y
From your blog and your paper, your idea seems to be that the quantum state of the universe is a superposition, but only one branch at a time is ever real, and the selection of which branch will become real at a branching is nondeterministic. Well, Bohmian mechanics gets criticised for having ghost wavepackets in its pilot wave - why are they less real than the wavepackets which happen to be guiding the classical system - and you must be vulnerable to the same criticism. Why aren't the non-dominant branches (page 11) just as real as the dominant branch?
0aotell11y
Thank you for your feedback, Mitchell; I'm afraid you have not understood the paper correctly. First, whether a system is in a superposition depends on the basis you use to expand it; it's not a physical property but one of description. The mechanism of branching is actually derived, and it doesn't come from superpositions but from eigenstates of the tensor-factor-space description that an observer is unable to reconstruct. The branching is also perfectly deterministic. I think your best option for understanding how the dominance of one branch and the non-reality of the others emerge from the internal observation of unitary evolution is to work through my blog posts. I try to explain precisely where everything comes from and why it has to follow. The blog is also more comprehensible than the paper, which I will have to revise at some point. So please see if you can make more sense of it from the blog, and let me know if you still can't follow what I'm trying to say there. Unfortunately the precise argument is too long to present here in all detail.
0aotell11y
I think it will be helpful if I briefly describe my approach to understanding quantum theory, so that you can put my statements in the correct context. I assume a minimal set of postulates, namely that the universe has a quantum state and that this state evolves unitarily, generated by strictly local interactions. The usual state space is assumed. Specifically, there is no measurement postulate or any other postulate about probability measures or anything like that. Then I go on to define an observer as a mechanism within the quantum universe that is realized locally and gathers information about the universe by interacting with it. With this setup I am able to show that an observer is unable to reconstruct the (objective) density operator of a subsystem that he is part of himself. Instead, he is limited to finding the eigenvector belonging to the greatest eigenvalue of this density operator. It is then shown that the measurement postulate follows as the observer's description of the universe, specifically for certain processes that evolve the density operator in a way that changes the order of the eigensubspaces sorted by their corresponding eigenvalues. That is really all. There are no extra assumptions whatsoever. So if the derivation is correct, then the measurement postulate is already contained in the unitary structure (and the light-cone structure) of quantum theory.
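Restating that claim in symbols may make it easier to discuss; this is my paraphrase of the comment, not notation taken from aotell's paper:

```latex
% Global state |\Psi> evolves unitarily. S = the subsystem containing the
% observer, E = everything else. Objective state of S: the reduced operator
\rho_S = \operatorname{Tr}_E |\Psi\rangle\langle\Psi|
       = \sum_i p_i \, |\varphi_i\rangle\langle\varphi_i| ,
\qquad p_1 \ge p_2 \ge \dots

% Claim: an observer inside S cannot reconstruct \rho_S itself; the best
% description available to him is the dominant eigenvector |\varphi_1>.
% "Measurement" appears when an eigenvalue crossing (p_1 and p_2 trading
% places) discontinuously changes which eigenvector is dominant.
```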
3Mitchell_Porter11y
As you would know, the arxiv sees several papers every month claiming to have finally explained quantum theory. I would have seen yours in the daily listings and not even read it, expecting that it is based on some sort of fallacy, or on a "smuggled premise" - I mean that the usual interpretation of QM will be implicitly reintroduced (smuggled into the argument) in how the author talks about the mathematical objects, even while claiming to be doing without the Born rule. For example, it is very easy for this to happen when talking about density matrices. It is a tedious thing to go through a paper full of mathematics and locate the place where the author makes a conceptual mistake. It means you have to do their thinking for them. I have had another look at your paper, and seen a little more of how it works. Since you are here and wanting to promote your idea, I hope you will engage with me even if I am somewhat "lazy", in the sense that I haven't gone through the whole thing and understood it. So first of all, a very simple issue that you could comment on, not just for my benefit but for the benefit of anyone who wants to know what you're saying. An "observer" is a physical being who is part of the universe. The universe is described by a quantum state vector. The evolution of the state vector is deterministic. How do you get nondeterministic evolution of the observer's state, which ought to be just a part of the overall state of the universe? How do you get nondeterminism of the part, from determinism of the whole? We know how this works in the many-worlds interpretation: the observer splits into several copies that exist in parallel, and the "nondeterminism" is just an individual copy wondering why it sees one eigenvalue rather than another. The copy in the universe next door is thinking the same thing but with a different eigenvalue, and the determinism applies at the multiverse level, where both copies were deterministically produced at the same time. That's
1aotell11y
I see it exactly like you do. I too see the overwhelming number of theories that usually contain more or less well-hidden mistakes. I too know the usual confusions regarding the meaning of density matrices, the fallacies of circular arguments, and all the back doors for the Born rule. And it is exactly what drives me to deliver something that is better and does not have to rely on almost esoteric concepts to explain the results of quantum measurements. So I guarantee you that this is very well thought out. I have worked on this very publication for 4 years. I flipped the methods and results over and over again, looked for loopholes or logical flaws, tried to improve the argumentation. And now I am finally confident enough to discuss it with other physicists. Unfortunately, you are not the only physicist who has developed an understandable skepticism regarding claims like mine. This makes it very hard for me to find someone who does exactly what you describe as being hard work: thinking the whole thing through. I'm in desperate need of someone who will really look into the details and follow my argument carefully, because that is required to understand what I am saying. All answers that I can give you will be entirely out of context and probably start to look silly at some point, but I will still try. I do promise that if you take the time to read the blog (leave the paper for later) carefully, you will find that I'm not a smuggler and that I am very careful with deduction and logic. To answer your questions: first of all, it is important that the observer's real state and the state that he assumes himself to be in are two different things. The objective observer state is the usual state according to unitary quantum theory, described by a density operator, or as I prefer to call it, a state operator. There is no statistical interpretation associated with that operator; it's just the best possible description of a subsystem state. The observer does not know this state however, if he
0Mitchell_Porter11y
Here's another question. Suppose that the evolving wavefunction psi1(t), according to your scheme, corresponds to a sequence of events a, b, c,... and that the evolving wavefunction psi2(t) corresponds to another sequence of events A, B, C... What about the wavefunction psi1(t)+psi2(t)?
1aotell11y
You really come up with tricky questions, good :-). I think there are several ways to understand your question, and I am not sure which one was intended, so I'll make a few assumptions about what you mean. First, an event is a nonlinear jump in the time evolution of the subjectively perceived state. The objective global evolution is still unitary and linear, however. In between the perceived nonlinear evolution events you have ordinary unitary evolution, even subjectively. So I assume you mean the subjective states psi1(t) and psi2(t). The answer is then that, in general, superpositions are not valid subjective evolutions anymore. You can still use linearity piecewise between the events, but the events themselves don't mix. There are exceptions when both events happen at the same time and the output is compatible, as in it can be interpreted as having measured a subspace instead of a single state, which requires mutual orthogonality. So in other words: in general there is no global state that would locally produce a superposition if there are nonlinear local events. However, if you mean that psi1 and psi2 are the global states that produce the event lists a,b,c and A,B,C respectively, and you add those up, then the locally reconstructed state evolution will get complicated. If you add with coefficients psi(t) = c1 psi1(t) + c2 psi2(t), then you will get the event sequence a,b,c for |c1| >> |c2| and the sequence A,B,C for |c2| >> |c1|. What happens in between depends on the actual states and how their reduced-state eigenspaces interact. You may see an interleaved mix of events, some events may disappear, or you may see a brand-new event that wasn't there before. I hope this answers your question.
-2Mitchell_Porter11y
I find your reference to "the subjectively perceived state" problematic, when the physical processes you describe don't contain a brain or even a measuring device. Freely employing the formal elements and the rhetoric of the usual quantum interpretation, when developing a new one supposedly free of special measurement axioms and so forth, is another way for the desired conclusion to enter the line of reasoning unnoticed. In an earlier comment you talk about the "objective observer state", which you describe as the usual density operator minus the usual statistical interpretation. Then you talk about "reality for the observer" as "the eigenstate of the density operator with the greatest eigenvalue", and apparently time evolution "for the observer" consists of this dominant eigenstate remaining unchanged for a while (or perhaps evolving continuously if the spectrum of the operator is changing smoothly and without eigenvalue crossings?), and then changing discontinuously when there is a sharp change in the "objective state". Now I want to know: are we really talking about states of observers, or just of states of entities that are being observed? As I said, you're not describing the physics of observers, you're not even describing the physics of the measurement apparatus; you're describing simple processes like scattering. So what happens if we abolish references to the observer in your vocabulary? We have physical systems; they have an objective state which is the usual density operator; and then we can formally define the dominant eigenstate as you have done. But when does the dominant eigenstate assume ontological significance? For which physical systems, under which circumstances, is the dominant eigenstate meaningful - brains of observers? measuring devices? physical systems coupled to measuring devices?
1aotell11y
Your question is absolutely valid and also important. In fact, most of what I write in my paper and the blog is about answering precisely this.

My observer is well defined, as a mechanism that is part of a quantum system and interacts with the quantum system to gather information about it. He is limited by the locality of interaction and the unitary nature of the evolution. I imagine the observer to be a physicist, who tries to describe the universe mathematically, based on what he sees. But that is only a trick in order to have a mathematical formulation of the subjective view. The observer is prototypical for any mechanism that tries to create a model of its surroundings. This approach is very different from modeling cognitive mechanisms, and it's also much more general. The information restriction is so fundamental that you can talk about his subjective reconstruction of what is going on as local subjective reality, as everyone has to share it.

The meaning of the dominant eigensubspace is then derived from this assumption. Specifically, I am able to identify a non-trivial transformation on the objective density operator of the observer's subsystem that he cannot gain any knowledge about. This transformation creates a class of equivalent representations that are all equally valid descriptions which the observer could use for making a model of his environment (and himself). The arbitrariness of the representation connected with this reconstruction, however, forces him to reduce his state description to something more elementary, something that all equivalent descriptions have in common. And that turns out to be the dominant eigensubspace as his best option. This point is very important, and the derivation I provide in the blog is rigorous and detailed. The result is that the subjective reality as reconstructed by any observer like this evolves unitarily if the greatest eigenvalue does not intersect with other eigenvalues (the observer himself cannot know the value... (read more)
0Mitchell_Porter11y
I finally got as far as your main calculation (part IV in the paper). You have a two-state quantum system, a "qubit", and another two-state quantum system, a "photon". You make some assumptions about how the photon scatters from the qubit. Then you show that, given those assumptions, if the coefficients of the photon state are randomly distributed, then applying the Born rule to the eigenvalues of the old "objective state" (density operator) of the qubit gives the probabilities for what the "dominant eigenstate" of the new objective state of the qubit will be (i.e. after the scattering).

My initial thoughts are:

1) it's still not clear that this has anything to do with real physical processes;
2) it's not surprising that an algebraic combination of quantum coefficients with random variables is capable of yielding new random variables with a Born-rule distribution;
3) if you try to make this work in detail, you will end up with a new modification of quantum mechanics - perhaps a stochastic, piecewise-linear Bohmian mechanics, or just a new form of "objective collapse" theory - and not a derivation of the Born rule from within quantum mechanics.

Are you saying that actual physical systems contain populations of photons with randomly distributed coefficients such as you describe? Edit: Or perhaps just that this is a feature of electromagnetically mediated measurement interactions? It sounds like a thermal state, and I suppose it's plausible that localized thermal states are generically involved in measurement interactions, but these details have to be addressed if anyone is to understand how this is related to actual observation.
0aotell11y
There must be something that you have fundamentally misunderstood. I will try to clear up some aspects that I think may cause this confusion.

First of all, the scattering processes presented in the paper are very generic, to demonstrate the range of possible processes. The blog contains a specific realization which you may find closer to known physical processes. Let me explain in detail again what this section is about; maybe this will help to overcome our misunderstanding.

A photon scatters on a single qubit. The photon and the qubit each bring in a two-dimensional state space, and the scattering process is unitary and agrees with conservation laws. The state of the qubit before the interaction is known; the state of the photon is external to the observer's system and therefore entirely unknown, and it is independent of the state of the qubit. The result of the scattering process is traced over the external outgoing photon states to get a local objective state operator.

You then write that I apply the Born rule, but that's really exactly what I don't do. I use the earlier derived fact that a local observer can only reconstruct the eigenstate with the greatest eigenvalue. This will result in getting either the qubit's |0> or |1> state. In order to get the exact probability distribution of these outcomes you have to assume exactly nothing about the state of the photon, because it is entirely unknown. If you assume nothing, then all polarizations are equally likely, and you get an SU(2)-invariant distribution of the coefficients. That's all. There are no assumptions whatsoever about the generation of the photons, them being thermal or anything. Just that all polarizations are equally likely. This is a very natural assumption and hard to argue against. The result is then not only the Born rule but also an orthogonal basis which the outcomes belong to. So if you accept the derivation that the dominant eigensubspace is the relevant state description for a local internal observer... (read more)
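(Editorial aside: the "traced over the external outgoing photon states" step is an ordinary partial trace. A minimal sketch, assuming a toy post-scattering state rather than the paper's actual interaction:)

```python
import numpy as np

def local_state_operator(psi_joint):
    """Trace a joint qubit-photon pure state over the photon.

    psi_joint: length-4 vector ordered |qubit, photon>, qubit index first.
    Returns the 2x2 local ("objective") state operator of the qubit.
    """
    m = psi_joint.reshape(2, 2)   # rows: qubit basis, columns: photon basis
    return m @ m.conj().T         # Tr_photon |psi><psi|

# Toy entangled post-scattering state 0.6|00> + 0.8|11> (an assumption for
# illustration; the paper's scattering output is more involved).
psi = np.array([0.6, 0.0, 0.0, 0.8], dtype=complex)
rho = local_state_operator(psi)
evals, _ = np.linalg.eigh(rho)    # eigenvalues in ascending order
print("local state operator:\n", rho.real)
print("dominant eigenvalue:", evals[-1])  # 0.64 here, so |1> dominates
```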
1Mitchell_Porter11y
I understand that you have an algebraic derivation of Born probabilities, but what I'm saying is that I don't see how to make that derivation physically meaningful. I don't see how it applies to an actual experiment.

Consider a Stern-Gerlach experiment. A state is prepared, sent through the apparatus, and the electron is observed coming out one way or the other. Repeat the procedure with identical state preparation, and you can get a different outcome. For Copenhagen, this is just a routine application of the Born rule. Suppose we try to explain this outcome using decoherence. Well, now we are writing a wavefunction for the overall system, measuring device as well as measured object, and we can show that the joint wavefunction splits into two parts which are entirely decohered for all practical purposes, corresponding to the two different outcomes. But you still have to apply the Born rule to "obtain" a specific outcome.

Now how does your idea explain the facts? I really don't see it. At the level of wavefunctions, each run of the experiment is the same, whether you look at just the wavefunction of the individual electron, or at the joint wavefunction of electron plus apparatus. How do we get physically different outcomes? Apparently it requires these random scattering events, which do not feature at all in the usual analysis of the experiment. Are you saying that the electron that has passed through the Stern-Gerlach apparatus is really in a superposition, but for some reason I only see it as being located in one place, because that's the "dominant eigenstate"? Does this apply to the whole apparatus as well - really in a superposition, but experienced as being in a definite state, not because of decoherence, but because of scattering + my epistemic limitations??
1aotell11y
This would be a lot simpler if you weren't avoiding my questions. I have asked you whether you have understood and accept the derivation of the dominant eigenstate as the best possible description of the state of a system that the observer is part of. I have also asked if you have read my blog from the beginning, because I need to know where your confusion about what I am saying comes from.

The Stern-Gerlach experiment goes like this in my theory: the superposition of the spins of the silver atoms must be collapsed already at the moment the beam splits up, because a much later collapse would create a continuous position distribution. That also means a Copenhagen-like act of observation cannot happen any later, specifically not at a screen. This is a good indication that it is not observation itself that forces the silver atoms to localize but something else, something that relates to observation but is not the act of looking.

In the system that contains the experiment and the observer, the observer would always "see" a state that belongs to the dominant eigenstate of the objective state operator of that system. It doesn't really matter if in that system the observer is entangled with the spin state or not. As soon as you apply the field to separate the silver atoms you also create an energy difference (which is also flight-time dependent and scans through a rather large range of possible resonant frequencies). The photons in the environment that are out of the observer's direct observation and unknown to him begin to interact with the two spin states, and some do so in a way that creates spin flips, with absorption and stimulated emission, or just shake the atom a little bit. The sum of these interactions can create a total unitary evolution that creates two possible eigenvectors of the state operator, one containing each spin z-eigenstate, and a probability for each to be the dominant eigenstate that conforms with the Born rule. That includes the assumption that the photon state... (read more)
2Mitchell_Porter11y
Earlier, I should have referred to the calculation as being in part IV, not part V. I've read part V only now - including the stuff about "branch switching" and how "The observer can switch between realities without even noticing, because all records will agree with the newly formed reality." When I said these ideas led towards "stochastic, piecewise-linear Bohmian mechanics", I was more right than I knew!

Bohmian mechanics is rightly criticised for supposedly being just a single-world theory, yet having all those other world-branches in the pilot wave. If your account of reality includes wavefunctions with seriously macroscopic superpositions, then you either need to revise the theory so it doesn't contain such wavefunctions, or you need to embrace some form of many-world-ism. Supposing that "hidden reality branches" exist, but don't get experienced until your personal stream-of-consciousness switches into them, is juvenile solipsism. If that is where your theory leads, then I have little interest in continuing this discussion.

I was suspicious from the beginning about the role that the "subjectively reconstructed state of the universe" was playing in your theory, but I didn't know exactly what was going on. I had hoped that by discussing a particular physical setup (Stern-Gerlach), we would get to see your ideas in action, and learn how they work by demonstration. But now it seems that your outlook boils down to quantum dualism in a virtual multiverse. There is a subjective history which is a series of these "dominant eigenstates", plucked from a superposition whose other branches are there in the wavefunction, but which aren't considered fully real unless the subjective history happens to jump to them.

There is some slim possibility that your basic idea could play a role in the local microscopic dynamics of a new theory, distinct from quantum mechanics but which produces quantum mechanics in a certain limit. Or maybe it could be the basis of a new type of many-worlds theory... (read more)
0aotell11y
You keep ignoring the fact that the dominant eigenstate is derived from nothing but the unitary evolution and the limitations of the observer. This is not a "new theory" or an interpretation of any kind. Since you are not willing to discuss that part, your comments regarding the validity of my approach are entirely meaningless. You criticize my work based on results which are not to your liking, and not with respect to the methods used to obtain those results.

So I beg you one last time: let us rationally discuss my arguments, and not what you believe is a valid result or not. If you can show my arguments to be false beyond any doubt, based on the arguments that I use in my blog, or alternatively, if you can point out any assumptions that are arbitrary or not well founded, I will accept your statement. But not like this. If you claim to be a rationalist then this is the way to go. Any other takers out there who are willing to really discuss the matter without dismissing it first?

Edit: And just for the record, this has absolutely nothing to do with Bohmian mechanics. There is no extra structure that contains the real outcomes before measurement or any such thing. The only common point is the single reality. Furthermore, your quote of page 11 leaves out an important fact: namely, that the switching occurs only within the very short time window where the dominant eigenstates interact, and the state stabilizes for the long term within a few scattering events, of which you probably experience billions every second. There is absolutely no way for you to switch between dominant eigenstates with different memories regarding actual macroscopic events.
0shminux11y
Have fun :) I'll see if I can make sense of your blog.

Howdy, I'm a math grad student.

I discovered Less Wrong late last night when a friend linked to a post about enjoying "mere" reality, which is a position I've held for quite some time. That post led me to a couple posts about polyamory and Bayesianism, which were both quite interesting, and I say this as someone familiar with each topic.

Although I've read bits & pieces of Harry Potter & the Methods of Rationality, it wasn't until I browsed through this thread that I realized it was assembled here.

I will freely admit that I tend to be a bit... (read more)

2MBlume12y
Re: the SMBC strip, I remember tutoring physics in college, and being surprised that my students (all pre-med) had memorized constants I still routinely looked up.
0FiftyTwo12y
Interestingly, during my physics undergrad I never memorised constants, but I was annoyed that the only way to succeed in tests was to memorise formulae. By contrast, you can understand how the systems work, which I felt gave a more important level of understanding (e.g. you can fairly intuitively see how things work with momentum, acceleration, etc., and with a bit more effort get relativity). Though I suspect, in retrospect, my main motivation was annoyance that people I felt were less clever or understood less did better by putting in more work memorising and practising than me.
1FiftyTwo12y
What is your area of research/interest in Mathematics?
0Spinning_Sandwich12y
I'm primarily interested in number theory, but I have a great deal of interest in analysis generally (more pure analytic things than anything numerical), an interest which originally developed because analysis arises from set theory quite directly. I regret that I have never had direct access to a working logician. I wouldn't say that I have a research area yet, but I expect it will be in either algebraic number theory or PDE. I guess I'm in a rather small group of people who can say that with a straight face, since those are on opposite ends of the spectrum.
0FiftyTwo12y
Coincidentally, I am in the process of writing my final advanced logic assignment as we speak (I wouldn't call myself a working logician as a) I'm an undergrad, and b) I'm rarely working). My module focuses on the lead-up to Gödel's incompleteness theorem, so it overlaps with set-theory-related stuff a lot. I might be able to answer some general questions, but no guarantees. I know how you feel about doing very different things simultaneously; I've done both political philosophy and logic recently, an odd shift of gears.

Random question: you wouldn't know how to show the rand of an increasing total recursive function is a recursive set, would you? Or why, if a theory has arbitrarily large finite models, it has an infinite model?

An odd thing about doing high-level stuff is realising that the infrastructure you get used to at lower levels (Wikipedia articles, decent textbooks, etc.) ceases to exist. I feel increased sympathy for people before the information age.
0Spinning_Sandwich12y
You'd have to explain what the rand function is, since that is apparently an un-Google-able term unless you want Ayn Rand (I don't), the C++ random return function, or something called the RAND corporation. The second question is due to compactness. I'm the kind of person who reads things like Fixing Frege for fun after prelims are over. Edit: Oh, & I don't mean to be rude, but I probably wouldn't call anyone a working mathematician/logician unless they were actively doing research either in a post-doc/tenure position or in industry (eg at Microsoft).
0FiftyTwo12y
Ah sorry meant "range" not "rand," nevermind think I got it. [I apologise for shamelessly pumping you for question answers.] As for Ayn, no-one does. Would you recommend "Fixing Frege?" Think I've read bits and pieces of Burgess before but it never made a massive impact. I'd agree with you on the definition of working logician, the post docs and lecturers I've worked with are on a completely different level from even the smartest student. Not quite thousand year old vampire level but the same level of difference as a native language speaker and a learner.
0Spinning_Sandwich12y
It helps that generally (ie unless you're at Princeton/Cambridge/etc) the faculty at a given school will have come from much stronger schools than the grad students there, and similarly for undergrads/grads. And by "helps" I mean that it helps maintain the effect while explaining it, not that it helps the students any. As far as the range of a recursive function goes, isn't that the very definition of a recursive set? I'm definitely enjoying Fixing Frege. This is the third Burgess book I've read (Computability & Logic and Philosophical Logic being the other two), and when it's just him doing the writing, he's definitely one of the clearest expositors of logic I've ever read. Apparently, he also gets chalk all over his shirt when he lectures, but I've never seen this first-hand.
0Decius12y
Hey, if you need more than 2 sig figs from a calculation, you shouldn't be doing it manually anyway.
0Spinning_Sandwich12y
I say if you need an explicit computation with nonintegral coefficients, you shouldn't be working in that area anyway.
0Decius12y
For a short 45 degree offset: take the length, in inches, of the offset; add half of that number to it, then subtract 1/16" for each full inch.

To convert inches to decimal feet: 0" is 0.00 feet, 3" is .25 feet, 6" is .50 feet, 9" is .75 feet, 12" is 1.00 feet. Select the closest one of these, and then add or subtract .01 feet for each 1/8th inch.

To convert decimal feet to fractional inches: select the closest quarter and then add or subtract 1/8th inch for each .01 foot above or below.
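(Editorial aside: these rules of thumb check out numerically. A minimal sketch comparing them to the exact values, assuming the diagonal travel of a 45-degree offset is sqrt(2) times its length:)

```python
import math

def offset_rule_of_thumb(inches):
    """Decius's rule: the offset plus half of it, minus 1/16" per full inch."""
    return inches * 1.5 - (1.0 / 16.0) * int(inches)

def offset_exact(inches):
    """Exact diagonal travel of a 45-degree offset: sqrt(2) times the offset."""
    return inches * math.sqrt(2)

def inches_to_decimal_feet(inches):
    """Exact conversion; the quoted rule (nearest quarter-foot, then +/- .01 ft
    per 1/8") works because 1/8" is 0.0104 ft, close to .01 ft."""
    return inches / 12.0

for n in (4, 8, 12):
    print(f'{n}" offset: rule {offset_rule_of_thumb(n):.3f}" '
          f'vs exact {offset_exact(n):.3f}"')
print(f'7.5" = {inches_to_decimal_feet(7.5):.3f} ft')
```

The per-inch multiplier in the rule is 1.5 - 1/16 = 1.4375, against the true sqrt(2) = 1.4142, so the estimate runs slightly long (about 1.6%).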

My name is Chris Roberts. Professionally, my background is finance, but I have always been fascinated by science and have tried to apply a scientific approach to my thought and discussions. I find far too much thinking dominated by ideology and belief systems without any supporting evidence (let alone testable hypotheses). Most people seem to decide their positions first, then marshal arguments to justify their prejudgments. I have never considered myself a "rationalist", but rather an empiricist. I believe in democracy, the free market and... (read more)

Hello, I'm a 21 year old undergraduate student studying Economics and a bit of math on the side. I found LessWrong through HPMOR, and recently started working on the sequences. I've always been torn between an interest in pure rational thinking and an almost purely emotional / empathetic desire for altruism, and this conflict is becoming more and more significant as I weigh options moving forward out of undergrad (Peace Corps? Development economics?)... I'm fond of ellipses, Science Fiction novels and board games - I'll keep my interests to a minimum her... (read more)

7Kawoomba11y
Those are not at all at odds. Read e.g. Why Spock is Not Rational, or Feeling Rational. Your purely emotional / empathetic desire for altruism governs setting your goals; your pure rational thinking governs how you go about reaching your goals. You're allowed to be emotionally suckered, eh, influenced into doing your best (instrumental rationality) to do good in the world (for your values of 'good')!
4PhDre11y
Thank you for the reading suggestions! Perhaps my mind has already packaged Spock / lack of emotion into my understanding of the concept of 'Rationality.' To respond directly: though if pure emotion / altruism sets my goals, the possibility of irrational / insignificant goals remains, no? If, for example, I only follow pure emotion's path to... say... becoming an advocate for a community through politics, there is no 'check' on the rationality of pursuing a political career to achieve the most good (which, again, is a goal that requires rational analysis)? In HPMoR, characters are accused of being 'ambitious with no ambition' - setting my goals with an empathetic desire for altruism would seem to put me in this camp. Perhaps my goal, as I work my way through the sequences and the site, is to approach rationality as a tool / learning process of its own, and see how I can apply it to my life as I go. Halfway through typing this response, I found a fitting quote in the Twelve Virtues of Rationality.
1Kawoomba11y
There is no "correct" way whatsoever in setting your terminal values, your "ultimate goals" (other agents may prefer you to pursue values similar to their own, whatever those may be). Your ultimate goals can include anything from "maximize the number of paperclips" to "paint everything blue" to "always keep in a state of being nourished (for the sake of itself!)" or "always keep in a state of emotional fulfillment through short-term altruistic deeds". Based on those ultimate goals, you define other, derivative goals, such as "I want to buy blue paint" as an intermediate goal towards "so I can paint everything blue". Those "stepping stones" can be irrational / insignificant (in relation to pursuing your terminal values), i.e. you can be "wrong" about them. Maybe you shouldn't buy blue paint, but rather produce it yourself. Or rather invest in nanotechnology to paint everything blue using nanomagic. Only you can (or can't, humans are notoriously bad at accurately providing their actual utility functions) try to elucidate what your ultimate goals are, but having decided on them, they are supra-rational / beyond rational / 'rational not applicable' by definition. There is no fault in choosing "I want to live a life that maximizes fuzzy feelings through charitable acts" over "I'm dedicating my life to decreasing the Gini index, whatever the personal cost to myself."

Hello,

I found this site via HPMOR, which was the most awesome book I have read in several years. Besides being awesome as a book, there were a lot of moments while reading when I thought: wow, there is someone who really thinks quite like myself. (Which is unfortunately something I do not experience too often.) Thus I was interested in who the author of HPMOR is, so I googled "less wrong".

This site really delivered what HPMOR promised, so I spent quite some time reading through many articles, absorbing a lot of new and interesting concepts.

Regarding my own person, ... (read more)

Long-time lurker, first-time poster. I'm 21, male, and a college student majoring in economics and minoring in CS. I first heard of Eliezer Yudkowsky when a couple of my friends discovered Harry Potter and the Methods of Rationality two years ago. I started reading it and enjoyed it immensely at first, but as the plot eclipsed what I'd call the "cool tricks", I became less interested and dropped it. More recently, a different friend linked me to Intellectual Hipsters. After reading it, I read several sequences and was hooked.

My journey to rationa... (read more)

I wandered onto this site, read an article, read some interesting discussion on it, and decided to take the survey. The survey prompted some interesting discussion, and I enjoyed the extra credit, the majority of which I did, with the exception of the IQ test, which I couldn't get to work right and will do later. I enjoyed the discussion I read, though, and decided this would be an interesting site to read more on. I don't know yet how much discussion I'll contribute, but when I see an interesting discussion I'm sure I'll join in.

I don't have too much to say about myse... (read more)

Hi Guys,

I found out about this place from Methods of Rationality and have been reading the sequences for a few months now. I don't have a background in science or mathematics (I just finished reading law at university), so I've yet to get to the details of Bayes, but I've been very intrigued by all the sequences on cognitive bias. This site was the trigger for me becoming interested in the mind-blowing realities of evolution, and prompted me to finally pull my finger out and shift from non-thinking agnosticism to atheism.

I'm still adjusting but I feel this site has already helped start to clean up my thinking, so thanks to everyone for making coming here such a life-changing experience.

David

I used to have a different account here, but I wanted a new one with my real name so I made this one.

I study computer and electrical engineering at the University of Nebraska-Lincoln, though I'm not finding it very gratifying (rationalists are rare creatures around here for some reason), and I'm trying as hard as I can to find some other way to get paid to code/think so I can drop out. Here's my occasionally-updated reading list, and my favorite programming language is Clojure.

Peter here,

I stumbled onto LW from a link on TvTropes about the AI Box experiment. Followed it to an explanation of Bayes' Theorem on Yudkowsky.net 'cause I love statistics (the rage I felt on learning that not one of my three statistics teachers ever mentioned Bayes was an unusual experience).

I worked my way through the sequences and was finally inspired to comment on Epistemic Viciousness and some of the insanity in the martial arts world. If your goal is to protect yourself from violence, martial arts is more likely to get you hurt or thrown in jail.

It seems... (read more)

Hellooo! I de-lurked during the survey and gradually started rambling at everyone but I never did one of these welcome posts!

My exposure to rationality started with the idea that your brain can have bugs, which I had to confront when I was youngish because (as I randomly mentioned) I have a phobia that started pretty early. By then I had accurate enough mental models of my parents to know that they wouldn't be very helpful/accommodating, so I just developed a bunch of workarounds and didn't start telling people about it until way later. The experience helped ... (read more)

0CCC11y
Fortunately, it's also very easy to get a basic grip on it. Multiplication, addition, and a few simple formulae can lead to some very interesting results.

A probability is always written as a number between 0 and 1, where 1 is absolute certainty and 0 cannot happen in any circumstances at all, no matter how unlikely. A one in five chance is equal to a probability of 1/5, or 0.2. The probability that event E, with probability P, is false is 1-P. The chance of independent events E and F, with probabilities P and Q, occurring in succession is P*Q. (This leads to an interesting result if you try to work out the odds of at least two people in a crowd sharing a birthday.)

Probability theory also involves a certain amount of counting. For example: what are the chances of rolling a seven with two ordinary, six-sided dice (assuming that the dice are fair, and not weighted)? Each die has a one-in-six chance of showing any particular number. For a given pair of numbers, that's 1/6*1/6 = 1/36. And, indeed, if you list the results you'll find that there are 36 pairs of numbers that could turn up: (1, 1), (1, 2), (2, 1), (1, 3)... and so on. But there's more than one pair of numbers that adds up to 7; (2, 5) and (1, 6), for example. So what are the odds of rolling a 7 with a pair of dice?
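(Editorial aside: for anyone who would rather enumerate than count by hand, a minimal sketch settling both the dice question and the birthday aside:)

```python
from itertools import product

# All 36 equally likely rolls of two fair six-sided dice.
rolls = list(product(range(1, 7), repeat=2))
sevens = [r for r in rolls if sum(r) == 7]
print(len(sevens), "out of", len(rolls))    # 6 out of 36, i.e. 1/6

# The birthday aside, via the complement rule P(shared) = 1 - P(all distinct).
p_distinct = 1.0
for k in range(23):                          # 23 people, 365 equally likely days
    p_distinct *= (365 - k) / 365
print("P(shared birthday among 23):", round(1 - p_distinct, 3))  # about 0.507
```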
0jooyous11y
Yeah, it's the counting problems that I've been avoiding! Because there are some that seem like you've done them correctly and someone else does it differently and gets a different answer and they still can't point out what you did wrong so you never quite learn what not to do. And then conditional probabilities turn into a huge mess because you forget what's given and what isn't and how to use it togetherrrr. I hope it's a sixth, but at least this question is small enough to write out all the combinations if you really have to. It's the straight flushes and things that are murder.
1CCC11y
Ah, I see. You'll be glad to know that there are often ways to shortcut the counting process. The specifics often depend on the problem at hand, but there are a few general principles that can be applied; if you give an example, I'll have a try at solving it. It is, indeed.
0Bugmaster11y
In fact, many if not most concepts in probability theory deal with various ways of avoiding the counting process. It gets way too expensive when you start handling billions of combinations, and downright impossible when you deal with continuous values.
0jooyous11y
asdfjkl; I wrote out all the pairs. -_- Can't trust these problems otherwise! Grumble.
0arundelo11y
"You are never too cool to draw a picture" -- or make a list or a chart. This particular problem is well served by a six-by-six grid.
1jooyous11y
Dice are okay; it's the problems with cards that get toooo huge. :)
0Qiaochu_Yuan11y
Can you give an example?
0jooyous11y
I will try to hunt one down! It's usually the problems where you have to choose a lot of independent attributes but also be careful not to double-count. Also, when someone explains it, it's clear to see why their way is right (or sounds right), but it's not clear why your way is wrong.
0Qiaochu_Yuan11y
Yes, I notice that people are in general either bad at giving or reluctant to give this kind of feedback. I think I'm okay at this, so I'd be happy to do this by PM for a few problems if you think that would help.

Hello everyone!

My personal and professional development keep leading me back to the LessWrong sequences, so I've gathered up enough humility to join in the discussions. I hope to meet your high standards.

I'm 27 and my background is in business and the life sciences; I see rationality as a critically important tool in these areas, but ultimately a relatively minor tool for life as a successful human animal. As such I see this community as being similar to a bodybuilding/powerlifting community, where the interest is in training the rational faculty instead of physical strength.

Edit: Wow, all my comments downvoted? That's a pretty strongly negative response. Care to explain?

5CoffeeStain11y
From what I can see, people probably thought you were belaboring a point which was not a part of the discussion at hand. You said you were addressing the moral value of "there exists 3^^^3 people AND..." versus the situation without that prefix, but the people discussing it did not take that interpretation of the problem, nor did Eliezer when he asked it. You might say that to determine the value of 3^^^3 people getting specks in their eyes you would have to presuppose it included the value of them existing, but nobody was discussing that as if it were part of the problem. It sucks, yeah, but the way that people prefer to have discussions wins out, and you can only prefer it or not, or persuade in the right channels. A good lesson to learn, and don't be discouraged.
1khriys11y
Thank you.

Greetings! I am Viktor Brown (please do not spell Viktor with a c), and I tend to go by deathpigeon (please do not capitalize it or spell pigeon with a d) on the internet. (I cannot actually think of a place I don't go by deathpigeon...) I'm currently 19 years old. I'm unemployed and currently out of school, since my parents cut off paying for my schooling. I consider myself to be a rationalist, a mindset that comes from how I was raised rather than any particular moment in my life. When I was still in university, I was studying computer science, a sub... (read more)

3ialdabaoth11y
ouch... who the hell downvotes a greeting post?

Hi everyone!

Well, I'm new-ish here, and this site is really big, so I was wondering where I should start, like, which articles or sequences I should read first?

Thanks!

0[anonymous]11y
Are there any topics you're particularly interested in?

Howdy. My name is Alexander. I've read a lot of LW, but only recently finally registered. I learned about LW from RationalWiki, where I am a mod. I have read most of the sequences, and many of them are insightful, although I am skeptical about the utility of such posts as the Twelve Virtues, which seeks to clothe a bit of good advice in the voluminous trappings of myth. HPMOR is also good. I don't anticipate engaging in much serious criticism of these things, however, because I have little experience in the sciences or mathematics, and often struggle... (read more)

[This comment is no longer endorsed by its author]

First of all, I encourage you to take advantage of the counseling and psychological services available to you on campus, if you have not already done so. They're very familiar with psychological pain.

Second, I encourage you to go to a Less Wrong meetup when you get the chance. There's a good chance you'll find people there who are as smart as you and who care about some of the same things you care about. There are listings for meetups in Toronto, Albany, and New York City. I can personally attest that the NYC meetup is great and exists and has good people.

Finally, I wish I could point you to resources that are especially appropriate for trans people, but I don't know what they are.

I really hope that you will be okay.

-2sparkles11y
7khafra11y
I know there's at least 3 MtF semi-regulars on this board, and one more who turned down Aubrey de Grey for a date once; so it's not like you're alone here. But I agree with Kawoomba that there are resources focused more closely on your problems than a forum on rationality, and these will help better and quicker. If you cannot intellectually respect anyone there enough that talking would help, Shannon Friedman does life coaching (and Yvain is on the last leg of his journey to becoming a psychiatrist). If there's a sequence that would directly help you, it's probably Luminosity.
-2sparkles11y
5gwern11y
RIT can be a pretty miserable place in the winter, I know from personal experience. Maybe you have some seasonal affective disorder in addition to your other issues? Vitamin D in the morning and melatonin in the evening might help, and of course exercise is good for all sorts of mood related issues - so joining one of the clubs might be a good idea, or take a class like fencing (well, I enjoyed the fencing class anyway...) or start rockclimbing at the barn. Clubs might be a good idea in general, actually - the people in the go club were not stupid when I was there and it was nice hanging out in Java Wally's.
0sparkles11y
4JohnWittle11y
It sounds like you have some extremely strong Ugh Fields. It works like this: a long, long time ago, you had an essay due on Monday and it was Friday. You had the thought, "Man, I gotta get that essay done", and it caused you a small amount of discomfort when you had the thought. That discomfort counted as negative feedback, as a punishment, to your brain, and so the neural circuitry which led to having the thought got a little weaker, and the next time you started to have the thought, your brain remembered the discomfort and flinched away from thinking about the essay instead. As this condition reinforced itself, you thought less and less about the paper, and then eventually the deadline came and you didn't have it done. After it was already a day late, thinking about it really caused you discomfort, and the flinch got even stronger; without knowing it, you started psychologically conditioning yourself to avoid thinking about it. This effect has probably been building in you for years.

Luckily, there are some immediately useful things you can do to fight back. Do you like a certain kind of candy? Do you enjoy tobacco snuff? You can use positive conditioning on your brain the same way you did before, except in the opposite direction. Put a bag of candy on your desk, or in your backpack. Every time you think about an assignment you need to do, or how you have some job applications to fill out, eat a piece of candy. As long as you get as much pleasure out of the candy as you get pain out of the thought of having to do work, the neural circuitry leading to the thought of doing work will get stronger, as your brain begins to think it is being rewarded for having the thought. It doesn't take long at all before the nausea of actually doing work is entirely gone, and you're back to being just "lazy". But at this point, the thought of doing work will be much less painful, and the candy (or whatever) reward will be much stronger. All you have to do is trick your brain... (read more)
-2sparkles11y
3MixedNuts11y
Oh hey, you're girl!me. Maybe what helped me will help you? Getting on bupropion stopped me being miserable and hurting all the time, and allowed me to do (some) stuff and be happy. That let me address my executive function issues and laziness; I'm not there yet, but I'm setting up a network of triggers that prompt me to do what I need. This will hurt like a bitch. When you get to a semi-comfortable point you just want to stop and rest, but if you do that you slide back, so you have to push through pain and keep going. But once the worst is over and you start to alieve that happiness is possible and doing things causes it, it gets easier. So I'd advise you to drag yourself to a psychiatrist (or perhaps a therapist who can refer you) and see what they can do. If you want friends and/or support, you could drop by on #lesswrong on Freenode, it's full of cool smart people. If I can help, you know where to find me.
0sparkles11y
0MixedNuts11y
I showed up at the doctor's during drop-in hours. I was "voluntarily" admitted to the hospital, put on fluoxetine (Prozac), and discharged a few days later. After some months, it became clear Prozac was making me worse. Since my depression is the atypical subtype (low motivation, can become happy by doing things, oversleeping, overeating), they switched me to bupropion (Wellbutrin). That worked. Doctors (or at least these particular doctors) know their stuff, but I double-check everything on Crazy Meds.
3TheOtherDave11y
What would help?
0sparkles11y
2Strange711y
What worked for me in a related situation was leveraging comparative advantage by:

1) Finding somebody who isn't broken in the same specific way,
2) Providing them with something they considered valuable, so they'd have reason to continue engaging,
3) Conveying information to them sufficient to deduce my own needs,
4) Giving them permission to tell me what to do in some limited context related to the problem,
5) Evaluating ongoing results vs. costs (not past results or sunk costs!) and deepening or terminating the relationship accordingly.

None of these steps is trivial; this is a serious project which will require both deep attention and extended effort. The process must be iterated many times before fully satisfactory results can reasonably be expected. It's a very generalized algorithm which could encompass professional counseling, romance, or any number of other things.
0sparkles11y
2Strange711y
Given that you're abnormally intelligent, you probably need less information to deduce any given thing than most people would. The flip side of that is, other people need more information than you think they will, especially on subjects you've studied extensively (such as the inside of your own mind). Given that you haven't figured out the problem yourself yet, they probably also need more information than you currently have. You might be able to save yourself some trouble (not all of it, but every little bit counts) on research and communication in step #3 by aiming step #1 at people who've already studied the general class of problem in depth. Does RIT have a psych department? Make friends with some of the students there and they'll probably give you a long list of bad guesses (each of which is a potential lead on the actual problem) for free.

Given that you're trans, you probably also have an unusually good idea of what you want. Part of the difficulty of step #2 is that other people cannot be counted on to be fully aware of, let alone adequately explain, their own desires. If your introspection is chewing itself bloody, maybe it just needs a metaphorical bite block. Does RIT have a group of people who get together for tabletop roleplaying games? Those are going to be big soon. http://thealexandrian.net/wordpress/24656/

The goal is to connect with people who will, for one reason or another, help you without being asked, such that the help will keep coming even while you are unable to ask. They don't necessarily need to do it consciously, or in a way that makes any sense.

What exactly do you mean by "writing?"
1David Althaus11y
You could start or attend a LessWrong meetup; maybe you'll find some like-minded people. Or talk to some of your professors; some of them should be pretty smart. Maybe also try meeting new folks, maybe older students? Go to OkCupid, search for lesswrong, yudkowsky or rationality and meet some like-minded people. You don't have to date them. I know, it's pretty hard; I myself don't click with 99.9% of all people, and I'm definitely under +3 sigma.
0sparkles11y
1Endovior11y
I think I understand. There is something of what you describe here that resonates with my own past experience. I myself was always much smarter than my peers; this isolated me, as I grew contemptuous of the weakness I found in others, an emotion I often found difficult to hide. At the same time, though, I was not perfect; the ease at which I was able to do many things led me to insufficient conscientiousness, and the usual failures arising from such. These failures would lead to bitter cycles of guilt and self-loathing, as I found the weakness I so hated in others exposed within myself. Like you, I've found myself becoming more functional over time, as my time in university gives me a chance to repair my own flaws. Even so, it's hard, and not entirely something I've been able to do on my own... I wouldn't have been able to come this far without having sought, and received, help. If you're anything like me, you don't want to seek help directly; that would be admitting weakness, and at the times when you hurt the worst, you'd rather do anything, rather hurt yourself, rather die than admit to your weakness, to allow others to see how flawed you are. But ignoring your problems doesn't make them go away. You need to do something about them. There are people out there who are willing to help you, but they can't do so unless you make the first move. You need to take the initiative in seeking help; and though it will seem like the hardest thing you could do... it's worth it.
-2sparkles11y
-10Kawoomba11y

It really feels good to be here. The name alone sounds comforting... 'less wrong'. I've always loved to be around people who write and provide intuitive solutions to everyday challenges. Guess I'm gonna read a few posts and get acquainted with the customs here, then make meaningful contributions too.

Thanks Guys for this great opportunity.

Hi! I'm shard. I have been looking for a community just like this for quite awhile. Someone on the Brain Workshop group recommended this site to me. It looks great; I am very excited to sponge up as much knowledge as I can, and hopefully to add a grain someday.

I love the look of the site. What forum or BB software do you use? Or is it a custom one? I've never seen one like it; it's very clean, and I'd like to use it for a forum I wanted to start.

1Alicorn11y
The software behind the site is a clone of Reddit, plus some custom development.
0shardfilterbox11y
Well very good job, it looks excellent. Much cleaner and easier on the eyes.
0MBlume11y
Less Wrong code

Greetings. My name is Albert Perrien. I was initially drawn to this site by my personal search on metacognition; and only really connected after having stumbled across “Harry Potter and the Methods of Rationality”, which I found an interesting read. My professional background is in computer engineering, database administration, and data mining, with personal studies of Machine Learning, AI and mathematics. I find the methods given here to promote rational thought and bias reduction fascinating, and the math behind everything enlightening.

Recently I’ve b... (read more)

It took me a few hours to find this thread like a kid rummaging through a closet not knowing what he is looking for.

As my handle indicates, I am Lloyd. Not much I think is worth saying about myself but I would like to ask a few questions to see what interests readers here, if anyone reads this, and present a sample of where my thinking may come from.

Considering the psychological model of five senses we are taught from grade school on, is there a categorical difference in our ability to logically perceive that 2+2=4 vs perceiving the temperature is decreasi... (read more)

4Mitchell_Porter12y
This was the hardest of your questions to get a grip on. :-) You mention disaster fiction, Star Trek, 1984, and Brave New World, and you categorize the first two as post-industrial and the second two as bad-industrial perpetuated. If I look for the intent behind your question... the idea seems to be that visions of the future are limited to destruction, salvation from outside, and dystopia.

Missing from your list of future scenarios is the anodyne dystopia of boredom, which doesn't show up in literature about the future because it's what people are already living in the present, and that's not what they look for in futurology, unless they are perverse enough to want true realism even in their escapism, and experienced enough to know that real life is mostly about boredom and disappointment. The TV series "The Office" comes to mind as a representation of what I'm talking about, though I've never seen it; I just know it's a sitcom about people doing very mundane things every day (like every other sitcom) - and that is reality.

If you're worried that reality might somehow just not contain elements that transcend human routine, don't worry, they are there; they pervade even the everyday world, and human routine is something that must end one day. Human society is an anthill, and anthills are finite entities: they are built, they last, they are eventually destroyed. But an anthill can outlive an individual ant, and in that sense the ant's reality can be nothing but the routine of the anthill. Humans are more complex than ants and their relation to routine is more complex. The human anthill requires division of labor, and humans prepared to devote themselves to the diverse functional roles implied, in order to exist at all. So the experience of young humans is typically that they first encounter the boredom of human routine as this thing that they never wanted, that existed before them, and which it will be demanded that they accept. They may have their own ideas about... (read more)
0lloyd12y
I think you got a grip on the gist. I didn't mention boredom in my question, but you went straight to where I have been in looking at the topic. But I do not think there is reason to believe boredom is a basic state of human life, indicative of how it has always been; I think it may be more related to the industrial lifestyle.

Take the 2012 Mayan calendar crap. Charles Mann concludes his final appendix in "1491" with a mention of the pop-phenom: "Archaeologists of the Maya tend to be annoyed by 2012 speculation. Not only is it mistaken, they believe, but it fundamentally misrepresents the Maya. Rather than being an example of native wisdom, scholars say, the apocalyptic 'prophecy' is a projection of European values onto non-European people." The apocalypse is the end of boredom for a bored people.

I personally do not like the boring; as you suggested, I have come to grips with that and live accordingly.
-2Mitchell_Porter12y
Don't tell anyone, but I'm not immune to 2012-ism myself. At the very least, that old Mayan calendar is one of the more striking intrusions of astronomical facts into human culture; it seems to be built around Martian and Venusian cycles, and the precession of Earth.
1lloyd12y
So part of being new here...the karma thing. Did you just get docked karma for the assertion you are into 2012-ism? I didn't do it. Is there a list of taboos? I got docked for a comment on intuition (I speculate that is why).
2TheOtherDave12y
There's no list. In general, people downvote what they want to see less of on the site, and upvote what they want to see more of. A -1 score means one more person downvoted than upvoted; not generally worth worrying about. My guess is someone pattern-matched MP's comment to fuzzy-headed mysticism.
0lloyd12y
The idea of 'what you want to see less of' is fairly interesting. On a site dedicated to rationality I was expecting that one would want to see:

- the discussion of rationality explicitly = the Sequences
- examples of rationality in addressing problems
- a distinction between rationality and other thinking processes, and when rational thinking is appropriate (i.e. the boundaries of rationality)

It would be a reasonable hypothesis - based on what I have seen - that the last point causes negative feedback. MP demonstrated a great deal of rationality (and knowledge) in addressing the questions I raised in the first post. Given this, I find it intriguing that he is captivated in any way by 2012-ism.

Anyway, I would expect upvotes for any comment that clarifies or contributes to the parent, downvotes for comments which obscure, and nothing for humor or personal side notes (they can generate productive input and help create an atmosphere of camaraderie). I saw the thread on elitism somewhere and noted that the idea of elitism and the karma system are intertwined. It seems a simple explicit description of karma and what it accomplishes may be a good thread for a top member to start - if it exists already, I was implying I sought it in my request for a 'list of taboos'. It may or may not be a good idea to tell people the criteria for up/down-voting, but is there a discussion about that?
2TheOtherDave12y
Different people want to see, and want to avoid seeing, different things. The net karma score of any given comment is an expression of our collective preferences, filtered extremely noisily through whichever subset of the site happens to read any given comment. I would prefer LW not try to impose voting standards beyond "upvote what you want, downvote what you don't want." If we want a less crowdsourced value judgment, we can pay someone we trust to go through and rate all the comments, though I would not contribute to that project.
0shminux12y
Or something they disagree with strongly enough. Or if they dislike the poster. Some just press a wrong button. Some have cats walking on keyboards. If you get repeatedly downvoted to -3 or so, then there is a cause for concern.
3Mitchell_Porter12y
Since life is considered a solved problem by science, any remaining problem of "aliveness" is treated as just a perspective on, or metaphor for, the problem of consciousness. But talking about aliveness has one virtue: it militates against the tendency among intellectuals to identify consciousness with intellectualizing, as if all that is to be explained in consciousness is "thinking" and passive "experiencing". The usual corrective to this is to talk about "embodiment". And it's certainly a good corrective; being reminded of the body reintroduces the holism of experience, as well as activity, the will, and the nonverbal as elements of experience. Still, I wouldn't want to say that talking about bodies as well as about consciousness is enough to make up for the move from aliveness to consciousness as the discursively central concept. There's an inner "life" which is also obscured by the easily available ways of talking about "states of mind"; and at the other extreme, being alive is also suggestive of the world that you're alive in, the greater reality which is the context to all the acting and willing and living. This "world" is also a part of cognition and phenomenology that is easily overlooked if one sticks to the conventional tropes of consciousness.

So when we talk about a living universe, we might want to keep all of that in mind, as well as more strictly biological or psychological ideas, such as whether it's like something to be a star, or whether the states and actions of stars are expressive of a stellar intentionality, or whether the stars are intelligences that plan, process information, make choices, and control their physical environment. People do exist who have explored these ways of thought, but they tend to be found in marginal places like science fiction, crackpot science, and weird speculation. Then, beyond a specific idea like living stars, there are whole genres of what might be called philosophical animism and spiritual animism. I think... (read more)
0lloyd12y
That is an impressive collection of links you put together. You have provided what I was looking for in a greater scope than I expected. The Star Larvae Hypothesis and Guy Murchie express the eccentricity in thought I was hoping someone would have knowledge of. I like to see the margins, you see. How did you come to all those tidbits? It took me a single question on this forum for me to get that scope and for that I owe you some thanks. I really do not have much of a hobby in pondering the intentions of stellar beings, but in coming up with queries that help me find the edges, margins, or whatever of this evolved social consciousness I am part of. I do find it interesting that someone would be able to compile those links. Was this a personal interest of yours at some time or part of a program of study you came across? Or do you have some skill at compiling links that is inexplicable?
0Mitchell_Porter12y
It's a bit of both.
1Mitchell_Porter12y
Whether there is a "logic-sense" is a question about consciousness so fundamental and yet so hard that it's scarcely even recognized by science-friendly philosophy of mind. Phenomenologists have something to say about it, because they are just trying to characterize experience, without concern for whether or how their descriptions are compatible with a particular scientific theory of nature. But if you look at "naturalist" philosophers (naturalism = physicalism = materialism = an intent that one's philosophy should be consistent with natural science), the discussion scarcely gets beyond the existence of colors and other "five-sense" qualities. The usual approach is to talk as if a conscious state is a heap of elementary sense-qualia, somehow in the same way that a physical object could be a pile of atoms. But experience is about the perception of form as well, and this is related to the idea of a logic-sense, because logic is about concepts and abstract properties, and the properties of a "form" have an abstractness about them, compared to the "stuff" that the form is made from.

In the centuries before Kant and Husserl, there was a long-running philosophical "problem of universals", which is just this question of how substance and property are related. How is the greenness in one blade of grass related to the greenness in another blade of grass? Suppose it were the exact same shade of green. Is it the same thing, mysteriously "exemplified" in two different places? If you say yes, then what is "exemplification" or "instantiation"? Is it a new primitive ontological relation? If you say no, and say that these are separate "color-instances", you still need to explain their sameness or similarity.

With the rise of consciousness itself as a theme of human thought, the problem has assumed a new character, because now the greenness is in the observer rather than in the blade of grass. We can still raise the classic questions, about the greenness in one experience and the... (read more)
0lloyd12y
Thanks for addressing all three of the questions. Your ability to expound on such a variety of topics is what I was hoping someone in this forum could do. Quite insightful.
0DaFranker12y
Hello! Welcome to LessWrong!

This post reads very much like a stream-of-consciousness dump (the act of writing everything that crosses your mind as soon as you become aware that you're thinking it, and then just writing more and more as more and more thoughts come up), which I've noticed is sometimes one of those non-rules that some members of the community look upon unfavorably.

Regarding your questions, it seems like many of them are the wrong question or simply come from a lack of understanding of the relevant established science. There may also be some confusion regarding words that have no clear referent, like your usage of "realness". Have you tried replacing "realness" with some concrete description of what you mean, in your own mind, before formulating that question? If you haven't, then maybe it's only a mysterious word that feels like it probably means something, but turns out to be just a word that can confuse you into thinking of separate things as if they were the same, and make it appear as if there is a paradox or a grand mysterious scientific question to answer.

Overall, it seems to me like you would greatly benefit from learning the cognitive science taught/discussed in the Core Sequences, particularly the Reductionism and Mysteriousness ones, and the extremely useful Human's Guide to Words (see this post for a hybrid summary / table of contents). Using the techniques taught in Reductionism and the Guide to Words is often considered essential to formulating good articles on LessWrong, and unfortunately some users will disregard comments from users that don't appear to have read those sequences.

I'd be happy to help you a bit with those questions, but I won't try to do so immediately in case you'd prefer to find the solutions on your own (be it answers or simply dissolving the questions into smaller parts, or even noticing that the question simply goes away once the word problems are taken away).
1lloyd12y
I will tend to violate mores, but I do not wish to seem disrespectful of the culture here. In the future I will more strictly limit the scope of the topic, but considering it was an introduction... I just wished to spread out questions from myself rather than trivia about myself.

I don't think I am asking the wrong question. That is the best reply I can formulate against the charge. As for my understanding of the established science, I thought I was reasonably versed, but in a forum such as this I am highly skeptical of my own consumption of available knowledge. From experience, I am usually considered knowledgeable in the fields of psychology I am familiar with: textbook staples like Skinner, Freud, Jung, etc., and, e.g., Daniel Dennett, Aronson, and Lakoff. But that doesn't make me feel more or less qualified to ask the question I proposed. In astronomy I have gone through material ranging from Chandrasekhar to Halton Arp, and the view that the stars are subject to, rather than direct, gravitational phenomena is prevalent; i.e., stars act like rocks and not like living beings.

Please elaborate on how 'realness' is unclear in its usage. I would like to know the more acceptable language. The concept is clear in my mind and I thought the diction was commonly accepted. If the subjects I have brought up are ill-framed, then I would be happy to be directed to the more encompassing discussion.

I have browsed much of what you directed me to. The structure of this site is a bit alien to my cognitive organization, but the material contained within is highly familiar. Please help me with the questions.
0DaFranker12y
Alright, let's start at the easy part concerning those questions: Yes. In a large set of possible categorical distinctions, they are in different categories. The true, most accurate answer is that they are not exactly the same; this was obvious to you before you even formulated the question, I suspect. They are at slightly different points in the large space of possible neural patterns. Whether they are "in the same category" or not depends on the purposes of the category.

This question needs to be reduced, and can be reduced in hundreds of ways from what I see, depending on whether you want to know about the source of the information, the source of our cognitive identification of the information/stimuli, etc. "Sight" is a large mental paintbrush handle for a long process of input and data transfer that gets turned into stimuli, which get interpreted, which get perceived and identified by other parts of the brain, and so on. It is a true, real physical process of quarks moving about in certain identifiable (though with difficulty) patterns in response to an interaction of light with (some stuff, "magic", I don't know enough about the biology of the eye to say how exactly this works). Each step of the process has material reality.

If you are referring to the "experience"-ness, that magical something of the sense that cannot possibly exist in machines, which grants color-ness to colors and image-ness to vision and cold-ness and so forth, you are asking a question about qualia, and that question is very different and very hard to answer, if it does really need an answer at all.

By contrast, it is experimentally verifiable - there is an external referent within reality - that two "objects" put with two "objects" will have the same spacetime configuration as four "objects". There is a true, real physical process by which light reflected on something your mind counts as "four objects" is the exact same light that would be reflected if your mind counted the same objects as "two
2lloyd12y
Thanks for clarifying. I understand that categories are mental constructs which facilitate thinking, but do not themselves occur outside the mind. The question was meant to find objections to the categorization of logic as a sense. Taken as a sense, there is a frame, the category, which allows logic to be viewed as analogous to the other senses and interrelated with the thinking process as the senses are. In the discussion concerning making the most favorable choice in Monty Hall, the contestant who does not see the logical choice is "blind". When considering the limits of logical reason, they can be seen to possibly parallel the limits of visual observation: how much of the universe is impervious to being logically understood? No need to address qualia. I will try to constrain myself to more concise, well-defined queries and comments.
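As an aside, the Monty Hall "blindness" is easy to check empirically with a quick simulation. Here is a minimal sketch in Python; the function name, trial count, and the deterministic host choice are incidental modelling assumptions rather than anything specified in the comment above:

```python
import random

def play(switch: bool, trials: int = 100_000) -> float:
    """Estimate the win rate of a contestant who always switches (or always stays)."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)  # contestant's first pick
        # The host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")  # ~0.333
print(f"switch: {play(switch=True):.3f}")   # ~0.667
```

Always switching wins about two-thirds of the time; the contestant who stays is, in the metaphor above, failing to "see" the information the host's action reveals.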
-2chaosmosis12y
Hiya!

I don't think there's a difference between the human sense of logic and the other senses; I agree with you there. Just as it's impossible to tell whether or not you're a brain in a vat, it's also impossible to tell whether or not you're insane. Every argument you use to disprove the statement will depend on the idea that your observations or thought processes are valid, which is exactly what you're trying to prove, and circular arguments are flawed. This doesn't mean that logic isn't real; it just means that we can't interpret the world in any terms except logical ones. The logical ones might still be right; it's just that we can never know that. You might enjoy reading David Hume, who writes about similar sorts of puzzles.

It doesn't matter whether or not logic works, or whether reality is really "real". Regardless of whether I'm a brain in a vat, a computer simulation, or just another one of Chuang Tzu's dreams, I am what I am. Why should anyone worry about abstract sophistries when they have an actual life to live? There are things in the world that are enjoyable, I think, and the world seems to work in certain ways that correspond to logic, I think, and that's perfectly acceptable to me. The "truth" of existence, external to the truth of my everyday life, is not something that I'm interested in at all. The people I love and the experiences I've had matter to me, regardless of what's going on in the realm of metaphysics.

I don't quite understand what you're saying about vitalism. I don't know what the word "life" means if it starts to refer to everything, which makes the idea of a universe where everything is alive seem silly. There's not really any test we could do to tell whether or not the universe is alive; a dead universe and an alive one would look and act exactly the same, so there's no reason to think about it. Using metaphors to explain the universe is nice for simplifying new concepts, but we shouldn't confuse the metaphor for the universe itse
2TheOtherDave12y
Well, it's possible to tell that I'm insane in particular ways. For example, I've had the experience of reasoning my way to the conclusion that certain of my experiences were delusional. (This was after I'd suffered traumatic brain damage and was outright hallucinating some of the time.)

For example, if syndrome X causes paranoia but not delusions, I can ask other people who know me whether I'm being paranoid and choose to believe them when they say "yes" (even if my strong intuition is that they're just saying that because they're part of the conspiracy), on the grounds that my suffering from syndrome X is more likely (from an outside view) than that I've discovered an otherwise perfectly concealed conspiracy.

It's also possible to tell that I'm not suffering from specific forms of insanity. E.g., if nobody tells me I'm being paranoid, and they instead tell me that my belief that I'm being persecuted is consistent with the observations I report, I can be fairly confident that I don't suffer from syndrome X.

Of course, there might be certain forms of insanity that I can't tell I'm not suffering from.
0chaosmosis12y
The forms of insanity that you can't tell if you're suffering from invalidate your interpretation that there are specific kinds of insanity you can rule out, no? Mainly though, I was aware that the example had issues, but I was trying to get a concept across in general terms and didn't want to muddle my point by getting bogged down in details or clarifications.
0TheOtherDave12y
I'm not sure exactly what you mean by invalidating my interpretation. If you mean that, because there are forms of insanity I can't tell if I'm suffering from, there are therefore no forms of insanity that I can rule out, then no, I don't think that's true. And, please don't feel obligated to defend assertions you don't endorse upon reflection.
0chaosmosis12y
Why not?
0TheOtherDave12y
Well, for example, consider a form of insanity X that leads to paranoia but is not compatible with delusion. Suppose I ask a randomly selected group of psychologists to evaluate whether I'm paranoid and they all report that I'm not. Now I ask myself, "am I suffering from X?" I reason as follows:

1. Given those premises, if I am paranoid, psychologists will probably report that I'm paranoid.
2. If I'm not delusional and psychologists report I'm paranoid, I will probably experience that report.
3. I do not experience that report.
4. Therefore, if I'm not delusional, psychologists probably have not reported that I'm paranoid.
5. Therefore, if I'm not delusional, I'm probably not paranoid.
6. If I suffered from X, I would be paranoid but not delusional.
7. Therefore, I probably don't suffer from X.

Now, if you want to argue that I still can't rule out X, because that's just a probabilistic statement, well, OK. I also can't rule out that I'm actually a butterfly. In that case, I don't care whether I can rule something out or not, but I'll agree with you and tap out here. But if we agree that probabilistic statements are good enough for our purposes, then I submit that X is a form of insanity I can rule out.

Now, I would certainly agree that for all forms of insanity Y that cause delusions of sanity, I can't rule out suffering from Y. And I also agree that for all forms of insanity Z that neither cause nor preclude such delusions, I can't rule out suffering from (Z AND Y), though I can rule out suffering from Z in isolation.
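Steps 1-7 above amount to a qualitative Bayesian update. Here is a minimal numeric sketch of the same argument in Python, with every probability invented purely for illustration:

```python
p_x = 0.01                   # assumed prior: I suffer from syndrome X (paranoid, not delusional)
p_report_given_x = 0.90      # if I'm paranoid, psychologists probably report it (step 1)
p_report_given_not_x = 0.05  # assumed false-positive rate for a paranoia report

# Observation (step 3): no report of paranoia.
p_no_report_given_x = 1 - p_report_given_x
p_no_report_given_not_x = 1 - p_report_given_not_x

# Bayes' rule: P(X | no report).
posterior = (p_no_report_given_x * p_x) / (
    p_no_report_given_x * p_x + p_no_report_given_not_x * (1 - p_x)
)
print(f"P(X | no paranoia report) = {posterior:.4f}")  # ~0.0011, down from 0.01
```

Any numbers with the same ordering give the same qualitative result: observing no report makes X less probable without driving its probability to exactly zero, which is just the "good enough for our purposes" sense of ruling X out.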
0chaosmosis12y
But how would a possibly insane person determine that insanity X is a possible kind of insanity? Or, how would they determine that the Law of Noncontradiction is actually a thing that exists as opposed to some insane sort of delusion? I was talking about how we should regard unknowable puzzles (ignore them, mostly), like the butterfly thing, so I thought it was clear that I've been speaking in terms of possibilities this entire time. Obviously I'm not actually thinking that I'm insane. If I were, that'd just be crazy of me. Also, this approach presumes that your understanding of the way probabilities work and of the existence of probability at all is accurate. Using the concept of probability to justify your position here is just a very sneaky sort of circular argument (unintentional, clearly, I don't mean anything rude by this).
0TheOtherDave12y
Perhaps they couldn't. I'm not sure what that has to do with anything. Sure. If I'm wrong about how probability works, then I might be wrong about whether I can rule out having X-type insanity (and also might be wrong about whether I can rule out being a butterfly).
0chaosmosis12y
I didn't think that your argument could function on even a probabilistic level without the assumption that X-insanity is an objectively real type of insanity. On second thought, I think your argument functions just as well as it would have otherwise.
1TheOtherDave12y
If it's not an objectively real type of insanity, then I can certainly rule out the possibility that I suffer from it. If it is, then the assumption is justified.
-2lloyd12y
Thanks for the welcome. I raised this point of view on logic (reason, or rationality when applied) because I saw a piece that correlates training reason with training a muscle. If logic is categorically similar to a sense, then treat it metaphorically as such, I think. Improving one's senses is a little different from training a muscle, and is a more direct simile. Then there is the question of what logic is sensing. Sight perceives what we call light, so logic is perceiving 'the order' of things? The eventual line of thinking starts questioning the relationship of logic to intuition. I advocate the honing of intuition, but it is identical in process to improving one's reason. The gist is that intuition picks up on the same object that logic eventually describes, like the part of vision which detects movement in the field that is only detailed once the focal point is moved upon it.

As for vitalism, the life I speak of is to extend one's understanding of biological life - a self-directing organism - to see stars as having the same potential. The behavior of stars, and the universal structure, is constrained in the imagination to be subject to the laws of physics, and the metaphor for a star in this frame is a fire, which is lit and burns according to predictable rules of combustion. The alternative is to imagine that the stars are the dogs, upon which the earth is a flea, and we are mites upon it.

Why does this matter? I suppose it is just one of those world-view things which I think dictates how people feel about their existence. "We live in a dead universe subject to laws external to our being" predicates a view which sees external authority as natural and dismisses the vitality within all points which manifest this 'life'. I think the metaphor for the universe is closely tied to the ethos of the culture, so I raised this question. Thanks for your thoughtful reply.
1Bugmaster12y
I'm not sure what you mean by "self-directing". As I see it, "life" is yet another physical process, like "combustion" or "crystallization" or "nuclear fusion". Life is probably a more complex process than these other ones, but it's not categorically different from them. An amoeba's motion is directed, to be sure, but so is the motion of a falling rock.
0lloyd12y
An amoeba acts on its environment, where a rock behaves according to external force. Life also has the characteristic of reproduction, which is not how processes like combustion or fusion begin or continue. There are attempts to create biological life from naught, and AI research has a goal which could be characterized as making something that is alive versus a dead machine - a conscious robot, not a car. I recognize that life is chemical processes, but I see - and I think the sciences are divided this way - a categorical difference between chemistry and biology. My position is that physics and chemistry, e.g., do not study a driving component of reality - that which drives life. If biological life is to be called a far greater complexity of basic chemical processes, then what drives the level of complexity to increase?

Is there a thread or some place where your position on life is expounded upon? If life is to be framed as a complex process on a spectrum of processes, I could understand, provided the definition of complexity is made and the spectrum reflects observations. In fact, spectrums seem to me to be more fitting maps than categories, but I am unaware of a spectrum that defines complexity so as to encompass both combustion and life.
2Bugmaster12y
The rock acts on its environment as well. For example, it could hold up other rocks. Once the rock falls, it can dislodge more rocks, or strike sparks. If it falls into a river or a stream, the rock could alter its course... etc., etc. Living organisms can affect their environments in different ways, but I see this as a difference in degree, not in kind.

Why is this important? All kinds of physical processes proceed in different ways; for example, combustion can release a massive amount of heat in a short period of time, whereas life cannot. So what?

Are we talking about life, or consciousness? Trees are alive, but they are not conscious. Of course, I personally believe that consciousness is just another physical process, so maybe it doesn't matter.

Technically they do not; biology does that (by building upon the discoveries of physics and chemistry). But I'm not sure why you think this is important. I don't think that the complexity of living organisms always increases.

Well, you could start with those parts of the Sequences that deal with Reductionism. I don't agree with everything in the Sequences, but that still seems like a good start.
0chaosmosis12y
I don't believe that even biological life is self-directing. Additionally, I don't understand how extending one's understanding of biological life to everything can even happen. If you expand the concept of life to include everything, then the concept of life becomes meaningless. Personally, whether the universe is alive or not, it's all the same to me.

When you say that this behavior is "constrained in the imagination", you're not trying to imply that we're controlling or maintaining those constraints with our thoughts in any way, are you? That doesn't make sense, because I am not telekinetic. How would you know that what you're saying is even true, as opposed to some neat-sounding thing that you made up with no evidence? What shows that your claims are true? If this is just an abstract metaphor, I've been confused; if so, I would have liked you to label it differently.

I don't understand why vitalism would make the universe seem like a better place to live. I'm also reluctant to label anything true for purposes other than its truth. Even if vitalism would make the universe seem like a better place to live, if our universe is not alive, then it doesn't make sense to believe in it. Belief is not a choice. If you acknowledge that the universe isn't alive, then you lose the ability to believe that the universe is alive, unless you're okay with just blatantly contradicting yourself.

I don't understand why you think determinism is bad. I like it. It's useful, and seems true. You say that your view holds that life is the source of the way things behave. Other than the label and the mysteriousness of its connotations, what distinguishes this from determinism? If it's not determinism, then aren't you just contending that randomness is the cause of all events? That seems unlikely to me, but even if it is the case, why would viewing people as controlled by "life" and mysterious randomness be a better worldview than determinism? I prefer predictability, as it's a prerequisi

Hi all! I'm Leonidas, and I was also a lurker for quite some time. I forget exactly how I found Less Wrong, but most likely it was via Nick Bostrom's website, when I was reading about anthropics about a year ago. I'm an astronomer working on observational large-scale structure, and I have a big interest in all aspects of cosmology. I also enjoy statistics, analyzing data and making inferences, and the associated computational techniques.

It was only during the final year of my undergraduate studies in physics that I consciously started to consider myself a rationalist...

I am Erik Erikson. By day I currently write patents and proofs of concept in the field of enterprise software. My chosen studies included the neuro and computer sciences, in pursuit of the understanding that can produce generally intelligent entities of equal to or greater than human intelligence, less our human limitations. I most distinctly began my "rationalist" development around the age of ten, when I came to doubt all truth, including my own existence. I am forever in debt to the "I think, therefore I am" idiom as my first piece of k...

Hello everyone. I've been lurking around this site for a while now. I found this site through HPMOR, as I'm sure a lot of people have; the fanfic was suggested to me by one of my friends who read it.

Random cliff notes about myself: I'm a high school senior. I'm a programmer; I've been programming since I was 10, it's one of my favorite things to do, and it's what I plan on doing for my career. I love reading, which I would imagine is a given for most people here. I've always been interested in how the universe and people work, and I want to know the why of everything I can.

Hello, I found LessWrong through a couple of Tumblr posts. I don't really identify as a rationalist but it seems like a sane enough idea. I look forward to figuring out how to use this site and maybe make some contributions. I found reading some of the sequences interesting, but I think I might just stick to the promoted articles. As of now I have no plans on figuring out the Bayes thing, although I did give it a try. My name is Andrew.

Hello everyone

I've been lurking here for a while now but I thought it was about time I said "Hi".

I found Less Wrong through HPMOR, which I read even though I never read Rowling's books.

I'm currently working my way through the Sequences at a few a day. I'm about 30% through the 2006-2010 collection, and I can heartily recommend reading them in time order and on something like Kindle on your iPhone. ciphergoth's version suited me quite well. I've been making notes as I go along and sooner or later there'll be a huge set of comments and queries ar...

We are currently undertaking a study on popular perceptions of existential risk. Our goal is to create a publicly accessible index of such risks, which may then be used to inform and to catalyze comprehension through the discussion generated around them.

If you have a few minutes, please follow the link to complete a brief, anonymous questionnaire - your input will be appreciated!

Survey Link : http://eclipsebureau-survey.questionpro.com/

Join us on Facebook: http://www.facebook.com/eclipse.bureau

Hi there community! My name is Dave. Currently hailing from the front range in Colorado, I moved out here after 5 years with a Chicago non-profit - half as executive director - following a diagnosis of Asperger Syndrome (four years after being diagnosed with ADHD-I). That was three years ago. Much has happened in the interim, but long story short, I mercilessly began studying what we call AS & anything related I could find. After a particularly brutal first-time experience with hardcore passive-aggressivism (always two sides to every situation, but it ...

0shminux11y
As long as you frame it as a question about your understanding of relativity, and about the validity of the relativity theory itself, sure, why not.
-1shaih11y
Hello and welcome to lesswrong! Your goal of understanding time as the 4th dimension stuck out to me: it reminded me of a post that I found beautiful and insightful while contemplating the same thing. Timeless physics has a certain beauty to it that resonates with me much better than 4th-dimensional time, and it sounds like something you would appreciate.
-1shminux11y
Sure does, but don't let yourself get tempted by the Dark Side. Beauty is not enough, it's the ability to make testable predictions that really matters. And Eliezer's two favorite pets, timeless physics and many worlds, fail miserably by this metric. Maybe some day they will be a stepping stone to something both beautiful and accurate.
0shaih11y
You have a very good point, and have shown me something that I knew better than to do and will have to keep a closer eye on from now on. That being said, beauty is not enough to be accepted into any realm of science, but thinking about beautiful concepts such as timeless physics could increase the probability of thinking up an original testable theory that is true. In particular, I'm thinking of how the notion of absolute time slowed down the discovery of relativity, while if someone were to contemplate the beautiful notion of relative time, relativity could have been found much faster.

Hello! Michael and Amanda Connolly from Phoenix, Arizona here! We are looking for like-minded people in Arizona to start a meetup group with. We are working on a documentary on rational thinking! It's called Rated R for Rational.

http://www.indiegogo.com/RatedR?a=1224097

Shoot us an email if you live in Arizona!

Just joined. Into: Hume, Nietzsche, J.S. Mill, William James, Aleister Crowley, Wittgenstein, Alfred Korzybski, Robert Anton Wilson, Paul K. Feyerabend, etc.... DeeElf

Hi, my name is Alex. I'm not as smart as the people posting articles here. The fact that I only passed the captcha on my second attempt while registering here on LW proves this :) I studied math as a student, and now I work in IT. While typing this comment I was thinking about what my purpose is in spending time here and reading different material... and suddenly realized that I'm 29 already, and life is too short to afford thinking wrong and thinking slow. So I hope to improve myself, to be able to learn and understand more and more things. Cheers to everyone :)

[anonymous]11y10

Hey. I'd like to submit an article. Please upvote this comment so that I may acquire enough points to submit it.

I am Alexander Baruta, a high-school student currently in the 11th grade (grade 12 math and biology). I originally found the site through Eliezer's blog. I am (technically) part of the school's robotics team (someone has to stop them from creating unworkable plans), undergoing Microsoft IT certification, and going through all of the psychology courses in as little time as possible (I'm currently enrolled in a self-directed learning school) so I can get to the stuff I don't already know. My mind is fact-oriented (I can remember the weirdest things with perfect c...

[This comment is no longer endorsed by its author]
0Baruta0711y
Sorry about that, the internet connection I am using occasionally does this sort of thing.
0A1987dM11y
If you reload the page after you've retracted a comment, you can delete it. (Who came up with this awful interface, anyway?)
8Alicorn11y
It has been asked that we be gentle in word choice when critiquing the site. Tricycle works hard, and time spent working on LW is donated. You can submit bug reports or PM Matt if you think something has been overlooked or have a better idea.
[anonymous]11y00

A few years ago some of my friends and I were interested in futurism and how technology could make the world a better place, which brought us to the topics of transhumanism and the Singularity. I was aware of LessWrong, but it wasn't until last year, when I took a psychology course, that I got really interested in reading the blog. Just over a year ago I started reading LessWrong more frequently. I read a lot of stuff about the Singularity, existential risk, optimal philanthropy, epistemology, and cognitive science, both here and in lots of other places on th...

0[anonymous]11y
Other relevant information:

* I'm currently a college student who is at a loss for what to study, and trying hard to switch into some STEM stream of one kind or another. My marks and general competence in most subject areas are pretty good. I'm looking to do something interesting (i.e., cool research, developing cool software, really neat chemistry/biotech/mechatronics development) or important (i.e., making lots of money to give away and/or make my life more fun). I am willing and in a very good position to take risks, so I could try more than one thing if it tickles my fancy. Input or advice on this topic is invited.
* I'm a 2nd-generation nontheist, who has pretty much always been into skepticism and naturalism of some form or another, even as a child. However, reading LessWrong has opened my eyes to the value of questioning my own opinions a lot more and warned me about the dangers of mindkilling.
* I suffer from lots of procrastination and akrasia. More generally, I have poor time and life management skills and habits, and have suffered episodes of depression in the last couple of years. I'm currently on antidepressants, and working through CBT with a therapist to finally make some progress on these problems. This is the biggest source of irrationality in my life, and I hope that the LessWrong Sequences (especially some of lukeprog's work) will help with this. Please suggest any other evidence-based approaches you think will help me feel better, get better, and stay better.
[-][anonymous]12y00

Hello... sorry, but I was hoping someone could message me the location of the NYC meetup real quick; it's in two hours.

I am a new member and have been looking at blogs for the first time over the past few weeks. I have written a book, finished last month, which deals with many of the issues about reasoning discussed at this site, though I attempt to cut through them somewhat: there is so much potential in the facts out there to be ordered that I don't spend a lot of time considering the theory behind my reasoning when providing some order to them in my book. I discuss reasoning, and many of the principles raised in posts here, but my interest is in reasonably framing the ...

1thomblake12y
Folks, a reminder that downvotes against introduction posts on the "Welcome" thread are frowned upon. There's nothing in the parent comment that should be sufficient to override that norm.
6wedrifid12y
Yes there is: the rest of the comments, which also advertise the book while attempting to shame Vladimir out of downvoting him for allegedly sinister emotional reasons. Making that sort of status challenge can be a useful way to establish oneself (or so the prison myth goes), but it also often backfires, and it waives the 'be gentle with the new guy' privileges. People should consider themselves free to ignore thomblake's frowns and vote however they please in this instance. There is no remaining obligation to grant marcusmorgan immunity to downvotes.
0thomblake12y
I see two comments other than the above that "advertise" the book - they actually link to it in a seemingly relevant context - and it's a free book, even. The other comments aren't nearly as bad as you're making them out to be, and they were downvoted appropriately. Did I miss comments that were deleted or edited, or what? What was even a 'status challenge' in marcusmorgan's comments?
-2wedrifid12y
Exactly.
5DaFranker12y
I suspect that this introduction was downvoted because, on a first reading, it feels like an advertising post filled with Applause Lights and other gimmicks (the feeling is particularly strong for me, as I just finished reading the Mysterious Answers to Mysterious Questions sequence, though I had already read the majority of the individual posts in jumbled order). A second reading sufficed to dismiss the feeling for me, and upon randomly selecting five sentences that felt like gimmicks and estimating their intended meaning, it turns out that it wasn't so gimmicky at all. Even the word "emergence", given as a prime example of a modern Mysterious Answer in many contexts, seems to have been used properly here.

The oddity of the initial feeling of advertising and gimmickiness, and how easily it was dispelled, is enough to pique my curiosity, and I think I'll take some time to actually read that book now.

Ironically, the only reason I even became aware of this post was seeing the reminder that downvoting was frowned upon in the recent comments. Heh.