Welcome to Less Wrong! (6th thread, July 2013)

21 Post author: KnaveOfAllTrades 26 July 2013 02:35AM
If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, how you came to identify as an aspiring rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.

 

A few notes about the site mechanics

To post your first comment, you must have carried out the e-mail confirmation: When you signed up to create your account, an e-mail was sent to the address you provided with a link that you need to follow to confirm your e-mail address. You must do this before you can post!

Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).
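For example, a few of the most commonly used constructs (this is standard Markdown, nothing LW-specific):

```
*italics*   **bold**   [link text](http://example.com)

> This line renders as a blockquote, handy for excerpting the
> part of a comment you're replying to.
```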

You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. This immediate easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.

However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning, not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.

Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.

It's definitely worth your time commenting on old posts; veteran users look through the recent comments thread quite often (there's a separate recent comments thread for the Discussion section, for whatever reason), and a conversation begun anywhere will pick up contributors that way. There's also a succession of open comment threads for discussion of anything remotely related to rationality.

Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.

EXTRA FEATURES:
There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: in the top-level forum, posts require the author to have 20 karma in order to publish, and any upvotes or downvotes on the post are multiplied by 10. Thus there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion.

A few notes about the community

If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood and what you might still need to take some time explaining.

If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)

If you want to write a post about a LW-relevant topic, awesome! I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma—honestly, you don't know what you don't know about the community norms here.)

Alternatively, if you're still unsure where to submit a post, whether to submit it at all, would like some feedback before submitting, or want to gauge interest, you can ask / provide your draft / summarize your submission in the latest open comment thread. In fact, Open Threads are intended for anything 'worth saying, but not worth its own post', so please do dive in! Informally, there is also the unofficial Less Wrong IRC chat room, and you might also like to take a look at some of the other regular special threads; they're a great way to get involved with the community!

If you'd like to connect with other LWers in real life, we have meetups in various parts of the world. Check the wiki page for places with regular meetups, or the upcoming (irregular) meetups page. There's also a Facebook group. If you have your own blog or other online presence, please feel free to link it.

If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter

A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.

A list of some posts that are pretty awesome

I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:

More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.

Welcome to Less Wrong, and we look forward to hearing from you throughout the site!

 

Once a post gets over 500 comments, the site stops showing them all by default. If this post has 500 comments and you have 20 karma, please do start the next welcome post; a new post is a good perennial way to encourage newcomers and lurkers to introduce themselves. (Step-by-step, foolproof instructions here; takes <180 seconds.)

If there's anything I should add or update on this post (especially broken links), please send me a private message—I may not notice a comment on the post.

Finally, a big thank you to everyone that helped write this post via its predecessors!

Comments (513)

Comment author: Glen 25 July 2013 05:22:16PM 14 points [-]

Hello all, my name is Glen and I am a fairly long-time lurker here. I first found this site through the Sword of Good short story, and filed it in my "List of things I want to read but will never actually get around to" and largely forgot about it until I recognized the name while reading HPMOR. I've read most, but not all, of the sequences and am currently going through Quantum Mechanics. I'm Chicago-based and work as a programmer for an advertising company. I consider myself a low-to-mid-level rationalist and am working at getting better.

I run or play in a wide range of tabletop games, where I'm known as being a GM-Friendly Munchkin. That is to say, I like finding exploits and unusual combinations, but then I talk to the person running the game about them and usually explain why I shouldn't be allowed to do that. It lets me have fun breaking the system without actually making the game less fun. I've also used basic information theory to great effect, unless the GM tells me to knock it off. Currently in love with Exalted. Been burned by Shadowrun in the past, but I just can't stay mad at her.

Comment author: hylleddin 25 July 2013 07:50:31PM *  4 points [-]

We're curious how you've used information theory in RPGs. It sounds like there are some interesting stories there.

Comment author: Glen 25 July 2013 08:25:59PM *  13 points [-]

The most interesting stories come from a power in Exalted called "Wise Choice". Basically, you give it a situation and a finite list of actions you could take, and it tells you which one will have the best outcome for you within the next month. It also requires a moderate expenditure of mana, so it can't be used over and over without cost. When I read what the charm did, I thought of Harry's time-experiment with prime numbers. It was immediately obvious that Wise Choice could factorize any number easily, although perhaps not cheaply if it has a large number of factors.

From there, it also expanded to finding literally anything in the world, either with one big question (if low on mana) or a quick series of smaller ones (if low on time): divide the world into a grid and either list every square, or do a basic binary search by asking the power "Given that I'm going to keep dividing the world in half and asking a similar question to this one, which half of the world should I focus on to get within 10 feet of Item/Person X's location at exactly 7 PM tomorrow evening?"

I also figured out that you can beat the one-month time limit by pre-committing to asking the same question in 27 days, and having someone else promise to give you a reward if you state the same thing each month, with the caveat that you have to give it all back if you're proven wrong in the end or change your answer. This can be shown to work (assuming I haven't made a mistake) by taking a simple case: two boxes, one containing ten million dollars and the other empty. By choosing a box now, it will be opened in six months and you will be given what is inside. Without the trick, Wise Choice looks forward one month, sees no difference, and tells you "it doesn't matter". With the trick, Wise Choice looks forward a month and tells you to say what it sees future-you saying, even though it doesn't "understand" why. However, future-you can see an additional month forward, and uses it to see future-you+2, and so on. Therefore, the first instance gives you the true box, even though it can't see to when the box opens.
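In code, the grid-halving trick is just ordinary binary search against a yes/no oracle. Here's a minimal Python sketch; the oracle is a simulation standing in for the charm, and every name in it is illustrative rather than from any game text:

```python
def make_oracle(target):
    """Simulated stand-in for the charm: truthfully answers
    'is the target somewhere in [lo, mid]?'"""
    def oracle(lo, mid):
        return lo <= target <= mid
    return oracle

def locate(oracle, lo, hi, tolerance=1):
    """Narrow [lo, hi] down to `tolerance` by repeatedly asking
    which half contains the target, counting questions asked."""
    queries = 0
    while hi - lo > tolerance:
        mid = (lo + hi) // 2
        queries += 1
        if oracle(lo, mid):
            hi = mid          # target is in the lower half
        else:
            lo = mid + 1      # target is in the upper half
    return lo, hi, queries

oracle = make_oracle(target=123_456)
lo, hi, queries = locate(oracle, 0, 1_000_000)
# Each question halves the search space, so roughly log2(1,000,000),
# about 20, questions pin the target down, versus listing every square.
```

The mana/time trade-off falls out directly: the one-big-question version asks the power to pick among all squares at once, while the binary-search version spends more questions but each one is simpler.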

Of course, it's possible that I've missed a possible case that makes those tricks invalid. I don't have access to an actual infinite-knowledge superpower to check my work, but I figure telling other people about it so they can see things I missed is almost as good.

Comment author: Ben_LandauTaylor 25 July 2013 05:46:54PM 15 points [-]

Hello again. I've been posting for a while as ModusPonies. As much as I like the old name, it's time to retire it. More and more, I'm interacting with the community in meatspace and via email. I'm switching to my real name so that people who know me in one context will recognize me in another.

Comment author: RichardKennaway 26 July 2013 10:34:15AM 11 points [-]

Hello again. I've been posting for a while as ModusPonies.

A bit late to say this, but: best username ever.

Comment author: Anders_H 28 July 2013 09:24:01PM 4 points [-]

My name is Anders. I have been lurking for a long time, and have attended meetups in Boston for the last three years. I recently began commenting more frequently. This is a new account; after discussing Ben's name change with him at the meetup today, I decided to switch to something closer to my real name, sacrificing my 20 karma points in the process.

I am 31 years old. I am a doctoral candidate in Epidemiology at the Harvard School of Public Health, where I work on some new implementations of causal models for comparative effectiveness research, particularly for screening interventions. I am originally from Norway. I attended medical school in Ireland, and worked for 18 months as a junior doctor in western Norway before moving to Boston.

On Less Wrong, I am particularly interested in the material on causality and decision theory. I am also interested in epistemic rationality and cognitive bias in general, and in the extent to which our actions are explained by signaling. In terms of mainstream philosophy, I see myself as formalist, falsificationist and prioritarian consequentialist. The "formalist" part is due to spending a year as an undergraduate student in mathematics; 12 years later, the only thing I retain from that year is a persistent belief that mainstream philosophy is underrating the importance of David Hilbert.

Comment author: So8res 25 July 2013 07:18:36PM *  49 points [-]

I'm Nate. I'm 23. My road here was a winding one.

I grew up as one of those "mathematically gifted" kids in a tiny rural town. I turned away from mathematics towards computer science (which I loved) and economics (which I decided I needed to understand if I wanted to save the world). I went on to become a software engineer at Google.

At the intersection of computer science and economics I developed a strong belief that the world is broken and that we could do far better if we redesigned social structure from scratch, now that we have so much more knowledge and technology than we did when we created these antiquated governments. I despaired that most people think progress entails playing the political tug of war instead of building a better system. I spent a long time refining my ideas.

In the interim I missed a number of opportunities to discover this site. In 2008 I stumbled across the Quantum Physics sequence on Overcoming Bias. I read it up till where it was still being written, then moved on. In 2010, I found HPMoR. I read it, noticed the links to this site, and poked around a little. Nothing came of it. I caught up to where HPMoR was being written, then put it out of my mind. I had more important things to do. I had big ideas to express, and I started writing them down.

At some point along the way I realized I needed more math. To my horror, I found that the math I had been so good at as a kid was largely memorized, not deeply understood. I knew how to manipulate symbols like nobody's business, but I wouldn't have been able to re-invent the things I "knew" if you erased them from my mind. (In LW terms, I had memorized many passwords.) I started going back through what I thought I knew and grokking it.

During my journey, sometime early in 2012, I stumbled across the Quantum Physics sequence on LessWrong. From the summaries, it seemed like a good way to quickly evaluate how much of my QM knowledge was cached passwords and how much I had really learned. I started reading it and experienced a strong sense of deja vu. I figured out that LW was seeded by Overcoming Bias, experienced some nostalgia, put the feeling to rest, and moved on.

Relearning math and learning to write morphed into a more general quest to promote clear thinking and better methods of deduction with a long-term goal of bridging my pet inferential gap. As I researched and wrote, this one site kept popping up in my search results -- LessWrong.

Around the same time (late 2012) I heard about updates to HPMoR. I hadn't been following it for years, but I was suddenly reminded why the site felt so familiar. I'm not exactly sure how everything fell into place, but some combination of LessWrong showing up in my research, a recollection that HPMoR was associated with it, and the remembered nostalgia from the Quantum Physics sequence all came together. I finally decided to see what this site was all about.

The rest is history. I tore through the sequences. Much of it was extremely validating: Mysterious Answers and Politics is the Mindkiller expressed much of what I had set out to say. I've always planned to cheat death. I attempted a similar dissolution of "free will" a few years back. The rest of it was largely epiphany porn.

The strongest epiphany came when I was introduced to the idea of UFAI. From my vantage point between economics and computer science, everything clicked. Hard.

I'd taken AI courses, but AI was a "centuries in the future" sort of vagary. My primary concern was with finding a way to "refactor" governments (and create meta-governments, as I do not claim to know the best way to run a society). To me, that was The Way To Save The World™ -- until I actually thought about UFAI.

I didn't need any convincing. I simply... hadn't considered it before. Upon first reflection, the scope of the problem became clear. I experienced panic, and not because UFAI is scary: overnight, my Way To Save The World was eclipsed by a threat that darkens the entire future.

It's hard to overstate how much my ideals motivate me. The AI problem shook me to my core. I'd ostensibly been trying to save the world; how could I have missed something as obvious as UFAI? How could I take my ideals seriously if I'd misunderstood the problem so badly that I hadn't considered existential threats? In light of this new information, what should I really be doing to ensure a bright future?

I went into philosophical-panic reevaluate-everything mode. That was a few months ago. I've done a lot of reflection. I'm still a bit shaken. I have grand ideas about how we can get to a better social structure from here and a lot of inertial passion along those lines. I don't know nearly enough math. I feel like I'm late to the party, passionate but impotent. I'm trying to find a way to help beyond donating to MIRI. I feel outclassed here, which is probably a good thing. I'm working on getting stronger. I have a lot to do.

Hello!

Comment author: CoffeeStain 28 July 2013 10:54:04PM 5 points [-]

...

We need to talk more.

Comment author: So8res 29 July 2013 12:47:20PM 6 points [-]

Let's. I'm on the east coast until Aug 11. Perhaps we can meet up after work on the week of the 12th.

(Context for others: The two of us met briefly at a meetup in June and exchanged usernames, but haven't spoken much.)

Comment author: ESRogs 23 September 2013 08:15:10PM 2 points [-]

Do you have a recommendation for how to pronounce 'So8res'?

Comment author: Wes_W 25 July 2013 07:22:06PM 18 points [-]

Hello, Less Wrong! I'm Wes W., which username I've chosen as a compromise between anonymity and real-life-usability, since I do intend/hope to get involved in meatspace once my schedule permits.

I've been lurking here and working my way through the Sequences for a couple months now. I'm intentionally pacing myself, so I can process things sufficiently. (Also, it's mildly alarming to finish reading a post and find that my brain has already vented all previous opinions on the topic and replaced them with the writer's.) I don't really know anymore how I found this site, because I've been aware of its existence for a couple years, but only recently realized both the full extent of the material here, and that I wanted to be involved in it.

I've been an atheist for several years, following another several years of diminishing faith in my native Mormonism, but it wasn't until I started reading Eliezer that this felt like a good thing, rather than a loss.

I currently have a job as a math tutor, which I originally got as just a college summer job, but turned into an "oh, this is what I want to do with my life" thing, so I'm now working on becoming a teacher. So clarity of thought is especially helpful to me, since I have to know something backwards and forwards in my sleep before I can do much to help a student understand. Ideas like "guessing the teacher's password" and "how could I regenerate this knowledge, if I lost it" have been directly useful to me, and I also hope to get better at overcoming akrasia.

Comment author: Frood 25 July 2013 08:39:06PM 15 points [-]

Hi! HPMOR brought me here. I now spend about as much time telling people to read it as I do discussing the weather with them. I’ve read about half of the sequences. I lurked for a long time because I often find that getting involved in discussions blurs my ability to think objectively. Right now I’m working on a Litany Against Non-Participation, as well as taking gradual steps towards participating more, in an attempt to remedy this. I’m very interested in learning how to ask better questions.

I’m entering my fourth year of an interdisciplinary-or-is-it-multidisciplinary program at McMaster University in Hamilton, Ontario. Basically, I've chosen to focus my formal education on skill development (reasoning, writing, researching, etc.) instead of specialized content acquisition (that’s for my spare time).

For at least the last five years, I've been a philosophy-based thinker. Most of my courses were non-philosophy, but I took them to aid my philosophical education, sort of like how a guitar player might learn piano to improve their music theory and develop new musical ideas. I have a (very idealistic) vision for philosophy, one in which philosophy is the 'highest' discipline that makes space for only the most educated and able. In most cases, I think that philosophers should embrace scientific knowledge and methodology, and stop pointlessly quibbling about matters they are not qualified to address. For instance, I'm quite frustrated by the lack of understanding of modern social psychology and sociology in political philosophy and ethics.

I've recently concluded that completing an undergraduate education in philosophy is not worth my time, and I totally agree with lukeprog’s diagnosis. Moving forward, I am going to attempt to transition into a science-based thinker. I’ll learn the same material, but to a different end. Maybe I’ll save philosophy later.

I'm very grateful to LW. I’m a better thinker than I was a year ago, and I've finally been able to shed some of the old beliefs that have been holding me back from reaching my potential as a rationalist. Feels good. Thanks y'all!

Comment author: falenas108 26 July 2013 02:09:27PM 5 points [-]

I lurked for a long time because I often find that getting involved in discussions blurs my ability to think objectively.

That's definitely true. But there is an advantage to posting. Often, I'll have an idea and start to write it out. But then, I realize that it's not quite up to my internal "less wrong standards." So, I'll start refining the idea, and end up with a much better one than I started with.

Or I'll find out that the idea isn't as good as I thought it was, and end up not posting.

Comment author: Antiochus 26 July 2013 03:44:01AM 14 points [-]

Hi. I'm a software engineer and history enthusiast. Been reading for years, and just recently got around to making an account. Still building up the courage to dive in, but this place has done wonders for reducing sloppy thinking on my part.

Comment author: telms 11 August 2013 05:01:22AM 0 points [-]

Hi, Antiochus. What areas of history are you interested in? I'm similarly interested in history -- particularly paleontology and archaeology, the history of urban civilizations (rise and collapse and reemergence), and the history of technology. I kind of lose interest after World War II, though. You?

Comment author: Antiochus 11 August 2013 06:40:11PM 0 points [-]

Any and all! Though I have a lot of interest in military history in particular, which led me to wargaming, with some specialized interest in the Hellenistic period and the ancient world in general, medieval martial arts, and the black powder era of linear battles.

Comment author: telms 12 August 2013 05:10:32AM *  0 points [-]

Sad to say, my only experience with wargaming was playing Risk in high school. I'm not sure that counts.

Comment author: AnatoliP 26 July 2013 05:36:33AM *  8 points [-]

Hello, I stumbled upon LW a few months ago. Some of the stuff here I find extremely interesting, and I really like the quality of the articles and discussions. I studied math and engineering and currently work as a software developer; I'm also very much interested in economics and game theory.

Cheers!

Comment author: pan 26 July 2013 01:20:03PM *  10 points [-]

Hey everyone, I'm 26, and a PhD candidate in theoretical physics (four years in, maybe two left). I've been reading LessWrong for years on and off, but I put off participating for a long time, mainly because at first there was a lot of rationality-specific lingo I didn't understand, and I didn't want to waste anyone's time until I understood more of it.

I had always felt that things in life are just systems, and for most systems there are more and less efficient ways to do the same things. To me, that is what rationality is: first seeing the system for what it actually is, and then tweaking your actions to better align with its actual rules. So I began looking to see what other people thought about rationality, and eventually ended up here. I lurked for years, and finally made the first step towards involvement during the LW study hall, which I participated in for several weeks as notatest5 during my working hours.

I was accepted last year into one of the CFAR workshops with an offer of about a 50% reduction in fees, but as a graduate student it was still difficult to justify the cost when I'm on a fixed income for the next few years and often spend exactly what I make each month. I would still like to attend in the future, though, so hopefully once I graduate I will have the money and time. It will also help if some of the workshops are held on the east coast (where I live).

I've actually never read the quantum physics sequences; as I deal with quantum physics on a daily basis, I didn't think I had much to gain. But as I look for places where I could contribute something to this site, I think that could be one area where I have an advantage over others, if there is further interest in developing physics-based sequences.

(Unimportant edit: the name pan is a reference to the Greek god, particularly in the book Jitterbug Perfume, in case anyone has read it.)

Comment author: Ben_LandauTaylor 26 July 2013 04:05:59PM 6 points [-]

It will also help if some of the workshops are held on the east coast

CFAR is holding a workshop in New York on November 1-4 (Friday through Monday).

Comment author: shminux 26 July 2013 05:02:10PM 0 points [-]

Just wondering what your area of research is.

I've actually never read the quantum physics sequences; as I deal with quantum physics on a daily basis, I didn't think I had much to gain.

Eliezer's point is that his QM sequence, which culminates in proclaiming MWI the one true Bayesian interpretation, is an essential part of epistemic rationality (or something like that), and that physicists are irrational for ignoring this. Not surprisingly, trained physicists, including yours truly, tend to be highly skeptical of this sweeping assertion. So I wonder if you ever give any thought to Many Worlds, or stick with the usual "shut up and calculate"?

Comment author: pan 26 July 2013 05:13:19PM 4 points [-]

My research is in quantum optics and information, more specifically macroscopic tests of Bell's inequality and applications to quantum cryptography through things like the Ekert protocol.

I didn't realize that the quantum mechanics sequence here made such conclusions; thanks for pointing that out, maybe I'll check it out to see what he says. I've given some thought to many worlds, but not enough to be an expert, as my work doesn't necessitate it. From what I know, I'm not so convinced that many worlds is the correct interpretation; I think answers to the meaning of the wave function collapse will come more from decoherence mechanisms giving the appearance of a collapse.

Comment author: Halfwitz 26 July 2013 09:32:05PM *  3 points [-]

I think answers to the meaning of the wave function collapse will come more from decoherence mechanisms giving the appearance of a collapse.

Forgive my ignorance, but isn't that the official many-worlds position - that decoherence provides each "you" with the appearance of collapse?

Comment author: shminux 26 July 2013 11:03:11PM 3 points [-]

Decoherence is a measurable physical effect and is interpretation-agnostic. "Each you" only appears in the MWI ontology. pan did not state anything about there being more than one copy of the observer as a result of decoherence.

Comment author: Halfwitz 27 July 2013 04:03:04AM 1 point [-]

That makes sense; are you a physicist, too?

Comment author: shminux 27 July 2013 04:42:09AM 0 points [-]

Trained, not practicing.

Comment author: ourimaler 26 July 2013 05:19:35PM 20 points [-]

Hello. I'm Ouri Maler, or "sun tzu" on some other forums; turning 29 in August.

I don't exactly remember when I started thinking of myself as a rationalist, but I know the core of my pro-science, pro-logic worldview was formed between the age of 8 and 10. For many years, I planned to be a physicist. In college, I studied to become a roboticist. And since that hasn't entirely panned out, I'm currently struggling to get employed as a programmer. I also write as a hobby, and I do try to reconstruct rationalism in my current urban fantasy story, "Saga of Soul".

Less Wrong has been on my "to check one of these days" list for a few years. It came to my attention again recently when Mr. Yudkowsky recommended Saga of Soul on Facebook, prompting me to marathon HPMoR over the past few days. I finished yesterday, and figured it was time to join the community and see what'll come of it.

Comment author: Alicorn 27 July 2013 07:08:42AM 5 points [-]

my current urban fantasy story, "Saga of Soul".

Oh hey, I have encountered this thing in the past and I think you have interacted with one of my beta readers and you promoted my friend Emily's Kickstarter. Hi!

Comment author: ourimaler 27 July 2013 08:08:40AM 3 points [-]

Hello! Unless I'm mistaken, you're the author of Hi to Tsuki to Hoshi no Tama? I used to read that.

Comment author: Alicorn 27 July 2013 06:45:44PM 3 points [-]

I am, yes, but I now consider all the webcomics I used to do embarrassing and would rather steer you towards my more recent prose, like Luminosity.

Comment author: [deleted] 27 July 2013 06:48:27PM 1 point [-]

Speaking of your recent prose, what's the update schedule on Goldmage?

Comment author: Alicorn 27 July 2013 06:50:02PM 2 points [-]

Goldmage is stalled due to plothole. (Basically, I thought I could write about goldmagic without doing any math, and this doesn't seem to be the case.) I don't have an ETA on fixing it. Elcenia is not suffering from that specific problem but my life in general is being eaten by a freeform roleplay thing I am doing that leaves me with this tendency to open story files, stare at them, and then close them.

Comment author: [deleted] 27 July 2013 06:52:50PM *  0 points [-]

Damn, that's too bad. I really thought it was a clever idea. And to end on a cliffhanger! Sigh.

Comment author: Alicorn 27 July 2013 07:33:07PM 1 point [-]

I haven't actually decided to abandon the story, it just needs math to happen and a significant part of my brain wants the math to happen via magic.

Comment author: [deleted] 27 July 2013 08:07:24PM 1 point [-]

I... understand? A significant part of my brain always wants math to happen via magic.

Sometimes it does! Sort of.

Comment author: ourimaler 27 July 2013 07:35:22PM 1 point [-]

Well, it's your call. But for what it's worth, I enjoyed HtTtHnT when it was running (particularly how the protagonists handled the loss of their secret identities).

Luminosity sounds like an interesting idea, though I'll confess I've never read any of the Twilight books...

Comment author: Alicorn 27 July 2013 07:37:12PM 1 point [-]

Luminosity requires no knowledge of nor affection for canon Twilight.

Comment author: Manfred 31 July 2013 01:22:24PM 1 point [-]

Well, you could always try reading the first few chapters and stop if you don't like it >:D

Comment author: Eliezer_Yudkowsky 27 July 2013 08:00:24AM 3 points [-]

Oh hey, welcome! Any magical girl who takes the time to view the Earth from space has my vote, but you already know that.

Comment author: ourimaler 27 July 2013 08:10:35AM 4 points [-]

Thank you! And thanks again for the link - I got around 250% as many unique views in the 48 following hours as I had in the entire preceding month.

Comment author: noahpocalypse 27 July 2013 12:31:29AM 15 points [-]

My name's Noah Caldwell, I am a lesser being who currently resides in rationalist Hell. That is, I am a minor (17 years) and I live in Tennessee (not by choice (it's not THAT bad here, though)).

I was in a program called TAG (Talented and Gifted) in elementary school, and my mother once said I have a genius IQ, which, despite meaning little (you can't really represent intelligence with a single number), remains highly flattering. It may have contributed to a very, very minuscule ego (or so I like to think), but it's made me believe I can do better in anything: Tsuyoku naritai! Whenever I have an interest, I pursue it; I've been like that for a long time. So the net gain was, I think, worth it, even if her statement may have been untrue.

I am currently trying to do well in school while shoving as much coding, science, math, language, music theory, and history into my head as I can. I plan on getting a ham radio license very soon. I'm also trying to cleanse myself of bias. My dream college would be MIT, but that is one heck of a reach school, no matter who you are. I also need to figure out how to insert my little segues into my monologue without parentheses, because wow does that look weird. Maybe I'm just being self-conscious. (But that's a GOOD THING!)

The traditional recreational activities I partake of include reading, piano, backpacking, and video games (I'm digging into the original Deus Ex with delight right now). I also need to read the Sequences; I've only sampled bits and pieces like an anorexic at a chocolate buffet.

Comment author: Ben_LandauTaylor 28 July 2013 03:08:10PM 4 points [-]

If you come to visit MIT, and you happen to be around campus on a Sunday, we'd love to have you at one of the Boston meetups. Also, if you want to talk to some MIT students or alumni, let me know and I'll see if I can put you in touch.

Comment author: BerryPick6 28 July 2013 03:58:57PM 3 points [-]

I sometimes forget how much untapped potential in terms of networking opportunities Less Wrong holds.

Comment author: noahpocalypse 28 July 2013 06:17:58PM 2 points [-]

I didn't realize it at the time, but that's further incentive to attend MIT: I can actually go to LW meetups!

I don't see myself touring the school any time soon (I've done plenty of research via the admissions blogs and other testimonials, and plane tickets happen to be expensive), but I would love to discuss any peculiarities you don't learn about until being a student, or anything else I should know before applying.

Comment author: Baruta07 01 August 2013 12:14:52AM 0 points [-]

I might also take you up on that offer if you are willing. I've been considering MIT as a university since I heard that it has an insanely good bio (and everything else) program. I'm currently getting my citizenship, reporting as a birth abroad (I'm 17 and have all the necessary qualifications), and want to do better than attending the ULeth bio program; while it is decent, it's nowhere near as good as MIT or any of the good universities in the States. Sorry if I seem overeager; it's just that things are a little stressful for me while picking a university at the moment. Sigh. According to my friends I am insanely lucky, but I want to do better than chance.

Comment author: nicdevera 28 July 2013 04:40:39AM 4 points [-]

Hello again. Used to post as "ZoneSeek" but switched to my real name. I'm from the science/science fiction/atheist/traditional rationality node, got linked to LW years ago through Kaj Sotala back in the Livejournal days. I have high confidence that I am the only LessWronger in the Philippines.

Comment author: ygert 28 July 2013 05:03:29AM *  12 points [-]

You know, a feature it would be nice to have on LessWrong is a name-change feature. I too have thought about moving over to my real name, but that is painful, you know? I'd have to start over from complete scratch. I guess it wouldn't be so bad, I've only been posting here for a year, and the pain will only get worse the more I put it off, but it would be much nicer if there were a button I could click to just change my username. Yes, put some safeguards on it, like having it say on my userpage what my username used to be, and maybe even have it cost karma or something, to prevent it from being overused.

Of course the real problem is that someone needs to actually go and make the changes in the code, and that takes work. There likely are higher priority changes just waiting vainly for someone to implement them, as TrikeApps does not have the manpower or resources to work on LessWrong save once in a blue moon. So it's unlikely this will happen in the foreseeable future. But if someone sees this, and wants to implement it, go ahead! I'm sure quite a few people would appreciate it.

Comment author: lukeprog 30 July 2013 05:45:19AM 13 points [-]

"Show my real name" is a feature under current development, as of about 2 weeks ago.

Comment author: ciphergoth 30 July 2013 09:31:01AM 3 points [-]

That is wonderful news - thank you! It sounds like we will have both usernames and real names, and both will be displayed, which is exactly as it should be. Thank you Tricycle!

Comment author: TRManderson 28 July 2013 06:14:25AM *  6 points [-]

Hey there LW!

At least 6 months ago, I stumbled upon a PDF of the Sequences (or at least Map and Territory) while randomly browsing a website hosting various PDF ebooks. I read "The Simple Truth" and "What Do We Mean By Rationality?", but somehow lost the link to the file at some stage. I recalled the name of the website it mentioned (obviously LessWrong) from somewhere and started trying to find it. Before long, I came to Methods of Rationality (which a friend of mine had previously linked via Facebook) and began reading, but forgot about it again after a while. At some stage about 4 months ago I rediscovered MoR, read about 3/4 of what was available, and then started reading LessWrong itself.

It took me about 3 days to get my head around the introduction to Bayes' theorem (I've since implemented a basic Bayesian categorisation algorithm), and in the process I realised just how flawed my reasoning potentially was, and found out just how rational one friend of mine in particular was (very). By that stage I was hooked, and I have been reading the Sequences quite frequently since, finally making an account here today. There's still plenty more reading to be done though!
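For the curious, the sort of Bayesian categoriser I mean can be sketched in a few lines of Python. This is a toy naive Bayes version with made-up training data and labels, purely illustrative rather than my actual implementation:

```python
import math
from collections import Counter

# Toy naive Bayes categoriser: P(label|words) is proportional to
# P(label) * product of P(word|label), naively assuming the words
# are independent given the label.
train = [("buy cheap pills now", "spam"),
         ("meeting agenda attached", "ham"),
         ("cheap pills cheap", "spam"),
         ("lunch meeting tomorrow", "ham")]

label_counts = Counter(lab for _, lab in train)
word_counts = {lab: Counter() for lab in label_counts}
for text, lab in train:
    word_counts[lab].update(text.split())
vocab = {w for c in word_counts.values() for w in c}

def classify(text):
    scores = {}
    for lab in label_counts:
        total = sum(word_counts[lab].values())
        # Work in log space with add-one (Laplace) smoothing
        # so unseen words don't zero out the whole product.
        score = math.log(label_counts[lab] / sum(label_counts.values()))
        for w in text.split():
            score += math.log((word_counts[lab][w] + 1) / (total + len(vocab)))
        scores[lab] = score
    return max(scores, key=scores.get)

print(classify("cheap pills"))       # spam
print(classify("meeting tomorrow"))  # ham
```

The log-space scoring and add-one smoothing are the standard tricks for keeping even a toy version of this numerically sane.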

A little background (and slight egotism alert, which could probably be applied to everything here): I'm in my final year of school now, vice-captain of the school's robotics program (and the programmer of Australia's champion school-age competitive robot), debating coach to various grades, and I've completed a university-level "Introduction to Software Engineering" course in Python (using Tkinter for GUI stuff) since I finished the Maths B course a year early. I'm planning to go to university for a Bachelor of Science/Bachelor of Engineering majoring in Mathematics/Software Engineering next year. I've got major side interests in philosophy and psychology which I currently don't plan to explore in any formal sort of way, but LessWrong provides an outlet that addresses these two.

I look forward to future comments and whatever criticism they attract; learning from mistakes tends to stick rather well.

Comment author: Zian 28 July 2013 08:03:52AM *  10 points [-]

Hi there!

I found HPMoR via TVTropes and then found LessWrong via HPMoR. I decided to hang around after reading the explanation of Bayes Theorem on Eliezer's personal site and finding it quite nice. Also, it matched up with how I thought of Bayes's theorem. You could say that I got attracted to LW by confirmation bias. :)

On a more useful note, I got interested in rationality/etc. through a somewhat convoluted path. I got introduced to Bayes Theorem via Paul Graham when I built a website filter for a science fair project.

My reading material also contributed heavily. I've always been a fast and constant reader, so discovering the (FREE!) interlibrary loan offered by the University of California was a boon. Major nonfiction books that affected me were cognitive science stuff (especially Dan Ariely) and books on how things/processes/systems work. I distinctly recall re-re-re-checking out a book on landfills and waste management in elementary school because it was long enough to be somewhat thorough and had enough photos to be interesting. Major fiction influences include books by Thornton Burgess, the Redwall series, and David Brin. I got introduced to the concept of fanfiction by the Redwall Online Community and spent many years in related activities, so it wasn't too much of a leap for me to take HPMoR seriously. Getting keyword matches between Ariely and HPMoR kept me hooked, never mind the bit about arbitraging gold and silver, which I can't believe Harry hasn't tried doing by now.

Another thing that helped me take the ideas on Less Wrong seriously was my constant desire to re-examine my beliefs. For example, I've always been interested in the ideas in Christian apologetics.

As for where I started at LW, I can't really say. I know I read stuff that confirmed what I already knew like things about the Planning Fallacy. The first bit of new material was probably Mysterious Answers (and those in its sequence).

Comment author: jackal_esq 28 July 2013 10:52:12AM 12 points [-]

Hello, thank you for this post. I am a criminal law attorney, and what attracts me to learning more about rational decision-making is the practical experience that juries, clients, and many attorneys make what seem to be irrational, or at least counter-intuitive, decisions all the time. I am in the very early stages of trying to learn what's on the site and how to fix my own thought processes, but I also have irrationally high hopes that there's achievable progress to be made by bringing the LW tools to bear on my profession and the legal regime. I look forward to talking it through with you all.

Comment author: KnaveOfAllTrades 28 July 2013 12:56:06PM *  9 points [-]

Hi, jackal_esq. As someone involved in criminal justice, you might find the following interesting, if you haven't seen them already:

Evidence under Bayes theorem, Wikipedia
R v Adams, Wikipedia
Sally Clark, Wikipedia
Amanda Knox case, Less Wrong (followup post linked at bottom)
A formula for justice, Guardian
Bayesian analysis under threat in British courts, Less Wrong

Aside from that, welcome to Less Wrong!

Comment author: AndHisHorse 28 July 2013 07:53:11PM 6 points [-]

I'm Alex, an American male doing undergraduate studies in Physics and Computer Science. Two years ago, I stumbled upon HPMoR, and made my way to this site shortly after. I've been lurking since, and in that time, I've seen top-level posts that have convinced me to abandon my half-formed theism, try out the pomodoro method (results still pending), and police myself for biases. I'm interested in lifehacking (though I acknowledge that I have a great deal of inertia in that area), and will be trying Soylent at some point in the next few months.

Comment author: zanoi 29 July 2013 03:03:53PM 10 points [-]

Hello, my name is Jonas and I'm currently working as a software engineer.

I happened to learn about biases in a decision analysis class at university and was hooked instantly. It was only later that I learned about LW. I'm very interested in not just learning about rationality on a theoretical level but actually living it out to the fullest.

I'm very thankful to LW for improving my life so far, but I guess the best is yet to come.

Comment author: apeterson 30 July 2013 08:23:39PM *  8 points [-]

I'm Anthony. I found out about Less Wrong from Overcoming Bias, and I found out about Overcoming Bias about 2 years ago when Abnormal Returns, which is like a sampler of all kinds of posts in the econ blogosphere, linked to Overcoming Bias.

I had previously decided that the singularitarians were crazily optimistic. I thought they were all about the future being unimaginable goodness all the time. I guess that was my interpretation of Kurzweil. I thought they were unrealistic about the nature of reality. I don't believe that the singularity will hit in a few decades, at least I don't understand the arguments well enough to think that yet, but it is an interesting topic.

I used to be part of an Objectivist campus club at the University of Colorado Denver. And then an Objectivist magazine promoted the idea of nuking Afghanistan in response to 9/11. And also I discovered Michael Shermer's "Why People Believe Weird Things", especially the chapter calling out Objectivism as a cult. I fought against the idea of Objectivism being a cult for a long time, but then I started to be convinced, and I eventually abandoned Objectivism completely.

But reading HPMOR, the Sequences, and some of the other posts here has been really informative and fun. I especially liked the Quantum Mechanics sequence; it really cleared up some of my fogginess on the subject and made me want to know more. I am now working through "The Structure and Interpretation of Quantum Mechanics". Just the linear algebra in the latter half of Chapter 1 goes way beyond anything I learned in college, so it is still slow going, but I have learned a lot about linear algebra (projection operators, how to take the norm of a complex-valued vector, etc.).

I live in the northern Lower Peninsula of Michigan. It's pretty rural up here. There aren't many jobs in IT around here, but I have one of them. It's a lot less specialized than I'm sure most IT jobs are: I do purchasing, PC support, in-house app programming, printer support, and on and on. I'm in the middle of a difficult programming project that's taken 2 years, because I am the only programmer here and can't spend full time on the project.

I see that there was recently a meetup in Detroit. I might have to make the drive south for the next one, if there is another one.

Anyway, I decided it was time to get more involved and learn more actively, so I registered rather than continuing to lurk.

Comment author: shminux 30 July 2013 08:48:17PM 3 points [-]

I am now working through the "Structure and Interpretation of Quantum Mechanics"

Good for you. Checking multiple sources is very rational :) If you get stuck, the Freenode ##physics IRC channel often has physics undergrad and grad students around to help with the technical stuff, though discussing interpretations is generally not encouraged.

Comment author: apeterson 30 July 2013 09:07:25PM 0 points [-]

I will definitely check that out. Thanks.

My other thought is to also get a linear algebra book that covers infinite dimensional vectors.

Comment author: shminux 30 July 2013 11:08:18PM *  -1 points [-]

My other thought is to also get a linear algebra book that covers infinite dimensional vectors.

This is useful for, say, the hydrogen atom or the simple harmonic oscillator, but you can learn a lot just from the spin 1/2 quantum mechanics, which is quite finite-dimensional. It is sufficient for all of quantum information, EPR, Bell inequalities, etc. If you are interested in "quantum epistemology", Scott Aaronson's Quantum Computing since Democritus is an excellent read and would not overtax your math skills.
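To make the finite-dimensionality concrete: the whole spin-1/2 state space is just C^2, so a Born-rule calculation fits in a few lines of plain Python (the state labels here are illustrative):

```python
import math

# A spin-1/2 system lives entirely in C^2; no infinite dimensions needed.
# State "spin up along z", written in the z basis:
up_z = (1 + 0j, 0 + 0j)
# Eigenstate of spin-x with eigenvalue +1, in the same z basis:
up_x = (1 / math.sqrt(2) + 0j, 1 / math.sqrt(2) + 0j)

def inner(a, b):
    """Hermitian inner product <a|b> on C^2."""
    return sum(x.conjugate() * y for x, y in zip(a, b))

# Born rule: probability of measuring spin +x given the state up_z.
p = abs(inner(up_x, up_z)) ** 2
print(round(p, 3))  # 0.5
```

Two complex amplitudes per state and 2x2 operators are enough for all the EPR and Bell-inequality discussions.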

Comment author: Inverse 30 July 2013 09:30:18PM *  9 points [-]

Hello! I'm Alex. I'm an undergrad currently studying economics and finance in the Bay Area. I think I first heard about Less Wrong on TVTropes, of all places, which led me to HPMOR and then here. I bookmarked the site and forgot about it until pretty recently, when I came back and started reading articles and comments. I'm currently reading through the major Sequences.

I'm very interested in economics and game theory, which definitely have a lot of overlap with rationality and behavioral science. Recently I've been learning computer programming as well. I guess I started to identify as a rationalist a few years ago, but there was never one set moment for me; it's something I think I've always valued. I love to learn and read, and I suppose ideas involving rationality and cognition just stuck out to me as interesting.

Other than that, I'm a big fan of Major League Baseball, and lately I've been attempting to write and record music. I'm definitely glad I found LW and am looking forward to reading more and hopefully being an active community member.

Also, I'm noticing quite a few similarities between the commenting and profile system here and the system on Reddit... anyone know if that was intentional?

Comment author: Randaly 30 July 2013 09:52:38PM *  5 points [-]

Hi Alex, I'm Alex!

Less Wrong's code is based on Reddit's. Reddit made their code base open-source in June of 2008; Less Wrong then forked it.

Comment author: NotInventedHere 31 July 2013 01:10:22PM 7 points [-]

I'm NIH, I'm 17, and I discovered this site through HPMOR in late 2010.

At that time I read "The Problem With Too Many Rational Memes", closed the tab and forgot about it for two years. In spring 2012, I discovered that there was a new arc for HPMOR, read it and decided that some of EY's other works might be worth reading. Over the summer I began to lurk heavily, culminating in me reading the "Blog posts 2006-2010" EPUB from start to finish in November, which led to me registering.

I'd like to make a prediction with high (80%) confidence that I am the only LW user residing in Nigeria. Living here has been a very frustrating experience on the whole, but after three years I can say that I've adapted fairly well. While I lived in Canada, I was placed into the Gifted stream in elementary school, which provided me with the majority of my friend group in meatspace, and aside from the direct consequences of socializing with said group almost exclusively, I can't really say how it's affected me.

For my tertiary education I'd like to study Computer Science, and I'm currently leaning towards the University of Waterloo. Due to the way the result schedule is structured here in Nigeria, that will require me to write my matriculation exam this November, as opposed to the usual time for someone in my class of June 2014. I'm being advised by almost everyone I've spoken to that to enter a Canadian University I would be best off repeating 12th grade for the Canadian Diploma, so because of that I am not particularly stressed about having to write two sets of final exams this year.

My interests include reading (Favorite authors are Iain M. Banks, Terry Pratchett and William Gibson), computer hardware, tabletop role-playing-games, programming (Python and some elementary webdev) and video games.

Comment author: Baruta07 01 August 2013 01:16:03AM *  7 points [-]

My name is Alexander Baruta. People call me confident, knowledgeable, and confident. The truth behind those statements is that I'm inherently none of those. I hate stepping outside my comfort zone; as some of my friends would say, "I hate it with a fiery burning passion to rival the sun". As a consequence I read a ton of books. I've also only had one good ELA teacher: my summer-school teacher for ELA 30-1 (that's grade 12 English, for those of you outside Canada). I'm in summer school not because I failed the course but because I want to get ahead. I'm going into grade 12 with 3 core 30-level subjects completed (although this is offset by the 2 additional science courses I want to take).

I spent most of my life in a Christian environment, and during that time I was one of those who thought humans could do no evil. Cue me being bullied. While nothing major, it was enough to set me thinking that what I'd been taught was wrong. I spent many years (grades 6-9) trying to cope with my lack of faith, and as a result decided that the Bible was wrong. I don't know when I was introduced to LW; I think I found it simultaneously through TVTropes (warning: may ruin your life), HPMOR, and Google. Since then I've been shocked at the attitude towards education in Alberta: for instance, Bayes' theorem was on the grade 11 curriculum six years ago and has since been removed, along with the entirety of probability theory, to be replaced with what I like to call 1000 ways to manipulate boring graphs. I attend a self-directed school.

One reason for the length of my explanation is that I want to expand my comfort zone; it is one of my major goals because I am an introvert. If any of you set any store by the Myers-Briggs test, I am an INTJ. As a result of my introversion it is rather difficult for me to make any close friends (although it is atrocious practice, I suspect that I am actually an ambivert: someone possessing both introverted and extroverted personality traits. When I am in a comfortable setting I am the life of the party; other times I simply find the quietest corner and read). I am attempting to overcome my more extreme traits by taking up show choir (not like Glee at all, I swear) and by being more open with myself and others. Due to pure chance I am going to become the holder of Canadian-American dual citizenship and as a consequence able to attend a university in the States. Due to even more fortunate circumstances I am having at least a percentage of my tuition paid for by one of my relatives.

Some of my more socially unusual traits are things that are practically open secrets to my acquaintances. (Right now the mantra is: I need to do this.) I am a member of the furry fandom and a transhumanist (rather ironic, really), as well as a wannabe philosopher (Nietzsche, Wittgenstein, as well as some of the earlier ones such as Aristotle, not to be confused with Aristophanes). I thoroughly enjoy formal logic as well as psychology and neurology. I fear being judged, but I also welcome that judgement, because I can use criticism to help me see beyond the tiny classical perspective ingrained by my upbringing.

In terms of literature I enjoy mainly sci-fi/fantasy and science (although I do enjoy a little romance on the side, iff it is well written, and thanks to my wonderful ELA teacher I am learning to enjoy tragedy as well as comedy). My favorite authors include Brandon Sanderson, Neil Gaiman, Isaac Asimov, Terry Pratchett, Iain M. Banks, Shakespeare (yes, Shakespeare), G.K. Chesterton, and Patrick Rothfuss, as well as some specialized authors of furry fiction (Will A. Sanborn, Simon Barber, Phil Geusz [pronounced like Seuss was originally pronounced]). In some capacity I also study what rationalists consider to be the dark arts, as I participate (and do rather well) in a debate club (8th overall in the beginner category). In my defense, I need the practice of arguing with someone else in a reasonably capable capacity, because I tend to have trouble expressing myself on a day-to-day basis. (Although the scoring system is completely ridiculous: it marks people between 66 and 86 percent and does not seem capable of realizing that getting a 66 is the exact same thing as a 0...) Again, sorry for the wall of text; it's a bad habit of mine to ramble. I just needed to finally tell someone these things.

~Actually, consider this as my: Lurker Status=Revoked post. I did one intro when I'd just joined and have been commenting on various things including me mixing up Aristotle and Aristophanes to amusing results.

Comment author: luminosity 03 August 2013 01:07:49PM 3 points [-]

Welcome!

You should consider breaking this post up into paragraphs. There's just too much unstructured text for me to want to read more than a few lines.

Comment author: Baruta07 03 August 2013 07:33:30PM 2 points [-]

Right, Paragraphs. Knew I was forgetting something!

Comment author: ILikeLogic 11 August 2013 07:14:24AM 1 point [-]

Pratchett and Gaiman co-authored a book called 'Good Omens'. I highly recommend it.

Comment author: Baruta07 13 August 2013 09:31:34PM 1 point [-]

I've already read it, thanks. To anyone else reading this: 'Good Omens' is thoroughly funny and an all-around good read.

Comment author: Polymeron 11 August 2013 07:34:58AM 0 points [-]

Interestingly, my first reaction to this post was that a great deal of it reminds me of myself, especially near that age. I wonder if this is the result of ingrained bias? If I'm not mistaken, when you give people a horoscope or other personality description, about 90% of them will agree that it appears to refer to them, compared to the 8.33% we'd expect it to actually apply to. Then there's the selection bias inherent in people writing on LW (wannabe philosophers and formal logic enthusiasts posting here? A shocker!). And yet...

I'm interested to know, did you have any particular goal in mind posting this, or just making yourself generally known? If you need help or advice on any subject, be specific about it and I will be happy to assist (as will many others I'm sure).

Comment author: Baruta07 13 August 2013 09:33:26PM 0 points [-]

Actually, I had multiple reasons for posting this. Firstly, it's to make myself known to the community. As an ulterior motive, I have trouble being open with others and connecting (although I suspect that this is a common problem), and I want to get over my fear of such.

Comment author: Nicholas_Rutherford 01 August 2013 03:26:21AM 16 points [-]

Hello everyone, I'm Nicholas Rutherford! I'm a 21 year old undergraduate student at the University of Saskatchewan studying pure math.

My original start to rationality is due to OkCupid (hooray for online dating!). After being fed up with the lack of people in my area, I decided to see who my top worldwide match was (it turns out that this 'top' person will actually change, so I guess I lucked out). This person's profile was written in a very clear, well-thought-out manner, and the answers to their questions showed that they had a fantastic decision-making process. After chatting with them, they told me the secret to their knowledge was Less Wrong.

From there I started making my way through The Sequences (currently about 40% of the way through), reading HPMOR and lurking the general discussion board here. I also had the pleasure of attending the July 2013 CFAR workshop, which has really inspired me to focus on improving my rationality and actually being a part of the community (and not just a lurker).

This community is awesome and I can't wait to improve it in any way I can! I mean, it is the least I can do after all I've gained from it :)

Comment author: [deleted] 01 August 2013 07:28:01PM *  6 points [-]

Hello. I'm a typical geeky 20-something white male who's interested in science and technology. I have a Bachelor's in economics and business. Not a native English speaker.

From the time I was 12 I've spent most of my time surfing around the internet reading about interesting things, generally wasting my time, and being alone. A few years ago I was really depressed and had a plan for suicide. Once in a while I've done something actually useful. That's my life in a nutshell.

I have always thought of myself as somewhat rational in the traditional sense when I'm not emotionally charged, but so do most people, I'd say. Who would be intentionally irrational?

When I first heard about LessWrong on 4chan's /sci/ a few years ago, I heard only negative things about it. I got the impression that this is basically some kind of daydreaming cult for people who are interested in the singularity and transhumanism. Like people just write about some things that sound kinda important and deep in a pop-science manner, but don't want to do anything more quantifiable or exact or something that's more difficult, like real science. I got the impression that it's not something you're supposed to take very seriously.

Okay, a few years go by, and I start to be more interested in futurology and stuff. I stumble upon Luke Muehlhauser's reddit AMA; the things he talked about sounded kinda cool, something I'd never really thought about before, and I read a few of his papers ("Intelligence Explosion and Machine Ethics", "Intelligence Explosion: Evidence and Import"). After this I forget this thing again for a year, until I read his book "Facing the Intelligence Explosion", in which he goes to lengths to talk about LessWrong, so I decide to take a look.

So I read the sequence "How To Actually Change Your Mind" and there were some useful things to consider if I want to be neutral in the face of evidence and change my mind about things. This Bayesian approach to rationality or whatever-it's-called sounds pretty reasonable, and I think I want to learn more of it. In the meantime I read Eliezer Yudkowsky's HPMOR and "Cognitive Biases Potentially Affecting Judgment of Global Risks" and a few random LessWrong articles here and there. Sometimes Eliezer Yudkowsky sounds so full of himself, like he knows everything about everything, that it's pretty annoying. His narcissism and self-proclaimed genius remind me of Stephen Wolfram. But I like his optimism, he has really useful ideas to share about rationality, and he's good at writing.

I also started to think: if these people are trying to be so rational, then why do so many of them hold seemingly irrational beliefs about some things without much quantifiable evidence? I mean, I have a gut feeling that the singularity will probably happen at some point if there isn't some societal collapse, but it's far from certain and may not happen the way FAI advocates anticipate. The event is so far in the future, and there are so many factors related to it, that I'm not sure how well you can predict how it happens and say meaningful things about it. Someone here made a good remark about it:

Furthermore, experts perform pretty badly when thinking about dynamic stimuli and about behavior, and when feedback and objective analysis are unavailable.

Predictions about existential risk reduction and the far future are firmly in the second category. So how can we trust our predictions about our impact on the far future?

I also agree with many of the points raised in this post. I think the work MIRI is doing might be useful and I'm not against it, but I wouldn't personally allocate my resources towards it at this point, at least not money. Karnofsky criticized MIRI for not taking into account many variables he had considered, and on top of those there must be even MORE variables MIRI hasn't taken into account.

There are many beliefs here that seem to be based on non-quantifiable hypotheses. You would think that if you took a bunch of rationalists who applied the methods of rationality correctly and were willing to change their minds, the likelihood that they would share the same fringe beliefs based on non-quantifiable evidence would be pretty small. Note: I don't know everything about the community here; this is just from the little time I've spent here.

I hope MIRI, transhumanism, cryonics, polyamory etc. are not inherently connected to LessWrong and its approach to rationality?

I still have a cautiously positive view of this community. Even though I dislike some of these fringe opinions, I'm still interested in decision theory and in this kind of approach to rationality, which I don't think is fringe at all, and I'm willing to learn more about it. I'm kind of a slow thinker, and around other people it sometimes feels like I'm less intelligent than they are and that it takes me longer to process things. By making good decisions I could minimize the impact of situations where my well-being depends wholly on quick thinking.

But I don't expect very much practical success, and most of all I think of this as a form of entertainment ("epiphany porn", as you like to call it); when I have more important things to do, I will probably set it aside.

Comment author: octopocta 01 August 2013 07:58:24PM 4 points [-]

Hi there, I'm a biologist turned software engineer, age 34. I came to Less Wrong through Overcoming Bias and HPMOR, and I'm still here because the notions of rationality appeal to me. It is nice to be among others who hold rationality as an ideal to aspire to.

Comment author: ExaminedThought 01 August 2013 10:54:17PM *  17 points [-]

My name is Crystal, and I'm 25 years old. I don't see a lot of female names around here but I guess I'm used to that. I was the only girl in most of my college engineering classes. I was the only female programmer where I worked.

I've always tried to be rational in an intuitive sort of way. Knowing the truth is one of my prime motivators in life, and I've always seemed strange to others because of it.

People so often don't want the truth because it hurts, but I've tended to treat that hurt like a thrill seeker would. I came up with a little motto for myself, "We grow by questioning what we know."

I have a shit ton of curiosity. I ask questions along the lines of "I wonder why..." that make people look at me like I've grown a second head. When ideas sound really strange, I want to investigate them. I grew up reading my dad's moldy Asimov books. I love reading and read tons of non-fiction, sci-fi, and fantasy.

Growing up, I wanted to be a scientist and a writer. I wanted to do both, perhaps, because I thought writers could never make any money. But I also wanted to help colonize space, which from a young age I thought was crucial to humanity’s survival.

After I finished my Computer Engineering and Computer Science degree, I set out to find a job. I'm not totally sure why I picked that major. I think I wanted to build Asimov's robots, but I'm not totally sure of my motivations now. I took the highest paying job offer I received and then I worked like a good worker bee. I was firmly on the Proper Path according to everyone I knew.

And I wanted to shoot myself in the head, because programming business portals and websites for big chain restaurants was so utterly incompatible with how I saw the world and how I wanted to spend my time. I did not think I was contributing anything positive to the world with my work, so I decided to leave my job without knowing what I wanted to do instead. I've been living on my savings, and I think I've been experiencing a dreadful existential crisis.

Through my years of researching ideas that struck me as strange, I've collected more and more traits that normal people find weird. I don't like ever wearing shoes. I'm a minimalist. I don't think advertising is moral. I don't buy much of anything physical. I don't think businesses have much incentive to be healthy or ethical. I'm not a Democrat or a Republican. I don't think voting third party helps. I've lied to my family that I've actually voted because they're horrified that someone wouldn't. I stopped eating meat. I'm an atheist. I've just recently heard of transhumanism and it feels like I've FINALLY found like minded people in that regard. I could come up with more, but that's probably enough.

I started blogging years ago and had no idea what the overall theme of the crap I was writing was. It took me a lot of writing to figure out what I was even trying to say. But I think what I've been writing about is my own convoluted stumbling around in the dark on how to think better. Trying to be rational when I didn't know Rationality was a Discipline.

I stumbled across HP:MoR on July 19, 2013 and read it all within four days. I always thought the snitch was stupid. And Ron was annoyingly dumb. I've read a ton of pop science books and even knew of some of the studies and biases that Harry/Eliezer talked about. Reading it was a weird feeling of focusing so much of what I've been stumbling around with into a Coherent Thing.

Then I moved on to the sequences that were referenced from HP:MoR. I downloaded the ebook copy for my Kindle. I've only read 20% of them, but I feel the very way I think changing again. A lot of it confirms what I already thought, but there's also epiphany after epiphany. When I go to sleep lately, I notice my thoughts are a nonsensical blend of probability this and that.

The sequences are like what I almost blindly wanted to do with my blog. But the sequences are obviously a million times better. What little I've written is put to shame. I would consider myself a Rationalist now only because I am on the path. I am very unconfident in my skills at this point.

Comment author: SaidAchmiz 01 August 2013 11:17:04PM 0 points [-]

Welcome to Less Wrong!

I don't have much else to say, except that several of your "traits that normal people find weird" are ones I share:

I don't think advertising is moral.

I've been approaching that view myself, more and more, but I don't think I've seen this talked about much here (not directly, anyway; a lot of the "Dark Arts" / manipulation discussions are applicable, though). I think it would be cool if you wrote a post or two about your thoughts on this issue. (And/or linked to any related blog posts you might have, if you're willing.)

I don't think businesses have much incentive to be healthy or ethical.

Agreed.

I'm not a Democrat or a Republican. I don't think voting third party helps.

Also agreed. This view, I think many people here share.

I've lied to my family that I've actually voted because they're horrified that someone wouldn't.

Yes, my family has a similar reaction to the idea of not voting.

Comment author: Kawoomba 01 August 2013 11:26:19PM 0 points [-]

I've been approaching that view myself, more and more, but I don't think I've seen this talked about much here (not directly, anyway; a lot of the "Dark Arts" / manipulation discussions are applicable, though). I think it would be cool if you wrote a post or two about your thoughts on this issue. (And/or linked to any related blog posts you might have, if you're willing.)

Click me!

Comment author: SaidAchmiz 01 August 2013 11:40:14PM 0 points [-]

Thanks!

Hm, well, it seems that I agree with the recommendations in the post; I use AdBlock (and get rather angry when certain websites try to guilt-trip me about doing so), and I don't watch commercials on TV (by not watching shows on TV at all). (Here's a question: does anyone know of a way to get rid of ads in YouTube videos?)

Of course, living in a city, it's difficult to avoid advertisements entirely. Billboards are all over the place.

What I'd like to see are discussions about the ethics of advertisement — that is, is it just unethical for companies to use these techniques? (And if so, what forms of advertisement are ok?) Is it unethical to advertise at all? My intuitions say "yes" to the former and "no" to the latter, but I haven't examined said intuitions very deeply.

Comment author: ExaminedThought 02 August 2013 12:48:27AM 0 points [-]

I don't actually see ads on YouTube and assumed it was because of AdBlock.

Comment author: SaidAchmiz 02 August 2013 02:54:30AM 0 points [-]

Aha — it seems the extension you suggested is Adblock Plus (lowercase b), whereas I had been using an unrelated one called AdBlock (capital B, no "Plus"). I've now switched and the YouTube ads seem to be gone!

Comment author: ExaminedThought 02 August 2013 12:47:49AM 0 points [-]

I was going to link to that. You beat me at linking to my own post!

Comment author: Kawoomba 02 August 2013 06:11:43AM 1 point [-]

Welcome to LW. :)

Comment author: homunq 07 August 2013 04:29:25AM 1 point [-]

Note: the post talks about priming research. I made the following comment there:

The research on walking slower was not reproduced in a double-blind study; so they tried to reproduce it with a non-double-blind study and succeeded. In other words, the evidence suggests it was purely a matter of experimenter expectations, not the old-associated words at all.

This doesn’t invalidate your conclusions, but I just wanted to let you know.

In general, a lot of research on priming is statistically dubious. There are a few robust findings, but there's also a lot of stuff that doesn't hold up under closer examination.

Comment author: homunq 07 August 2013 03:52:07AM *  0 points [-]

I'm not a Democrat or a Republican. I don't think voting third party helps.

Also agreed. This view, I think many people here share.

I'm sure many do; I agree with both statements. But I would caution against caching, or worse, identifying with, the belief that voting in general is pointless or otherwise not to be done.

As to my agreement with the beliefs stated: political identification is certainly a mind-killer, so it's a good idea not to identify internally as a member of a political party. Also, the existing major parties, and their leaders, are inevitably badly flawed, but using your single plurality vote (the only one you get in most English-speaking countries) to support a third party candidate isn't going to accomplish anything.

But I'd still encourage people to vote.

I have an ulterior motive for saying this. Personally, I feel the need to have some amount of not-entirely-rational hope to keep me going. I find some of that hope in voting system reform (which is also a gratifyingly interesting hobby). This sort of structural reform has little chance of succeeding if all the people who are unhappy with the current system become identified with not voting.

But even if you do not share my interest in this reform, I think there are times when participating in politics (which generally includes voting as one of the most basic steps) is a sensible and useful thing to do. The major parties will always be very flawed, but there are times when one of the choices on the ballot is clearly more flawed and when the power of participating is significant.

Comment author: SaidAchmiz 07 August 2013 05:50:23AM 0 points [-]

But I would caution against caching, or worse, identifying with, the belief that voting in general is pointless or otherwise not to be done.

Would you caution this more strongly than you might caution against caching, or identifying with, any other comparably-specific belief?

But even if you do not share my interest in this reform, I think there are times when participating in politics (which generally includes voting as one of the most basic steps) is a sensible and useful thing to do.

Let's say we agree that "participating in politics" is a sensible and useful thing to do (I don't, for many nontrivial meanings of the phrase, but this is for the sake of argument). Is voting actually a meaningful, or effective, or necessary way to go about doing so? If so, why and how?

The major parties will always be very flawed, but there are times when one of the choices on the ballot is clearly more flawed and when the power of participating is significant.

Are there many instances when one choice is clearly more flawed, such that you can see this in advance, and you also have a nontrivial chance of affecting the outcome with your participation?

For example, let's say it's 2012, and I think Obama is horrible, just horrible, and that him being re-elected would be a disaster (and I also somehow know that Romney will be a good president). I am in New York. What would you say, roughly, is the chance that with my vote, Romney takes NY, but without my vote, Obama takes NY?

Comment author: homunq 07 August 2013 12:25:57PM 1 point [-]

Would you caution this more strongly than you might caution against caching, or identifying with, any other comparably-specific belief?

Depends on what you mean by "comparably-specific". The belief I spoke of was a generalization: that because a certain set of elections were not worth worrying about, that all future elections will not be. A notable feature of elections is their variability; it is clearly the case that results vary.

Is voting actually a meaningful, or effective, or necessary way to go about [participating in politics]? If so, why and how?

A single vote is massively unlikely to affect anything important. Political campaigns, however, can have a reasonable probability of doing so. Campaigns are about convincing large numbers of people to vote in a certain way. The messages you put out about whether or not you intend to vote affect your friends. A 2012 study using a facebook button showed that by voting themselves, individuals could bring 4.5 other voters to the polls. Obviously the specific circumstances of that study are not likely to repeat, but the overall message, that it's about more than just your one vote, is likely to apply more generally. If you intend to canvass or phonebank, of course, this is even more relevant; voting yourself is likely a better investment than trying to lie effectively about whether you believe individual votes matter.

Are there many instances when one choice is clearly more flawed, such that you can see this in advance, and you also have a nontrivial chance of affecting the outcome with your participation?

Again, we'd have to define the terms, but if you have a significant altruistic term in your utility function I think it's a good bet.

Your choices are to be a habitual voter, a habitual nonvoter, or an occasional voter based on individual calculations of the expected value of each election. Whichever choice you make is leaky; if you have friends, they will be influenced by your decision. In this circumstance, being an occasional voter seems unlikely to be rational; your outlay on calculating the expected value, and the reduced contagion of your voting decision even when you do find that a specific election is worth it, probably overwhelm the trivial effort you save by not voting.

So the question is, is it worth a few hours a year to be a habitual voter? It would be easy to overestimate the cost, but remember, this should be compared not against the most effective possible use of those hours, but against the average effectiveness of your non-work hours. In dollar terms, this is probably a lifetime cost in the high four or low five figures. There is at least 10 times that money at stake in even the most trivial local election. You have to discount that by the weight of the altruism term in your utility function and by the average difference in quality between frontrunners, but for me those terms together shrink it by less than half an order of magnitude, so I'll ignore them.

So if there's better than a 10-30% chance that you will participate in an election with a margin of under around 5 votes (your vote plus the net margin of your social penumbra divided by two) in your lifetime, then voting is worth it. At 4 small local elections a year for 50 years, that means that if average margins are less than about 600-2000 votes on those elections, then it's likely to be worth it, without accounting for any intrinsic values (such as the feeling of having participated). That's in the right ballpark.
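The threshold arithmetic above can be sanity-checked with a quick sketch (my own crude model, not part of the original estimate: it assumes each election's margin is roughly uniform, so the chance of any one election landing within the 5-vote window is about 5 divided by the typical margin):

```python
def lifetime_decisive_probability(n_elections, window, typical_margin):
    # Crude assumption: the chance any one election lands within
    # `window` votes is roughly window / typical_margin.
    p_single = min(1.0, window / typical_margin)
    # Chance that at least one of n_elections is that close.
    return 1 - (1 - p_single) ** n_elections

# 4 small local elections a year for 50 years, 5-vote effective window
for margin in (600, 2000):
    p = lifetime_decisive_probability(4 * 50, 5, margin)
    print(f"typical margin {margin}: P(decisive at least once) ~ {p:.2f}")
```

Under this toy model, 200 elections give a lifetime probability of roughly 0.81 for 600-vote typical margins and roughly 0.39 for 2000-vote margins, both comfortably above the 10-30% threshold.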

What would you say, roughly, is the chance that with my vote, Romney takes NY, but without my vote, Obama takes NY?

Roughly zero. And you'd multiply that by the chances that the national election swung on NY, which are also small. So great, you've found an example where voting wasn't worth it. Do you think it's safe to generalize from that example?

As I argued above, the main value of being a habitual voter is in convincing your friends to vote in small local elections; and yet you will probably spend more time talking with them about Obama and Romney than about your local sheriff or school board or judge or public transit administrator. That's not logical, but that's how people are.

Comment author: SaidAchmiz 07 August 2013 01:18:50PM 0 points [-]

Roughly zero. And you'd multiply that by the chances that the national election swung on NY, which are also small. So great, you've found an example where voting wasn't worth it. Do you think it's safe to generalize from that example?

For someone who lives in New York? Yes. Yes it is.

(will respond to rest of your post later)

Comment author: SaidAchmiz 07 August 2013 04:11:04PM 0 points [-]

The messages you put out about whether or not you intend to vote affect your friends. A 2012 study using a facebook button showed that by voting themselves, individuals could bring 4.5 other voters to the polls.

I barely have 4.5 people that I ever discuss politics with, and all of their political views are at least as established as mine. I would be surprised if my voting brought so much as one other voter to the polls.

If you intend to canvass or phonebank, of course, this is even more relevant;

Good god, no!

Whichever choice you make is leaky; if you have friends, they will be influenced by your decision.

This is contrary to my experience.

your outlay on calculating the expected value [of voting], and the reduced contagion of your voting decision even when you do find that a specific election is worth it, probably overwhelm the trivial effort you save by not voting. [...] So the question is, is it worth a few hours a year to be a habitual voter?

Am I really likely to spend more effort on deciding whether to vote than on deciding whom to vote for? Especially in local elections?

The problem is not that deciding to vote is itself some difficult, complex decision. The problem (well, a problem, anyway) is that in any election where I'm even remotely likely to influence the outcome (i.e. local elections), I have to spend a tremendous effort to even get enough relevant information about the candidates to make an informed decision, much less consider and analyze said information. And this isn't even factoring in the effort required to have a sufficient understanding of "the issues", and the political process, etc., all of which are crucial in figuring out what the effects of your vote will be.

One of my friends engages in political advocacy, votes, canvasses, researches candidates, and all that stuff. I see how much of her time it takes up. Personally, I think it's a colossal waste of her intelligence and talents. She could be writing, for example (which she does also, to be fair, but she could be writing more), or doing something else far more interesting and productive.

Also:

It would be easy to overestimate the cost, but remember, this should be compared not against the most effective possible use of those hours, but against the average effectiveness of your non-work hours.

How do you figure this? Why aren't we comparing to work hours? And why are we valuing non-work hours only in money earned?

Comment author: homunq 07 August 2013 06:18:16PM *  1 point [-]

I think we've mostly said what we have to say, and this is off-topic.

My numbers showed that at best voting is instrumentally a break-even proposition. I do it because I find it hedonically rational; for instance, I don't have to lie to my family about it. Part of what makes it a net plus for me hedonically is that I have a vision and a plan for a world where a better voting system (such as approval voting or SODA voting) is used and so I am not doomed to eternally pick the lesser of two evils. I can understand if Crystal makes a different decision for her own hedonic reasons.

I also suspect that metarational considerations such as timeless decision theory would argue in favor of it, because free riding on other people's voting effort is akin to betrayal in a massively-multiplayer prisoners' dilemma. I have not worked out the math on that, but my mathematical intuition tends to be pretty good.

Your description of your friends' advocacy suggests you are attached to the idea that politics is a waste of time, not just for you, but for others. I suspect that belief of yours is not making you or anyone else happier. I recognize that you could probably make the converse criticism of me, but I am happy to prefer a world where aspiring rationalists vote to one where they don't (even when their vote would probably be negatively correlated with mine, as I suspect yours would be).

Comment author: Desrtopa 07 August 2013 06:51:38PM 0 points [-]

I think most of your points here are well made, but

How do you figure this? Why aren't we comparing to work hours? And why are we valuing non-work hours only in money earned?

Most people do not have the option to add more hours of work and thereby receive more money at the same rate. If you work a salaried 9-5, it's misleading to calculate the value of your time as if your hours not already committed to work could be converted to money at the same rate, and even if you do work at a job that allows you to work overtime hours, you'll generally only have the choice of whether to make that tradeoff for specific hours out of your week, not any hour as-desired.

If you're typically employed, your work hours are already committed, so for the most part you only need to evaluate the tradeoffs on your remaining hours.

Comment author: SaidAchmiz 07 August 2013 08:32:09PM 0 points [-]

Well, all of that is actually false for me, as I can work my hours whenever I feel like it, but that's moot; I feel like your comment addresses a point other than the one I made.

What I meant was — are we stipulating that voting necessarily takes place during hours when I can't work? Why? That seems unwarranted.

Also, I repeat this part of my question, which none of the above reasoning touches at all:

And why are we valuing non-work hours only in money earned?

Let's say I work a salaried 9-5, have no option to work more, and vote after I leave work.

There's still some opportunity cost. Maybe I miss my favorite TV show or my WoW raid or whatever. Maybe I don't get to spend as much time with my family. Maybe I get less sleep. Why should we ignore such costs?

Comment author: Desrtopa 07 August 2013 10:13:43PM 0 points [-]

I agree that it's not wise to ignore the associated opportunity costs, but it's a rather common fallacy (at least, one that's popped up quite often here) that one's time is fungible for money at the rate one is compensated for work.

On the other hand, for many individuals there are also likely to be associated gains, such as the fact that voting tends to be widely viewed as an effective signal of conscientiousness. Personally, whatever my feelings about the likelihood of my vote having a meaningful effect on the course of an election, I would prefer most of my acquaintances to think of me as the sort of person who votes.

Comment author: SaidAchmiz 07 August 2013 11:12:05PM 0 points [-]

I, on the other hand, would really rather not be thought of as the sort of person who votes.

Who are your acquaintances that they view voting as an effective signal of conscientiousness? Like... normal people, or something? Because that's weird.

Comment author: TheOtherDave 07 August 2013 08:32:36PM *  0 points [-]

I have to spend a tremendous effort to even get enough relevant information about the candidates to make an informed decision

I waffle about this a lot.

Sure, one effect -- perhaps even the overwhelmingly primary effect -- of my vote is to influence which candidate gets elected, and to use that power responsibly I have to know enough to decide which candidate would be better to elect, which requires tremendous effort. (Of course, that's only an argument for not-voting if responsibly using my power to not-vote doesn't require equal knowledge/effort, but either way that's beside my point.)

But another effect is to reward or punish campaigns, which has an effect on the kind of campaigns that get run in the future, and it often seems to me that this is worth doing and requires less knowledge to do usefully.

Of course, the magnitude of the effects in question are so miniscule it's hard to care very much in either case.

Comment author: Vaniver 01 August 2013 11:21:54PM 2 points [-]

Welcome!

I don't like ever wearing shoes.

Have you tried out Vibrams? I have found them to be a delightful shoe replacement.

I am very unconfident in my skills at this point.

That feeling will fade as you read and do more. I do want to call back to something you said earlier, though:

I've always tried to be rational in an intuitive sort of way.

This is where you want to end up; it's one thing to talk a good game about biases, and another to understand them on the five second level. While reading through the sequences, it's helpful to try to turn the epiphanies into actions or reactions, rather than just abstract knowledge.

I did not think I was contributing anything positive to the world with my work.

If you are interested in putting your programming skills to work on rationality education, you might want to get to know some people at CFAR; there are a number of useful things that could exist but don't yet because no one has programmed them. (Here's an example of one of the useful things that does exist.)

Comment author: ExaminedThought 02 August 2013 12:58:45AM 2 points [-]

I do have a pair of Vibrams! The sprint model. Those and flip flops are all I wear.

I'm not sure how to turn most of the epiphanies into actions. But I try to think of examples of how myself or others have failed at a particular aspect of it. Is that what you mean by reactions? I'm the type of person to read it all as fast as possible and then go back and try to implement specific actions during a reread. Although some of the general frame of mind is already rubbing off on me I think.

Thank you for the suggestion about CFAR. I will be looking into it.

Comment author: Vaniver 02 August 2013 02:37:54AM 0 points [-]

But I try to think of examples of how myself or others have failed at a particular aspect of it. Is that what you mean by reactions?

Sort of. The main thing is identifying a situation that will trigger a behavior. For example, whenever I notice I'm the least bit confused, I say out loud "I notice I am confused." This is an atomic action that I can do out of habit, and which will make me much more likely to follow up on the confusion. Oftentimes, this will be something like saying "event is on Saturday the 25th," and then noticing that Saturday isn't the 25th. This is something I really ought to get to the bottom of, because thinking the event is on the wrong day will lead to missing the event, which is totally preventable at this point if I notice my confusion.

Most people have defaults against noticing this sort of thing, though (I know I definitely did, even knowing a lot of decision science and a lot about biases). Having a specific plan of action makes it way easier to react the right way in the moment, and having a workaround for one bias is better than knowing about twenty biases.

I'm the type of person to read it all as fast as possible and then go back and try to implement specific actions during a reread.

This is a better approach, I think, but I'm leery of recommending it because enough people have trouble reading through the sequences one time that suggesting it two times seems like asking too much.

Comment author: SaidAchmiz 02 August 2013 02:44:44AM 2 points [-]

This is a better approach, I think, but I'm leery of recommending it because enough people have trouble reading through the sequences one time that suggesting it two times seems like asking too much.

I know this isn't true for everyone, but for me, Eliezer's writing is really fun to read; I've reread many of his posts just on that basis. The Sequences do have some dense parts, but for most parts, I couldn't tear myself away.

Comment author: [deleted] 02 August 2013 11:58:18AM 3 points [-]

I don't see a lot of female names around here

It's not like your username sounds obviously feminine either, so how confident are you about whether a given user (except the obvious ones, say lukeprog or NancyLebovitz) is male or female?

But yes, according to the last survey, only around 10% of the people here are women, and even fewer among the most prolific contributors.

Comment author: ExaminedThought 02 August 2013 06:14:36PM 0 points [-]

I wouldn't assume about the ones that aren't actual names. But I also wouldn't have guessed the number was as low as 10%!

Comment author: Kawoomba 02 August 2013 06:20:04PM *  3 points [-]

Well, given that LW is/was* predominantly appealing to STEM-types, with a focus on computer science-y topics (artificial intelligence), decision theory etc., it's no wonder that the gender gap here reflects the gender gap in e.g. computer science colleges:

Figures from the Computing Research Association Taulbee Survey indicate that less than 12% of Computer Science bachelor's degrees were awarded to women at US PhD-granting institutions in 2010-11. (Source)

Edit: * "was" because Harry Potter!

Comment author: ExaminedThought 02 August 2013 07:00:30PM *  1 point [-]

Digging through the survey, I'm surprised to see Myers Briggs types listed. I was wondering if LWers considered it to be pseudoscience before I even saw the question.

Comment author: noahpocalypse 03 August 2013 04:07:20AM 1 point [-]

I also prefer bare feet, though to a lesser extent. I hate wearing just socks, but I don't mind wearing worn tennis shoes that bend easily.

Comment author: wedrifid 07 August 2013 07:57:58AM 1 point [-]

I've lied to my family that I've actually voted because they're horrified that someone wouldn't.

I applaud your pragmatic response to ridiculous social pressure.

Comment author: F_Csuy 02 August 2013 02:08:21PM 5 points [-]

My name is Forrest. I'm 20 and studying undergraduate Physics and Computer Science at the University of Maryland. About two years ago, one of my friends introduced me to HPMoR and I was instantly hooked. A few months ago, before the final plot arc came out, I decided I was tired of waiting for HJPEV and came here to learn about the Methods of Rationality themselves from the source. I spent a few months lurking, read many of the sequences, and now decided to actually go about making an account. So, here I am!

Comment author: [deleted] 02 August 2013 03:29:16PM 7 points [-]

Hello. My name's Graedon. I'm 16, and I've got absolutely no idea of what I'm doing.

First off, I probably ended up on this site the same way a lot of people did: through MoR. I started reading it for fun, but soon the cool sciency stuff started to appeal more than the cool magicy stuff. I followed the link to LessWrong.com, and here I am.

Lurking.

That's pretty much it.

Comment author: MondSemmel 02 August 2013 04:50:41PM *  7 points [-]

Hi! My name is Tobias. I'm from Munich in Germany, male, 24 years old, and currently doing a Master's degree in physics at LMU Munich. I'm doing okay to good in my studies, but I still struggle with procrastination (though things have gotten better) and low motivation. In particular, while I like physics in the abstract, I don't particularly enjoy the reality of studying physics at a university. Most importantly, I'm totally unambitious, and not satisfied with that. I'll be finished with my studies in ~1.5 years, so I'm currently trying to plan what to do afterwards.

I first came upon Less Wrong when a friend of mine recommended HPMoR to me in ~11/2012. A while ago, I decided I'd use my current semester holidays to benefit from the resources and community on Less Wrong, and to find something genuinely useful to do in life. Any suggestions?

For instance, x-risk already sounds interesting, though I'm nowhere near good enough at math to even consider MIRI research a valid option. Is there room for mortals anywhere in the broader field of x-risk reduction?

In a related question, do you have any ideas for topics of interest to e.g. transhumanists, which could be suited for a Master's or PhD thesis in physics, and for which finding a supervisor does not sound straight out impossible?

Basically, if you were in my position and had ~2 months to decide on a plan/goal/cause/short-term trajectory to maximize your impact in life (whatever that means), what would you do?

Considering my interest in the natural sciences, I guess I'd call myself an (aspiring?) epistemic rationalist. So far, I haven't had much success with instrumental rationality, though, considering my persisting problems with issues like procrastination or perfectionism. On the other hand, this year I finally managed to overcome 8+ years of sleeping issues by attacking the problem in what I would call a rational, comprehensive manner. (I will read the sequence The Science of Winning at Life next.)

I intend to read all the sequences eventually; so far, I've only read How to Actually Change Your Mind, The Map and the Territory, and Mysterious Answers to Mysterious Questions.

Comment author: shminux 02 August 2013 05:47:42PM *  -1 points [-]

In a related question, do you have any ideas for topics of interest to e.g. transhumanists, which could be suited for a Master's or PhD thesis in physics, and for which finding a supervisor does not sound straight out impossible?

Well, various incarnations of Many Worlds/Mathematical Universe/String theory landscapes/Boltzmann brains are popular both here and in many Physics circles. While I don't hold much stock in any of those, there are surely some tenure profs in physics departments around the world who would take a sucker grad student willing to spend 4-6 years on something like that.

Comment author: MondSemmel 02 August 2013 04:55:43PM 5 points [-]

[Meta comment: In the welcome post, the links to the open threads link to two different tags, with different time dates. This is confusing. One of them hasn't been updated since 10/2011. If you fix this, you might have to do the same in the template for creating new welcome threads. Also, I think the same issue exists elsewhere on the site, e.g. in the Less Wrong FAQ.]

Comment author: KnaveOfAllTrades 11 August 2013 01:38:20AM 1 point [-]

Thanks for the heads up. Post fixed. Template fixed. I've replaced the single, different links with two links, each pair covering Main and Discussion open threads. If anyone knows a way to use one link to get both Main and Discussion open threads, please comment here and PM me.

Comment author: metastable 04 August 2013 03:10:30AM 10 points [-]

Hi! I first saw LW as a node on a map of neoreactionary web sites. Which I guess is a pretty weird way to find it, since I'm not myself a neoreactionary and LW doesn't seem to fit the map. You have to stretch pretty far to connect some of those nodes.

Fortunately, I took a look at the Less Wrong community, and it's been really interesting to explore. I figured I should introduce myself, since I posted in another thread. I'm in my early 30's and I'm studying in the life sciences at the postgraduate level. I'm a Christian. I'm also a married father, and a veteran. So. Probably somewhat atypical (I peeked at the survey results.)

I'm excited by several of the big problems that seem to animate LW: minimizing cognitive bias day-to-day, optimizing philanthropy, and working through received ideology. I know zip about AI, but addressing existential risk is really interesting to me indirectly, as it relates to forecasting and mitigating mere catastrophes*, a challenge for wonks and technocrats and scientists (and everybody, of course). In fact, if anybody knows of LW'ers or other rationalists interested in policy problems of that nature I'd be super grateful for a pointer or a link.

In conclusion, I read ZeroHedge far too much, sometimes wear Vibrams, and am thrilled to meet all of you.

*is there a better word? My jargon is level 0.

Comment author: Nornagest 04 August 2013 03:35:49AM *  3 points [-]

I first saw LW as a node on a map of neoreactionary web sites [...] LW doesn't seem to fit the map. You have to stretch pretty far to connect some of those nodes.

That brings up some interesting questions. The last survey placed self-identified neoreactionaries as a very small percentage of LW readership (scroll down to "Alternate Politics Question"). Progressivism appears to be the most popular political philosophy around here, with libertarianism a strong competitor; nothing else is in the running.

That's not the first time I've heard LW referred to as a neoreactionary site, though; once might be coincidence, but twice needs explanation. With the survey in mind it's clearly not a matter of explicitly endorsed philosophy, so I'm left to assume that we're propagating ideas or cultural artifacts that're popular in neoreactionary circles. I'm not sure what those might be, though. It might just be our general skepticism of academically dominant narratives, but that seems like too glib an explanation to me.

Comment author: Kzickas 04 August 2013 04:34:09AM *  0 points [-]

The impression I got from looking at their graph is that a strong libertarian component is enough by itself. It wouldn't be the first time I've seen people consider libertarianism inherently very regressive.

Edit: Originally I assumed that it was accusing Less Wrong of being neoreactionary, but looking a bit around the site it looks like they might be praising it.

Comment author: Nornagest 04 August 2013 05:39:41AM 3 points [-]

I don't think that's a powerful enough explanation. Setting aside the differences between libertarianism and neoreaction, there are far more libertarian-leaning blogs than that graph can account for, and many of the missing ones are more popular than we are.

Comment author: metastable 04 August 2013 06:25:32AM 0 points [-]

I agree.

It might be worth noting that in this thread, the other thread where we just crossed paths, there are two different posters who blog at other nodes in that graph.

Comment author: Viliam_Bur 04 August 2013 09:16:16PM *  3 points [-]

Could this be explained by the base rates?

Imagine a society with 10 neoreactionaries and 10000 liberals (or any other mainstream political group). Let's suppose that 5 of the neoreactionaries and 500 of the liberals read LessWrong.

In this society, neoreactionaries would consider LessWrong one of "their" websites, because half of them are reading it. Yet the LessWrong survey would show that neoreactionaries are just a tiny minority of its readers.
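Viliam's base rates can be made concrete with a short sketch (the numbers are his hypothetical ones, not survey data):

```python
# Viliam's hypothetical society: 10 neoreactionaries, 10000 liberals.
nr_total, lib_total = 10, 10000
nr_on_lw, lib_on_lw = 5, 500

# Inside view: the fraction of each group that reads LessWrong.
p_lw_given_nr = nr_on_lw / nr_total      # 0.5 -- "half of us read LW"
p_lw_given_lib = lib_on_lw / lib_total   # 0.05

# Outside view: the fraction of LessWrong readers in each group.
p_nr_given_lw = nr_on_lw / (nr_on_lw + lib_on_lw)  # 5/505, about 1%

print(p_lw_given_nr, p_lw_given_lib, round(p_nr_given_lw, 3))
```

Both perceptions are correct at once: the inside-view number (50%) and the outside-view number (~1%) are simply different conditional probabilities over the same population.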

Comment author: Nornagest 05 August 2013 12:02:34AM 1 point [-]

That's a heck of a coincidence, but it would explain a perception among neoreactionaries. It wouldn't, however, explain perceptions among (to use your example) liberals; unless the latter spend a lot of time reading blogs from the former, they're probably going to be using an outside view, which would give them the same ratios we see in the survey. Out in the wild, I've seen the characterization coming from both sides.

Although the graph in the ancestor is from a neoreactionary blog.

Comment author: Kzickas 05 August 2013 12:24:07PM *  1 point [-]

While I'm not sure what "neoreactionary" refers to specifically, there are lots of reasons that certain types of liberals see LessWrong as reactionary:

  • A somewhat strong libertarian component
  • Belief in evolutionary psychology
  • Anti-religious (or generally the belief that beliefs can be right or wrong)
  • LessWrong's more technical understanding of evidence is incompatible with standpoint theory and similar epistemic frameworks favored by some groups of liberals.
  • Those older discussions around PUA where it's presented in a pretty positive light
  • Glorification of the Enlightenment.

Comment author: Vaniver 04 August 2013 09:46:53PM 1 point [-]

That's not the first time I've heard LW referred to as a neoreactionary site, though; once might be coincidence, but twice needs explanation. With the survey in mind it's clearly not a matter of explicitly endorsed philosophy, so I'm left to assume that we're propagating ideas or cultural artifacts that're popular in neoreactionary circles. I'm not sure what those might be, though. It might just be our general skepticism of academically dominant narratives, but that seems like too glib an explanation to me.

Viliam's explanation seems like a strong one to me, but doesn't explain the historical accident of (to use his made up numbers) half of neoreactionaries reading LW.

I suspect that LW has a vibe of "actually think through everything, question your implicit assumptions, and follow logic to its conclusion." The neoreactionary believes that doing so ends up at the neoreactionary position- even if that is true for only 1% of people, that leads to a 10X higher concentration of neoreactionaries at LW. At the very least, it seems that LW has a strong tendency to destroy strong political leanings, and especially affection for popular government-supporting narratives.
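Vaniver's 10X figure follows from an implied base rate; a minimal sketch (the 0.1% general-population rate is an assumption, chosen so that his 1%-at-LW figure works out):

```python
# Assumed base rate of neoreactionaries in the general population (hypothetical).
base_rate = 0.001   # 0.1%

# Vaniver's figure: "follow logic to its conclusion" lands 1% of LW readers there.
lw_rate = 0.01

# Relative concentration of neoreactionaries at LW vs. the general population.
concentration_factor = lw_rate / base_rate  # ~10X
```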

Comment author: telms 05 August 2013 01:22:15AM *  9 points [-]

Hi, everyone. My name is Teresa, and I came to Less Wrong by way of HPMOR.

I read the first dozen chapters of HPMOR without having read or seen the Harry Potter canon, but once I was hooked on the former, it became necessary to see all the movies and then read all the books in order to get the HPMOR jokes. JK Rowling actually earned royalties she would never have received otherwise thanks to HPMOR.

I don't actually identify as a pure rationalist, although I started out that way many, many years ago. What I am committed to today is SANITY. I learned the hard way that, in my case at least, it is the body that keeps the mind sane. Without embodiment to ground meaning, you get into problems of unsearchable infinite regress, and you can easily hypothesize internally consistent worlds that are nevertheless not the real world the body lives in. This can lead to religions and other serious delusions.

That said, however, I find a lot of utility in thinking through the material on this site. I discovered Bayesian decision theory in high school, but the texts I read at the time either didn't explain the whole theory or else I didn't catch it all at age 14. Either way, it was just a cute trick for calculating compound utility scores based on guesses of likelihood for various contingencies. The greatest service the Less Wrong site has done for me is to connect the utility calculation method to EMPIRICAL prior probabilities! Like, duh! A hugely useful tool, that is.
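The connection described above, an expected-utility calculation fed by empirical rather than guessed priors, can be sketched briefly (the rain example and all numbers are hypothetical):

```python
# Expected utility with an empirically estimated prior (all numbers hypothetical).
# Empirical prior: it rained on 62 of the last 200 comparable days.
p_rain = 62 / 200  # 0.31, from records rather than a gut feeling

# Utilities (arbitrary units) for each action/outcome pair.
utility = {
    ("umbrella", "rain"): 5,
    ("umbrella", "dry"): -1,     # mild nuisance of carrying it
    ("no_umbrella", "rain"): -10,
    ("no_umbrella", "dry"): 2,
}

def expected_utility(action):
    return (p_rain * utility[(action, "rain")]
            + (1 - p_rain) * utility[(action, "dry")])

best = max(["umbrella", "no_umbrella"], key=expected_utility)
```

The structure is the same "compound utility score" calculation; the difference is that the prior comes from counting observed outcomes instead of guessing.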

As a professional writer in my day job and student of applied linguistics research otherwise, I have some reservations about those of the Sequences that reference the philosophy of language. I completely agree that Searle believes in magic (aka "intentionality"), which is not useful. But this does not mean the Chinese Room problem isn't real.

When you study human language use empirically in natural contexts (through frame-by-frame analysis of video recordings), it turns out that what we think we do with language and what we actually do are rather divergent. The body and places in the world and other agents in the interaction all play a much bigger role in the real-time construction of meaning than you would expect from introspection. Egocentric bias has a HUGE impact on what we imagine about our own utterances. I've come to the conclusion that Stevan Harnad is absolutely correct, and that machine language understanding will require an AI ROBOT, not a disembodied algorithmic system.

As for HPMOR, I hereby predict that Harrymort is going to go back in time to the primal event in Godric's Hollow and change the entire universe to canon in his quest to, er, spoilers, can't say.

Cheers.

Comment author: Swimmer963 05 August 2013 02:10:06AM 0 points [-]

Welcome!

Without embodiment to ground meaning, you get into problems of unsearchable infinite regress, and you can easily hypothesize internally consistent worlds that are nevertheless not the real world the body lives in. This can lead to religions and other serious delusions.

Yeah. This, and the "existential angst" thing, seem to be common problems on LW, and I've never been sure why. I think that keeping yourself busy doing practical stuff prevents it from becoming an issue.

When you study human language use empirically in natural contexts (through frame-by-frame analysis of video recordings), it turns out that what we think we do with language and what we actually do are rather divergent. The body and places in the world and other agents in the interaction all play a much bigger role in the real-time construction of meaning than you would expect from introspection.

That's fascinating! What research has been done on this? I would totally be interested in reading more about it.

Comment author: telms 05 August 2013 02:38:02AM *  5 points [-]

Jurgen Streeck's book Gesturecraft: The manu-facture of meaning is a good summary of Streeck's cross-linguistic research on the interaction of gesture and speech in meaning creation. The book is pre-theoretical, for the most part, but Streeck does make an important claim that the biological covariation in a speaker or hearer across the somatosensory modes of gesture, vision, audition, and speech does the work of abstraction -- which is an unsolved problem in my book.

Streeck's claim happens to converge with Eric Kandel's hypothesis that abstraction happens when neurological activity covaries across different somatosensory modes. After all, the only things that CAN covary across, say, musical tone changes in the ear and dance moves in the arms, legs, trunk, and head, are abstract relations. Temporal synchronicity and sequence, say.

Another interesting book is Cognition in the Wild by Edwin Hutchins. Hutchins goes rather too far in the direction of externalizing cognition from the participants in the act of knowing, but he does make it clear that cultures build tools into the environment that offload thinking function and effort, to the general benefit of all concerned. Those tools get included by their users in the manufacture of online meaning, to the point that the online meaning can't be reconstructed from the words alone.

The whole field of conversation analysis goes into the micro-organization of interactive utterances from a linguistic point of view rather than a cognitive perspective. The focus is on the social and communicative functions of empirically attested language structures as demonstrated by the speakers themselves to one another. Anything written by John Heritage in that vein is worth reading, IMO.

EDIT: Revised, consolidated, and expanded bibliography on interactive construction of meaning:

LINGUISTICS

  • Philosophy in the Flesh, by George Lakoff and Mark Johnson

  • Women, Fire and Dangerous Things, by George Lakoff

  • The Singing Neanderthals, by Steven Mithen

CONVERSATION ANALYSIS & GESTURE RESEARCH

  • Handbook of Conversation Analysis, by Jack Sidnell & Tanya Stivers

  • Gesturecraft: The Manu-facture of Meaning, by Jurgen Streeck

  • Pointing: Where Language, Culture, and Cognition Meet, by Sotaro Kita

  • Gesture: Visible Action as Utterance, by Adam Kendon

  • Hearing Gesture: How Our Hands Help Us Think, by Susan Goldin-Meadow

  • Hand and Mind: What Gestures Reveal about Thought, by David McNeill

COGNITIVE PSYCHOLOGY

  • Symbols and Embodiment, edited by Manuel de Vega, Arthur M Glenberg, & Arthur C Graesser

  • Cognition in the Wild, Edwin Hutchins

Comment author: Swimmer963 07 August 2013 02:51:34PM 1 point [-]

Thanks! Neat.

Comment author: Bugmaster 05 August 2013 02:22:20AM *  4 points [-]

I've come to the conclusion that Stevan Harnad is absolutely correct, and that machine language understanding will require an AI ROBOT, not a disembodied algorithmic system.

I am not familiar with Stevan Harnad, but this sounds counterintuitive to me (though it's very likely that I'm misunderstanding your point). I am currently reading your words on the screen. I can't hear you or see your body language. And yet, I can still understand what you wrote (not fully, perhaps, but enough to ask you questions about it). In our current situation, I'm not too different from a software program that is receiving the text via some input stream, so I don't see an a priori reason why such a program could not understand the text as well as I do.

Comment author: SaidAchmiz 05 August 2013 02:36:42AM 3 points [-]

I assume telms is referring to embodied cognition, the idea that your ability to communicate with her, and achieve mutual understanding of any sort, is made possible by shared concepts and mental structures which can only arise in an "embodied" mind.

I am rather skeptical about this thesis as far as artificial minds go; somewhat less skeptical about it if applied only to "natural" (i.e., evolved) minds — although in that case it's almost trivial; but in any case don't know enough about it to have a fully informed opinion.

Comment author: Bugmaster 05 August 2013 03:43:02AM 2 points [-]

Oh, ok, that makes more sense. As far as I understand, the idea behind embodied cognition is that intelligent minds must have a physical body with a rich set of sensors and effectors in order to develop; but once they're done with their development, they can read text off of the screen instead of talking.

That definitely makes sense in case of us biological humans, but just like you, I'm skeptical that the thesis applies to all possible minds at all times.

Comment author: telms 05 August 2013 05:09:25AM 2 points [-]
Comment author: Bugmaster 05 August 2013 07:16:52AM 5 points [-]

I skimmed both papers, and found them unconvincing. Granted, I am not a philosopher, so it's likely that I'm missing something, but still:

In the first paper, Harnad argues that rule-based expert systems cannot be used to build a Strong AI; I completely agree. He further argues that merely building a system out of neural networks does not guarantee that it will grow to be a Strong AI either; again, we're on the same page so far. He further points out that, currently, nothing even resembling Strong AI exists anywhere. No argument there.

Harnad totally loses me, however, when he begins talking about "meaning" as though that were some separate entity to which "symbols" are attached. He keeps contrasting mere "symbol manipulation" with true understanding of "meaning", but he never explains how we could tell one from the other.

In the second paper, Harnad basically falls into the same trap as Searle. He lampoons the "System Reply" by calling it things like "a predictable piece of hand-waving" -- but that's just name-calling, not an argument. Why precisely is Harnad (or Searle) so convinced that the Chinese Room as a whole does not understand Chinese ? Sure, the man inside doesn't understand Chinese, but that's like saying that a car cannot drive uphill at 70 mph because no human driver can run uphill that fast.

The rest of his paper amounts to a moving of the goalposts. Harnad is basically saying, "Ok, let's say we have an AI that can pass the TT via teletype. But that's not enough ! It also needs to pass the TTT ! And if it passes that, then the TTTT ! And then maybe the TTTTT !" Meanwhile, Harnad himself is reading articles off his screen which were published by other philosophers, and somehow he never requires them to pass the TTTT before he takes their writings seriously.

Don't get me wrong, it is entirely possible that the only way to develop a Strong AI is to embody it in the physical world, and that no simulation, no matter how realistic, will suffice. I am open to being convinced, but the papers you linked are not convincing. I'm not interested in figuring out whether any given person who appears to speak English really, truly understands English; or whether this person is merely mimicking a perfect understanding of English. I'd rather listen to what such a person has to say.

Comment author: SaidAchmiz 07 August 2013 06:21:15AM 6 points [-]

Why precisely is Harnad (or Searle) so convinced that the Chinese Room as a whole does not understand Chinese ?

Haven't read the Harnad paper yet, but the reason Searle's convinced seems obvious to me: he just doesn't take his own scenario seriously — seriously enough to really imagine it, rather than just treating it as a piece of absurd fantasy. In other words, he does what Dennett calls "mistaking a failure of imagination for an insight into necessity".

In The Mind's I, Dennett and Hofstadter give the Chinese Room scenario a much more serious fictional treatment, and show in great detail what elements of it trigger Searle's intuitions on the matter, as well as how to tweak those intuitions in various ways. Sadly but predictably, Searle has never (to my knowledge) responded to their dissection of his views.

Comment author: wedrifid 07 August 2013 06:56:16AM 3 points [-]

In other words, he does what Dennett calls "mistaking a failure of imagination for an insight into necessity".

I like the expression and can think of times when I have looked for something that expresses this all-too-common practice simply.

Comment author: SaidAchmiz 07 August 2013 06:25:50PM 5 points [-]

Having now read the second linked Harnad paper, my evaluation is similar to yours. Some more specific comments follow.

Harnad talks a lot about whether a body "has a mind": whether a Turing Test could show if a body "has a mind", how we know a body "has a mind", etc.

What on earth does he mean by "mind"? Not... the same thing that most of us here at LessWrong mean by it, I should think.

He also refers to artificial intelligence as "computer models". Either he is using "model" quite strangely as well... or he has some... very confused ideas about AI. (Actually, very confused ideas about computers in general are, in my experience, endemic among the philosopher population. It's really rather distressing.)

Searle has shown that a mindless symbol-manipulator could pass the [Turing Test] undetected.

This has surely got to be one of the most ludicrous pronouncements I've ever seen a philosopher make.

people can do a lot more than just communicating verbally by teletype. They can recognize and identify and manipulate and describe real objects, events and states of affairs in the world. [italics added]

One of these things is not like the others...

Similar arguments can be made against behavioral "modularity": It is unlikely that our chess-playing capacity constitutes an autonomous functional module, independent of our capacity to see, move, manipulate, reason, and perhaps even to speak.

Well, maybe our chess-playing module is not autonomous, but as we have seen, we can certainly build a chess-playing module that has absolutely no capacity to see, move, manipulate, or speak.

Most of the rest of the paper is nonsensical, groundless handwaving, in the vein of Searle but worse. I am unimpressed.

Comment author: Bugmaster 08 August 2013 08:57:04PM *  0 points [-]

What on earth does he mean by "mind"?

Yeah, I think that's the main problem with pretty much the entire Searle camp. As far as I can tell, if they do mean anything by the word "mind", then it's "you know, that thing that makes us different from machines". So, we are different from AIs because we are different from AIs. It's obvious when you put it that way !

Comment author: SaidAchmiz 05 August 2013 02:33:05AM 1 point [-]

I completely agree that Searle believes in magic (aka "intentionality"), which is not useful. But this does not mean the Chinese Room problem isn't real.

I agree that Searle believes in magic, but "intentionality" is not magic (see: almost anything Dennett has written).

When you study human language use empirically in natural contexts (through frame-by-frame analysis of video recordings), it turns out that what we think we do with language and what we actually do are rather divergent. The body and places in the world and other agents in the interaction all play a much bigger role in the real-time construction of meaning than you would expect from introspection.

This sounds interesting. Could you expand on this?

Comment author: telms 05 August 2013 04:47:08AM 1 point [-]

A list of references can be found in an earlier post in this thread.

Comment author: TheOtherDave 05 August 2013 03:19:09AM 3 points [-]

Well, I certainly agree that there are important aspects of human languages that come out of our experience of being embodied in particular ways, and that without some sort of model that embeds the results of that kind of experience we're not going to get very far in automating the understanding of human language.

But it sounds like you're suggesting that it's not possible to construct such a model within a "disembodied" algorithmic system, and I'm not sure why that should be true.

Then again, I'm not really sure what precisely is meant here by "disembodied algorithmic system" or "ROBOT".

For example, is a computer executing a software emulation of a humanoid body interacting with an emulated physical environment a disembodied algorithmic system, or an AI ROBOT (or neither, or both, or it depends on something)? How would I tell, for a given computer, which kind of thing it was (if either)?

Comment author: telms 05 August 2013 04:23:16AM *  0 points [-]

Is a computer executing a software emulation of a humanoid body interacting with an emulated physical environment a disembodied algorithmic system, or an AI ROBOT (or neither, or both, or it depends on something)?

An emulated body in an emulated environment is a disembodied algorithmic system in my terminology. The classic example is Terry Winograd's SHRDLU, which made significant advances in machine language understanding by adding an emulated body (arm) and an emulated world (a cartoon blocks world, but nevertheless a world that could be manipulated) to text-oriented language processing algorithms. However, Winograd himself concluded that language understanding algorithms plus emulated bodies plus emulated worlds aren't sufficient to achieve natural language understanding.

Every emulation necessarily makes simplifying assumptions about both the world and the body that are subject to errors, bugs, and munchkin effects. A physical robot body, on the other hand, is constrained by real-world physics to that which can be built. And the interaction of a physical body with a physical environment necessarily complies with that which can actually happen in the real world. You don't have to know everything about the world in advance, as you would for a realistic world emulation. With a robot body in a physical environment, the world acts as its own model and constrains the universe of computation to a tractable size.

The other thing you get from a physical robot body is the implicit analog computation tools that come with it. A robot arm can be used as a ruler, for example. The torque on a motor can be used as an analog for effort. On these analog systems, world-grounded metaphors can be created using symbolic labels that point to (among other things) the arm-ruler or torque-effort systems. These metaphors can serve as the terminal point of a recursive meaning builder -- and the physics of the world ensures that the results are good enough models of reality for communication to succeed or for thinking to be assessed for truth-with-a-small-t.

Comment author: Bugmaster 05 August 2013 05:03:15AM 3 points [-]

However, Winograd himself concluded that language understanding algorithms plus emulated bodies plus emulated worlds aren't sufficient to achieve natural language understanding.

Ok, but is this the correct conclusion ? It's pretty obvious that a SHRDLU-style simulation is not sufficient to achieve natural language understanding, but can you generalize that to saying that no conceivable simulation is sufficient ? As far as I can tell, you would make such a generalization because,

Every emulation necessarily makes simplifying assumptions about both the world and the body that are subject to errors, bugs, and munchkin effects.

While this is true, it is also true that our human senses cannot fully perceive the reality around us with infinite fidelity. A child who is still learning his native tongue can't tell a rock that is 5cm in diameter from a rock that's 5.000001cm in diameter. This would lead me to believe that your simulation does not need 7 significant figures of precision in order to produce a language-speaking mind.

In fact, a colorblind child can't tell a red-colored ball from a green-colored ball, and yet colorblind adults can speak a variety of languages, so it's possible that your simulation could be monochrome and still achieve the desired result.

Comment author: TheOtherDave 05 August 2013 03:21:19PM *  5 points [-]

OK, thanks for clarifying.

I certainly agree that a physical robot body is subject to constraints that an emulated body may not be subject to; it is possible to design an emulated body that we are unable to build, or even a body that cannot be built even in principle, or a body that interacts with its environment in ways that can't happen in the real world.

And I similarly agree that physical systems demonstrate relationships, like that between torque and effort, which provide data, and that an emulated body doesn't necessarily demonstrate the same relationships that a robot body does (or even that it can in principle). And those aren't unrelated, of course; it's precisely the constraints on the system that cause certain parts of that system to vary in correlated ways.

And I agree that a robot body is automatically subject to those constraints, whereas if I want to build an emulated software body that is subject to the same constraints that a particular robot body would be subject to, I need to know a lot more.

Of course, a robot body is not subject to the same constraints that a human body is subject to, any more than an emulated software body is; to the extent that a shared ability to understand language depends on a shared set of constraints, rather than on simply having some constraints, a robot can't understand human language until it is physically equivalent to a human. (Similar reasoning tells us that paraplegics don't understand language the same way as people with legs do.)

And if understanding one another's language doesn't depend on a shared set of constraints, such that a human with two legs, a human with no legs, and a not-perfectly-humanlike robot can all communicate with one another, it may turn out that an emulated software body can communicate with all three of them.

The latter seems more likely to me, but ultimately it's an empirical question.

Comment author: telms 07 August 2013 05:45:29AM *  -1 points [-]

You make a very important point that I would like to emphasize: incommensurate bodies very likely will lead to misunderstanding. It's not just a matter of shared or disjunct body isomorphism. It's also a matter of embodied interaction in a real world.

Let's take the very fundamental function of pointing. Every human language is rife with words called deictics that anchor the flow of utterance to specific pieces of the immediate environment. English examples are words like "this", "that", "near", "far", "soon", "late", the positional prepositions, pronominals like "me" and "you" -- the meaning of these terms is grounded dynamically by the speakers and hearers in the time and place of utterance, the placement and salience of surrounding objects and structures, and the particular speaker and hearers and overhearers of the utterance. Human pointing -- with the fingers, hands, eyes, chin, head tilt, elbow, whatever -- has been shown to perform much the same functions as deictic speech in utterance. (See the work of Sotaro Kita if you're interested in the data). A robot with no mechanism for pointing and no sensory apparatus for detecting the pointing gestures of human agents in its environment will misunderstand a great deal and will not be able to communicate fluently.

Then there are the cultural conventions that regulate pointing words and gestures alike. For example, spatial meanings tend to be either speaker-relative or landmark-relative or absolute (that is, embedded in a spatial frame of cardinal directions) in a given culture, and whichever of these options the culture chooses is used in both physical pointing and linguistic pointing through deictics. A robot with no cultural reference won't be able to disambiguate "there" (relative to me here now) versus "there" (relative to the river/mountain/rising sun), even if physical pointing is integrated into the attempt to figure out what "there" is. And the problem may not be detected due to the double illusion of transparency.

This gets even more complicated when the world of discourse shifts from the immediate environment to other places, other times, or abstract ideas. People don't stop inhabiting the real world when they talk about abstract ideas. And what you see in conversation videos is people mapping the world of discourse metaphorically to physical locations or objects in their immediate environment. The space behind me becomes yesterday's events and the space beyond my reach in front of me becomes tomorrow's plan. Or I always point to the left when I'm talking about George and to the right when I'm talking about Fred.

This is all very much an empirical question, as you say. I guess my point is that the data has been accumulating for several decades now that embodiment matters a great deal. Where and how it matters is just beginning to be sorted out.

Comment author: SaidAchmiz 07 August 2013 06:05:51AM 1 point [-]

Let's take the very fundamental function of pointing. Every human language is rife with words called deictics that anchor the flow of utterance to specific pieces of the immediate environment. English examples are words like "this", "that", "near", "far", "soon", "late", the positional prepositions, pronominals like "me" and "you" -- the meaning of these terms is grounded dynamically by the speakers and hearers in the time and place of utterance, the placement and salience of surrounding objects and structures, and the particular speaker and hearers and overhearers of the utterance. Human pointing -- with the fingers, hands, eyes, chin, head tilt, elbow, whatever -- has been shown to perform much the same functions as deictic speech in utterance. (See the work of Sotaro Kita if you're interested in the data). A robot with no mechanism for pointing and no sensory apparatus for detecting the pointing gestures of human agents in its environment will misunderstand a great deal and will not be able to communicate fluently.

Are you really claiming that ability to understand the very concept of indexicality, and concepts like "soon", "late", "far", etc., relies on humanlike fingers? That seems like an extraordinary claim, to put it lightly.

Also:

A robot with no mechanism for pointing and no sensory apparatus for detecting the pointing gestures of human agents in its environment will misunderstand a great deal and will not be able to communicate fluently.

"Detecting pointing gestures" would be the function of a perception algorithm, not a sensory apparatus (unless what you mean is "a robot with no ability to perceive positions/orientations/etc. of objects in its environment", which... wouldn't be very useful). So it's a matter of what we do with sense data, not what sorts of body we have; that is, software, not hardware.
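To make the software-not-hardware point concrete, here is a minimal sketch (all names and geometry are invented for illustration): once any sensor delivers 3D joint positions, deciding what a pointing gesture indicates is plain computation over that sense data, regardless of what body, if any, the perceiver has.

```python
from math import sqrt

def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pointed_at(shoulder, wrist, objects):
    """Return the name of the object nearest the ray from shoulder
    through wrist. The joint positions could come from any sensor
    (camera, lidar, motion capture); the 'pointing detection' itself
    is pure computation over that sense data."""
    ray = _sub(wrist, shoulder)
    best_name, best_cos = None, -2.0
    for name, pos in objects.items():
        to_obj = _sub(pos, wrist)
        cos = _dot(ray, to_obj) / (sqrt(_dot(ray, ray)) * sqrt(_dot(to_obj, to_obj)))
        if cos > best_cos:
            best_name, best_cos = name, cos
    return best_name

objects = {"ball": (2.0, 0.0, 0.0), "cup": (0.0, 2.0, 0.0)}
# An arm extended along the x-axis points at the ball:
# pointed_at((0, 0, 0), (0.5, 0, 0), objects)
```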

More generally, a lot of what you're saying (and — this is my very tentative impression — a lot of the ideas of embodied cognition in general) seems to be based on an idea that we might create some general-intelligent AI or robot, but have it start at some "undeveloped" state and then proceed to "learn" or "evolve", gathering concepts about the world, growing in understanding, until it achieves some desired level of intellectual development. The concern then arises that without the kind of embodiment that we humans enjoy, this AI will not develop the concepts necessary for it to understand us and vice versa.

Ok. But is anyone working in AI these days actually suggesting that this is how we should go about doing things? Is everyone working in AI these days suggesting that? Isn't this entire line of reasoning inapplicable to whole broad swaths of possible approaches to AI design?

P.S. What does "there, relative to the river" mean?

Comment author: telms 07 August 2013 06:56:23AM *  -1 points [-]

Are you really claiming that ability to understand the very concept of indexicality, and concepts like "soon", "late", "far", etc., relies on humanlike fingers? That seems like an extraordinary claim, to put it lightly.

Yeah, I am advancing the hypothesis that, in humans, the comprehension of indexicality relies on embodied pointing at its core -- though not just with fingers, which are not universally used for pointing in all human cultures. Sotaro Kita has the most data on this subject for language, but the embodied basis of mathematics is discussed in Where Mathematics Comes From, by George Lakoff and Rafael Nunez. Whether all possible minds must rely on such a mechanism, I couldn't possibly guess. But I am persuaded humans do (a lot of) it with their bodies.

What does "there, relative to the river" mean?

In most European cultures, we use speaker-relative deictics. If I point to the southeast while facing south and say "there", I mean "generally to my front and left". But if I turn around and face north, I will point to the northwest and say "there" to mean the same thing, i.e., "generally to my front and left." The fact that the physical direction of my pointing gesture is different is irrelevant in English; it's my body position that's used as a landmark for finding the target of "there". (Unless I'm pointing at something in particular here and now, of course; in which case the target of the pointing action becomes its own landmark.)

In a number of Native American languages, the pointing is always to a cardinal direction. If the orientation of my body changes when I say "there", I might point over my shoulder rather than to my front and left. The landmark for finding the target of "there" is a direction relative to the trajectory of the sun.

But many cultures use a dominant feature of the landscape, like the Amazon or the Mississippi or the Nile rivers, or a major mountain range like the Rockies, or a sacred city like Mecca, as the orientation landmark, and in some cultures this gets encoded in the deictics of the language and the conventions for pointing. "Up" might not mean up vertically, but rather "upriver", while "down" would be "downriver". In a steep river valley in New Guinea, "down" could mean "toward the river" and "up" could mean "away from the river". And "here" could mean "at the river" while "there" could mean "not at the river".
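The contrast between speaker-relative and absolute frames of reference can be made precise in a toy sketch (the compass convention and 45-degree offset are my own assumptions, not telms's data): a speaker-relative "there" rotates with the body, while an absolute bearing does not.

```python
from math import cos, radians, sin

def relative_there(heading_deg, offset_deg=-45.0, dist=1.0):
    """Speaker-relative 'there' (front-and-left): rotates with the body.
    Compass convention: 0 deg = north = +y, 90 deg = east = +x."""
    ang = radians(heading_deg + offset_deg)
    return (round(dist * sin(ang), 3), round(dist * cos(ang), 3))

def absolute_there(bearing_deg=315.0, dist=1.0):
    """Absolute 'there' (here: fixed at northwest): ignores body orientation."""
    ang = radians(bearing_deg)
    return (round(dist * sin(ang), 3), round(dist * cos(ang), 3))

# Facing south (180 deg), the relative target lies to the southeast;
# facing north (0 deg), it happens to coincide with the absolute
# northwest target. The absolute target never moves.
```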

The cultural variability and place-specificity of language were not widely known to Western linguists until about ten years ago. For a long time, it was assumed that person-relative orientation was a biological constraint on meaning. This turns out to be not quite accurate. So I guess I should be more nuanced in the way I present the notion of embodied cognition. How's this: "Embodied action in the world with a cultural twist on top" is the grounding point at the bottom of the symbol expansion for human meanings, linguistic and otherwise.

Comment author: SaidAchmiz 07 August 2013 01:39:19PM 1 point [-]

Yeah, I am advancing the hypothesis that, in humans, the comprehension of indexicality relies on embodied pointing at its core [...] Whether all possible minds must rely on such a mechanism, I couldn't possibly guess. But I am persuaded humans do (a lot of) it with their bodies.

But wait; whether all possible minds must rely on such a mechanism is the entire question at hand! Humans implement this feature in some particular way? Fine; but this thread started by discussing what AIs and robots must do to implement the same feature. If implementation-specific details in humans don't tell us anything interesting about implementation constraints in other minds, especially artificial minds which we are in theory free to place anywhere in mind design space, then the entire topic is almost completely irrelevant to an AI discussion (except possibly as an example of "well, here is one way you could do it").

In most European cultures, we use speaker-relative deictics. If I point to the southeast while facing south and say "there", I mean "generally to my front and left". But if I turn around and face north, I will point to the northwest and say "there" to mean the same thing, ie, "generally to my front and left."

Er, what? I thought I was a member of a European culture, but I don't think this is how I use the word "there". If I point to some direction while facing somewhere, and say "there", I mean... "in the direction I am pointing".

The only situation when I'd use "there" in the way you describe is if I were describing some scenario involving myself located somewhere other than my current location, such that absolute directions in the story/scenario would not be the same as absolute directions in my current location.

In a steep river valley in New Guinea, "down" could mean "toward the river" and "up" could mean "away from the river". And "here" could mean "at the river" while "there" could mean "not at the river".

If this is accurate, then why on earth would we map this word in this language to the English "there"? It clearly does not remotely resemble how we use the word "there", so this seems to be a case of poor translation rather than an example of cultural differences.

In a number of Native American languages, the pointing is always to a cardinal direction. [...] The cultural variability and place-specificity of language was not widely known to Western linguists until about ten years ago. For a long time, it was assumed that person-relative orientation was a biological constraint on meaning.

Yeah, actually, this research I was aware of. As I recall, the Native Americans in question had some difficulty understanding the Westerners' concepts of speaker-relative indexicals. But note: if we can have such different concepts of indexicality, despite sharing the same pointing digits and whatnot... it seems premature, at best, to suggest that said hardware plays such a key role in our concept formation, much less in the possibility of having such concepts at all.

How's this: "Embodied action in the world with a cultural twist on top" is the grounding point at the bottom of the symbol expansion for human meanings, linguistic and otherwise.

Ultimately, the interesting aspect of this entire discussion (imo, of course) is what these human-specific implementation details can tell us about other parts of mind design space. I remain skeptical that the answer is anything other than "not much". (Incidentally, if you know of papers/books that address this aspect specifically, I would be interested.)

Comment author: Bugmaster 08 August 2013 09:13:34PM 3 points [-]

If the orientation of my body changes when I say "there", I might point over my shoulder rather than to my front and left.

I was able to follow this explanation (as well as the rest of your post) without seeing your physical body in any way. In addition, I suspect that, while you were typing your paragraph, you weren't physically pointing at things. The fact that we can do this looks to me like evidence against your main thesis.

Comment author: telms 11 August 2013 04:48:29AM *  -1 points [-]

I was able to follow this explanation (as well as the rest of your post) without seeing your physical body in any way. ... The fact that we can do this looks to me like evidence against your main thesis.

Ah, but you're assuming that this particular interaction stands on its own. I'll bet you were able to visualize the described gestures just fine by invoking memories of past interactions with bodies in the world.

Two points. First, I don't contest the existence of verbal labels that merely refer -- or even just register as being invoked without referring at all. As long as some labels are directly grounded to body/world, or refer to other labels that do get grounded in the body/world historically, we generally get by in routine situations. And all cultures have error detection and repair norms for conversation so that we can usually recover without social disaster.

However, the fact that verbal labels can be used without grounding them in the body/world is a problem. It is frequently the case that speakers and hearers alike don't bother to connect words to reality, and this is a major source of misunderstanding, error, and nonsense. In our own case here and now, we are actually failing to understand each other fully because I can't show you actual videotapes of what I'm talking about. You are rightly skeptical because words alone aren't good enough evidence. And that is itself evidence.

Second, humans have a developmental trajectory and history, and memories of that history. We're a time-binding animal in Korzybski's terminology. I would suggest that an enculturated adult native speaker of a language will have what amount to "muscle memory" tics that can be invoked as needed to create referents. Mere memory of a motion or a perception is probably sufficient.

"Oh, look, it's an invisible gesture!" is not at all convincing, I realize, so let me summarize several lines of evidence for it.

Developmentally, there's quite a lot of research on language acquisition in infants and young children that suggests shared attention management -- through indexical pointing, and shared gaze, and physical coercion of the body, and noises that trigger attention shift -- is a critical building block for constructing "aboutness" in human language. We also start out with some shared, built-in cries and facial expressions linked to emotional states. At this level of development, communication largely fails unless there is a lot of embodied scaffolding for the interaction, much of it provided by the caregiver but a large part of it provided by the physical context of the interaction. There is also some evidence from the gestural communication of apes that attests to the importance of embodied attention management in communication.

Also, co-speech gesture turns out to be a human universal. Congenitally blind children do it, having never seen gesture by anyone else. Congenitally deaf children who spend time in groups together will invent entire gestural languages complete with formal syntax, as recently happened in Nicaragua. And adults speaking on the telephone will gesture even knowing they cannot be seen. Granted, people gesture in private at a significantly lower rate than they do face-to-face, but the fact that they do it at all is a bit of a puzzle, since the gestures can't be serving a communicative function in these contexts. Does the gesturing help the speakers actually think, or at least make meaning more clear to themselves? Susan Goldin-Meadow and her colleagues think so.

We also know from video conversation data that adults spontaneously invent new gestures all the time in conversation, then reuse them. Interestingly, though, each reuse becomes more attenuated, simplified, and stylized with repetition. Similar effects are seen in the development of sign languages and in written scripts.

But just how embodied can a label be when gesture (and other embodied experience) is just a memory, and is so internalized that it is externally invisible? This has actually been tested experimentally. The Stroop effect has been known for decades, for example: when the word "red" is presented in blue text, it is read or acted on more slowly than when the word "red" is presented in red text -- or in socially neutral black text. That's on the embodied perception side of things. But more recent psychophysical experiments have demonstrated a similar psychomotor Stroop-like effect when spatial and motion stimulus sentences are semantically congruent with the direction of the required response action. This effect holds even for metaphorical words like "give", which tests as motor-congruent with motion away from oneself, and "take", which tests as motor-congruent with motion toward oneself.

I understand how counterintuitive this stuff can be when you first encounter it -- especially to intelligent folks who work with codes or words or models a great deal. I expect the two of us will never reach a consensus on this without looking at a lot of original data -- and who has the time to analyze all the data that exists on all the interesting problems in the world? I'd be pleased if you could just note for future reference that a body of empirical evidence exists for the claim. That's all.

Comment author: Bugmaster 11 August 2013 05:20:13AM *  2 points [-]

In our own case here and now, we are actually failing to understand each other fully because I can't show you actual videotapes of what I'm talking about.

What do you mean by "fully" ? I believe I understand you well enough for all practical purposes. I don't agree with you, but agreement and understanding are two different things.

First, I don't contest the existence of verbal labels that merely refer -- or even just register as being invoked without refering at all.

I'm not sure what you mean by "merely refer", but keep in mind that we humans are able to communicate concepts which have no physical analogues that would be immediately accessible to our senses. For example, we can talk about things like "O(N)", or "ribosome", or "a^n + b^n = c^n". We can also talk about entirely imaginary worlds, such as the world where Mario, the turtle-crushing plumber, lives. And we can do this without having any "physical context" for the interaction, too.

All that is beside the point, however. In the rest of your post, you bring up a lot of evidence in support of your model of human development. That's great, but your original claim was that any type of intelligence at all will require a physical body in order to develop; and nothing you've said so far is relevant to this claim. True, human intelligence is the only kind we know of so far, but then, at one point birds and insects were the only self-propelled flyers in existence -- and that's not the case anymore.

Furthermore, you also claimed that no simulation, no matter how realistic, will serve to replace the physical world for the purposes of human development, and I'm still not convinced that this is true, either. As I'd said before, we humans do not have perfect senses; if physical coordinates of real objects were snapped to a 0.01mm grid, no human child would ever notice. And in fact, there are plenty of humans who grow up and develop language just fine without the ability to see colors, or to move some of their limbs in order to point at things.
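The grid-snapping thought experiment is trivial to write down (a toy sketch; the 0.01mm figure is just the example above, and whether a child would notice the snapping is the hypothesis under discussion, not something the code proves):

```python
def snap_to_grid(x_mm, grid_mm=0.01):
    """Snap a coordinate to the nearest point on a regular grid,
    discarding all detail finer than the grid spacing."""
    return round(x_mm / grid_mm) * grid_mm

# Sub-grid detail vanishes: two positions less than one grid step
# apart (e.g. 5.0031mm and 5.0049mm) land on the same grid point.
```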

Just to drive the point home: even if I granted all of your arguments regarding humans, you would still need to demonstrate that human intelligence is the only possible kind of intelligence; that growing up in a human body is the only possible way to develop human intelligence; and that no simulation could in principle suffice, and the body must be physical. These are all very strong claims, and so far you have provided no evidence for any of them.

Comment author: RichardKennaway 07 August 2013 10:45:36AM 5 points [-]

A robot with no mechanism for pointing and no sensory apparatus for detecting the pointing gestures of human agents in its environment will misunderstand a great deal and will not be able to communicate fluently.

If I am talking to you on the telephone, I have no mechanism for pointing and no sensory apparatus for detecting your pointing gestures, yet we can communicate just fine.

The whole embodied cognition thing is a massive, elementary mistake as bad as all the ones that Eliezer has analysed in the Sequences. It's an instant fail.

Comment author: Estarlio 07 August 2013 12:46:45PM 0 points [-]

Are misunderstandings more common over the telephone for things like negotiation?

Comment author: RichardKennaway 07 August 2013 01:47:17PM 1 point [-]

I don't know, but I doubt that the communication medium makes much difference beyond the individual skills of the people using it. People can use multiple modalities to communicate, and in a situation where some are missing, one varies one's use of the others to accomplish the goal.

In adversarial negotiations one might even find it an advantage not to be seen, to avoid accidentally revealing things one wishes to keep secret. Of course, that applies to both parties, and it will come down to a matter of who is more skilled at using the means available.

People even manage to communicate in writing!

Comment author: SaidAchmiz 07 August 2013 06:06:58PM 4 points [-]

The whole embodied cognition thing is a massive, elementary mistake as bad as all the ones that Eliezer has analysed in the Sequences. It's an instant fail.

Can you expand on this just a bit? I am leaning, slowly, in the same direction, and I'd like a bit of a sanity check on this claim.

Comment author: RichardKennaway 08 August 2013 01:21:44PM 8 points [-]

Firstly, I have no problem with the "embodied cognition" idea so far as it relates to human beings (or animals, for that matter). Yes, people think also with their bodies, store memories in the environment, point at things, and so on. This seems to me both true and unremarkable. So unremarkable as to hardly be worth the amount of thought that apparently goes into it. While it may be interesting to trace out all the ways in which it happens, I see no philosophical importance in the details.

Where it goes wrong is the application to AGI that says that because people do this, it is an essential part of how an intelligence of any sort must operate, and therefore a man-made intelligent machine must be given a body. The argument mistakes a superficial fact about observed intelligences for a fact about the mechanism whereby an intelligence of any sort must operate. There is a large and expanding body of work on making ever more elaborate robot puppets like the Nao, explicitly following a research programme of developing "embodied cognition".

I cannot see these projects as being of any interest. I would be a lot more interested in seeing someone build a human-sized robot that can run unsupported on two legs (Boston Dynamics' ATLAS is getting there), especially if it can run faster than a man while carrying a full military pack and isn't tethered to a power cable (not yet done). However, nothing like that is a prerequisite to AGI. I do hold a personal opinion, which I'm not going to argue for here, that if someone developed a simple method of solving the control problems of an all-terrain running robot, they might get from that some insight into how to get farther, such as an all-terrain running robot that can hunt down humans trying to avoid it. Of course, the Unfriendly directions in which that might lead are obvious, as are the military motivations for building such machines, or inviting people to come up with designs. Of course, these powers will only be used for Good.

Since the embodied approach has been around in strength since the 1980s, and can be found in Turing in 1950, I think it fair to say that if it worked beyond the toy projects that AGI attempts always produce, we would have seen it by now.

The deaf communicate without sound, the blind without sight, and the limbless without pointing hands. On the internet people communicate without any of these. It doesn't seem to hold anyone up, except in the mere matter of speed in the case of Stephen Hawking communicating by twitching cheek muscles.

Ah, no, the magic ingredient must be society! Cognition always takes place within society. Feral children are developmentally disabled for want of society. The evidence is clear: we must develop societies of AIs before they can be intelligent.

No, it's language they must have! AGIs cognition must be based on a language. So if we design the perfect language, AGI will be a snap.

No, it's upbringing they must have! So we'll design a robot to be initially like a newborn baby and teach it through experience!

No, it's....

No. The general form of all these arguments is broken.

Comment author: Document 08 August 2013 07:54:21PM *  1 point [-]

Since the embodied approach has been around in strength since the 1980s, and can be found in Turing in 1950, I think it fair to say that if it worked beyond the toy projects that AGI attempts always produce, we would have seen it by now.

This is where you lose me. Isn't that an equally effective argument against AGI in general?

Comment author: Bugmaster 08 August 2013 09:06:41PM 1 point [-]

I took RichardKennaway's post to mean something like the following:

"Birds fly by flapping their wings, but that's not the only way to fly; we have built airplanes, dirigibles and rockets that fly differently. Humans acquire intelligence (and language) by interacting with their physical environment using a specific set of sensors and effectors, but that's not the only way to acquire intelligence. Tomorrow, we may build an AI that does so differently."

Comment author: RichardKennaway 09 August 2013 12:37:33PM *  1 point [-]

Isn't that an equally effective argument against AGI in general?

"AGI in general" is a thing of unlimited broadness, about which lack of success so far implies nothing more than lack of success so far. Cf. flying machines, which weren't made until they were. Embodied cognition, on the other hand, is a definite thing, a specific approach that is at least 30 years old, and I don't think it's even made a contribution to narrow AI yet. It is only mentioned in Russell and Norvig in their concluding section on the philosophy of Strong AI, not in any of the practical chapters.

Comment author: Kawoomba 09 August 2013 03:27:53PM 0 points [-]

The "magic ingredient" may be a bridging of intuitions: an embodied AI which you can more naturally interact with offers more intuitive metrics for progress; milestones which can be used to attract funding since they make more sense intuitively.

Obviously you can build an AGI using only Lego bricks. And you can build an AGI "purely" as software (i.e. with variable hardware substrates). The steelman for pursuing embodied cognition would not be "embodiment is strictly necessary to build AGIs" (boring!), but that "given humans with a goal of building an AGI, going the embodiment route may be a viable approach".

I well remember that early morning in the CS lab, the better part of a decade ago, when I stumbled -- still half asleep -- into a sideroom to turn on the lights, only to stare into the eye of Eccerobot (in an earlier incarnation), which was visiting our lab. Shudder.

I used to joke that my goal in life would be to build the successor creature, and to be judged by it (humankind and me both). To be judged and to be found unworthy in its (in this case single) eye, and to be smitten. After all, what better emotional proof to have created something of worth is there than your creation judging you to be unworthy? Take my atoms, Adambot!

Comment author: TheOtherDave 07 August 2013 06:56:40PM *  1 point [-]

Sure, I agree that we make use of all kinds of contextual cues to interpret speech, and a system lacking awareness of that context will have trouble interpreting speech. For example, if I say "Do you like that?" to Sam, when Sam can't see the thing I'm gesturing to indicate or doesn't share the cultural context that lets them interpret that gesture, Sam won't be able to interpret or engage with me successfully. Absolutely agreed. And this applies to all kinds of things, including (as you say) but hardly limited to pointing.

And, sure, the system may not even be aware of that trouble... illusions of transparency abound. Sam might go along secure in the belief that they know what I'm asking about and be completely wrong. Absolutely agreed.

And sure, I agree that we rely heavily on physical metaphors when discussing abstract ideas, and that a system incapable of processing my metaphors will have difficulty engaging with me successfully. Absolutely agreed.

All of that said, what I have trouble with is your apparent insistence that only a humanoid system is capable of perceiving or interpreting human contextual cues, metaphors, etc. That doesn't seem likely to me at all, any more than it seems likely that a blind person (or one on the other end of a text-only link) is incapable of understanding human speech.

Comment author: Mitchell_Porter 07 August 2013 07:55:30AM 4 points [-]

The chief deficiency of embodiment philosophy-of-mind, at least among AIers and cognitivists, is that they constantly say "embodiment" when they should say "experience of embodiment". And when you put it that way, most of the magic leaches away and you're left facing the same old hard problem of consciousness. Meaning, understanding, intentionality are all aspects of consciousness. And various studies can show that body awareness is surprisingly important in the genesis and constitution of those things. But just having a material object governed by a hierarchy of feedback loops does not explain why there should be anyone home in that object - why there should be any form of awareness in, or around, or otherwise associated with that object.

Comment author: Bugmaster 09 August 2013 12:12:00AM *  2 points [-]

I sort of agree with you: if the "hard problem of consciousness" is indeed a coherent problem that needs to be solved, then what you say makes perfect sense. But I am not convinced that it's a problem worth solving. I don't care whether Mitchell_Porter is an entity that really, truly experiences consciousness, or whether it's only a "material object governed by a hierarchy of feedback loops", so long as Mitchell_Porter has interesting things to say, and can hold up his/her/its own end of the conversation.

Is there any reason why I should care?

Comment author: Mitchell_Porter 09 August 2013 02:11:44AM 1 point [-]

Let's distinguish between superficial and fundamental ignorance. If you flip a coin, you may not know which way it came up until you look. This typifies what I will call superficial ignorance. The mechanics of a flat disk of metal, sent spinning in a certain way, is not an especially mysterious subject. Your ignorance of whether the coin shows head or tails does not imply ignorance of the essence of what just happened.

Fundamental ignorance is where you really don't know what's going on. The sun goes up and down in the sky and you don't know why, for a third of each day you're in some other reality where you don't remember the usual one, and so on. The situation with respect to consciousness is in this category.

It could be argued that you should care about any instance of fundamental ignorance, because its implications are unknown in a way that the implications of superficial ignorance are not. Who knows what further wonderful, terrible, or important facts it obscures? Then again, it could be argued that there's fundamental ignorance beneath every instance of superficial ignorance. Consider the spinning coin: we have a physical mechanics that can describe its motion; but why does that mechanics work?

Conversely, in the case of consciousness, there's an argument for complacency: I may not understand why brains are conscious, but human beings pretty consistently act in the ways that I tentatively regard as indicative of consciousness, and (I could say) in my dealing with them, it's how they behave which matters.

There are a few further reasons why someone may end up caring whether other people/beings are truly conscious or not. One is morality. I may consider it important to know (if only I could know), whether they really are happy or suffering, or whether they are just automata pantomiming the behaviors of happiness and suffering. Another is intellectual curiosity. Perhaps you just decide that you want to know, not because of the argument from the unknown significance of fundamental ignorance, but on a whim, or because of the cool satisfaction of grasping something abstract.

But perhaps the number-one reason that someone from this community should want to know, is that many people here anticipate that they personally will undergo transformations such as mind uploading. If you at least value your own consciousness, and not just your behaviors, then you have an interest in understanding whether a given transformation preserves consciousness or not.

Comment author: AndHisHorse 09 August 2013 02:40:05AM 0 points [-]

Would it be appropriate to say that superficial ignorance is factual (one does not know the particular inputs to the equations which govern the coin's movement) where fundamental ignorance is conceptual (one does not have a concept that the coin is governed by equations of motion)?

Comment author: Mitchell_Porter 13 August 2013 11:58:43AM 0 points [-]

I don't know.

Comment author: Bugmaster 09 August 2013 03:07:50AM *  2 points [-]

I think that you are unintentionally conflating two very different questions:

1). What is the mechanism that causes us to perceive certain entities, including humans, as possessing consciousness ?
2). Let's assume that there's a hidden factor, called "consciousness", that is sufficient but not necessary to cause us to perceive humans as being conscious. How can we test for the presence or absence of this factor ?

Answering (2) may help you answer (1), but (2) is unanswerable if the assumption you are making in it is wrong.

I personally see no reason to postulate the presence of some hidden, undetectable factor that causes humans to be conscious. I would love to know exactly how human brains produce the phenomenon we perceive as "consciousness", but I'm not convinced that such a feature could only have a single possible implementation.

This is indeed important with respect to morality:

I may consider it important to know (if only I could know), whether they really are happy or suffering, or whether they are just automata pantomiming the behaviors of happiness and suffering.

If the presence of consciousness is unfalsifiable, then you can't know, and you're obligated to treat all entities that appear to be happy or suffering equally (for the purposes of making your moral decisions, that is). On the other hand, if the presence of consciousness is falsifiable, then tell me how I can falsify it. If you hand-wave the answer by saying, "oh, it's a hard problem", then you don't have a useful model, you've got something akin to Vitalism. It'd be like saying,

"Some suns are powered by fusion, and others are powered by undetectable sun-goblins that make it look like the sun is powered by fusion. Our own sun is powered by goblins. You can't ever detect them, but trust me, they're there".

Comment author: [deleted] 11 August 2013 03:33:36PM -1 points [-]

You defect in the Prisoner's Dilemma against a rock with “defect” written on it, defect in the PD against a rock with “cooperate” written on it, and cooperate in the PD against a copy of yourself. So, if you're ever playing PD against Mitchell_Porter, you want to know whether he's more like a rock or like yourself.
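To make the decision rule concrete, here is a minimal Python sketch; the payoff numbers are the standard illustrative Prisoner's Dilemma values, not anything specified in the comment:

```python
# Payoffs as (your score, their score); C = cooperate, D = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def best_response(opponent_move_given_yours):
    """Pick the move maximizing your payoff, given a model of how the
    opponent's move depends on yours: a rock ignores your move, while
    a copy of you mirrors it."""
    return max("CD", key=lambda my: PAYOFF[(my, opponent_move_given_yours(my))][0])

rock_defect = lambda my: "D"      # a rock with "defect" written on it
rock_cooperate = lambda my: "C"   # a rock with "cooperate" written on it
mirror = lambda my: my            # a copy of yourself

print(best_response(rock_defect))     # D
print(best_response(rock_cooperate))  # D
print(best_response(mirror))          # C
```

The sketch illustrates the comment's point: the right move depends entirely on which of these models describes your opponent.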

Comment author: Bugmaster 12 August 2013 12:14:52AM 1 point [-]

Right, but in order to figure out whether to cooperate with or defect against Mitchell_Porter, all I need to know is what strategy he is most likely to pursue. I don't need to know whether he's a "material object governed by a hierarchy of feedback loops" or a biological human possessed of "consciousness" or an animatronic garden gnome; I just need to know enough to find out which button he'll press.

Comment author: peirce 05 August 2013 06:58:26PM 4 points [-]

Hi, I first found this site a while back after googling something like "how to not procrastinate" and finding one of Eliezer's articles. I've been slowly working my way through the sequences ever since, and I think they are significantly changing my life.

I'm very interested in self-improvement/instrumental rationality type stuff. I've been using this summer to experiment with various projects: learning meditation, learning about different types of therapy to systematically overcome fears, learning about biases, and some other stuff. I'm currently messing around with a productivity/organisation system whereby I allocate points to myself for good behaviours and deduct points for bad behaviours, and either give myself a reward or pay a penalty as part of a commitment contract, depending on how many points I've scored (sometimes my self-improvement ideas get a bit obsessive...)

I've just finished secondary education, which was a mess, and so I'm now quite excited to have more control over my own learning. I've been very interested in rationality since I was young, and have been passionate about philosophy because of this. Though, after getting into this site, I've been reading some pretty damaging criticisms of the study of philosophy (at least traditional philosophy and the content that seems to be taught in most universities), and now I'm beginning to question whether I'm really interested in philosophy and whether it is valuable to study, or whether what I'm really after is something more like cognitive science.

This leads me to a problem: I've been offered a place at Oxford University for a course in Philosophy and Psychology, and I'm considering trying to change to just study psychology, or psychology and linguistics. I'm in the process of familiarizing myself with the basics of all of these fields, and I'm writing letters to my old philosophy teachers with this article (http://www.paulgraham.com/philosophy.html) attached to see how well the criticism can be answered. My problem, though, is that I'm at best a knowledgeable amateur in these subjects, and I'm finding it hard to make a decision about which subjects to study - I don't know what I haven't studied yet, so I don't know how important it is for me to know. Any advice on this, or on how to make the decision generally, would be much appreciated, especially if you are familiar with the UK university system, and especially if you have studied philosophy. My overall aim for my education is pretty well expressed by parts of Less Wrong - I want to become more rational, in both my beliefs and my actions (although I find the parts of Less Wrong about epistemology, self-improvement and anti-akrasia more relevant to this than the parts about AI, maths and physics).

Also, I found the solved questions repository, but is there a standard place for problems which people need help solving? If it exists, it may be a better place for parts of this post... Cheers

Comment author: ILikeLogic 10 August 2013 02:01:15AM 4 points [-]

Hi. I'm a 42yr old male, from the US and I've been aware of LessWrong for a few years now, stumbling across links to posts on LessWrong here and there in my web surfing travels. I've always been more or less a rationalist. I've been a self-identified atheist since high school. I've been a fan of Daniel Dennett for many years. I read 'Consciousness Explained' when it first came out many years ago and I've kept up reading interesting philosophy and science books since then. I've always enjoyed books that made sense out of previously mysterious phenomena. My feedly list has hundreds of blogs mostly in nutrition/psychology/economics and some sports (I'm a big sports fan, but prefer an analytical approach to that as well). In essence I'm the type of guy who likes this stuff.

I remember reading on here a few years ago some posts about a rationalist approach to self-help. I'm especially interested in that. I've always been an anxious and insecure person and if I can solve that problem the quality of my life will skyrocket. Having spent a fair amount of time reading the comment threads at LessWrong I'm pretty optimistic that I can find some folks here who are interested in discussing these things in the same way that I am. Frankly I take a much more reductionist approach to personal problems than most others and this seems like a place where I may find some people who may think similarly. Barring that I think I'll just enjoy reading and commenting here every so often.

Comment author: metastable 10 August 2013 04:25:48PM 1 point [-]

I enjoy the analytical side of sports, too. Do you follow sabermetrics and all its many children (e.g. advanced statistics in basketball and hockey), or are you more interested in human performance optimization (powerlifting, HIT, barefoot running, etc.)? If the latter, does that connect to your reductionist approach to personal problems and concern with anxiety?

Comment author: ILikeLogic 11 August 2013 06:14:01AM 0 points [-]

I follow sabermetrics and its children. I was really into Bill James back in the day and still have a subscription to BaseballProspectus.com (this post is half-drunk, so excuse typos please). My two favorite sports are hockey and baseball. Baseball analytics made its biggest advances years ago - now it seems like they are just refining - but hockey is in the initial stages. I've been into possession stats for hockey more than any baseball stats for the past couple of years, although I still wander onto Baseball Prospectus and FanGraphs and read some of the posts every two or three weeks. I'm not a big hoops fan, but I really like the advanced stats they have, and Football Outsiders is great too, although I haven't really gone into depth there. I'm also interested in the performance stuff. I listen to Superhuman Radio regularly; he has really good interviews with scientists on a regular basis.

Comment author: Baisius 11 August 2013 02:22:25AM 7 points [-]

Hi. I'm Baisius. I came here, like most, through HPMOR. I've read a lot of the sequences, and they've helped me reanalyze the things I believe and why I believe them. I've been lurking here for a while, but I've never really felt I had anything to add to the site, content-wise. That's changed, however - I just launched a blog. The blog is generally LW-themed, so I thought it appropriate. I wouldn't ordinarily advertise it, but I would particularly like some help on one of the problems I explored in my first post. (see footnote 3)

One of the things that's bothered me about PredictionBook, and one of the reasons I don't use it much, is that its analysis seems a bit... lacking. In the post, I tried to come up with a rigorous way of comparing sets of predictions to see which are more accurate. I did this by looking at the distribution of residuals (outcome - predicted probability) for a set of predictions. The odd thing was that when I looked at the variance, the inverse of the variance showed some very odd patterns. It's all there in the post, but if anyone who knows a bit more math than I do could explain it, I'd really appreciate it.
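As a concrete illustration of the quantity being described, here is a hypothetical Python sketch, assuming each prediction is recorded as a (stated probability, outcome) pair; the calibrated-forecaster example is illustrative and not taken from the post itself:

```python
import random

def residuals(predictions):
    """predictions is a list of (stated_probability, outcome) pairs,
    where outcome is 1 if the event happened and 0 if it didn't.
    Residual = outcome - stated probability."""
    return [outcome - p for p, outcome in predictions]

def residual_variance(predictions):
    r = residuals(predictions)
    mean = sum(r) / len(r)
    return sum((x - mean) ** 2 for x in r) / len(r)

# For a perfectly calibrated forecaster who always states probability p,
# each residual is 1-p with probability p and -p otherwise, so the
# residual variance should approach p*(1-p).
random.seed(0)
p = 0.7
preds = [(p, 1 if random.random() < p else 0) for _ in range(100_000)]
print(residual_variance(preds))  # close to 0.7 * 0.3 = 0.21
```

One design note: p*(1-p) peaks at p = 0.5 and shrinks toward 0 as predictions become confident, so the variance (and its inverse) of a well-calibrated set depends on which probabilities were stated, not only on forecasting skill; that may bear on the odd patterns mentioned.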

Comment author: Unnamed 11 August 2013 04:57:58AM 1 point [-]

Welcome!

For assessing prediction accuracy, are you familiar with scoring rules?

Comment author: Baisius 11 August 2013 10:59:27AM -1 points [-]

I wasn't, thanks. I'll try to read that sometime when I get a chance. At first glance, though, I'm unsure why you would want it to be logarithmic. I thought about doing it that way too, but then you lose the meaning associated with average error, which I think is undesirable.

Comment author: rocurley 12 August 2013 07:57:56AM 1 point [-]

So, let's say you want a scoring rule with two properties.

You want it to be local: that is to say, all that matters is the probability you assigned to the actual outcome. This is in contrast to rules like the quadratic scoring rule, where your score is different depending on how the outcomes that didn't happen are grouped. Based on this assumption, I'm going to write the scoring rule as S(p), where S(p) is the score you get when you assign a probability p to the true outcome.

You also want it to play nicely with combining separate events. That is to say, if you estimate 10% of it being cloudy when it actually is, and 10% of it being warm outside when it actually is, you want your score to be the same as if you had assigned 1% to the correct proposition that it is warm and cloudy outside. More succinctly: S(p)+S(q)=S(pq).

If you add the caveat that the score is not identically zero, then you are forced by the above requirements into a logarithmic scoring rule. Interestingly, you don't need to include the requirement that it be a proper scoring rule, although the logarithmic scoring rule is proper.
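A minimal Python sketch of these two properties, using S(p) = log p (the choice of natural log, and the grid search used for the properness check, are illustrative assumptions):

```python
import math

def log_score(p):
    """Logarithmic scoring rule: score for assigning probability p
    to the outcome that actually occurred."""
    return math.log(p)

# Additivity: scoring two independent events separately matches scoring
# the joint event, i.e. S(p) + S(q) = S(pq), as with the 10%-and-10%
# weather example above.
p, q = 0.1, 0.1
assert abs(log_score(p) + log_score(q) - log_score(p * q)) < 1e-9

# Properness: if the true probability is 0.7, your expected score
# 0.7*log(r) + 0.3*log(1-r) is maximized by honestly reporting r = 0.7.
def expected_score(true_p, report):
    return true_p * math.log(report) + (1 - true_p) * math.log(1 - report)

best = max((r / 100 for r in range(1, 100)),
           key=lambda r: expected_score(0.7, r))
print(best)  # 0.7
```

Locality holds by construction here: log_score looks only at the probability assigned to the outcome that actually happened.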

Comment author: darkwolf 11 August 2013 04:47:17AM *  12 points [-]

I'm a 17 year old female student in Singapore, currently in my last semester in high school. I've been lurking around this site for at least the past year, and have made my way through some of the beginning sequences. However, what really made me want to stick around was lukeprog's post on How To Be Happy. Funnily enough, I don't think I've deliberately taken up any of the suggestions, though I have realised that my slow path to extroversion over the past few years contributed significantly to increasing my baseline happiness, as has my recent focus on writing. I guess one could say that my focus when reading this site is instrumental rationality, or basically what I can glean from here to make my life the way I want it to be.

Recently, however, I've been unable to focus as much, because a small part of my mind seems constantly devoted to panicking about college. I'm planning on studying computer engineering in university, and I'm fully confident that I will get into the two local universities of my choice. I'm aiming for US universities too, and getting into them is very important to me, because I'm gay. I'm well aware of Singapore's active scene in that regard; it's just that staying here for university means I'll be living in my parents' house for at least four more years, and actively lying to my parents and hiding what I do from them stresses me out greatly.

I've always been able to succeed academically even with this kind of stress, but trying to write college essays while panicking over the possibility of being stuck in this house trying to pretend that I'm not gay or an atheist is not very productive. Neither is the panic over whether my stats are good enough to get into the kind of universities that would justify my parents letting me go to the US.

I suppose one would note that I've written very little about epistemic rationality, mostly because, as fascinating and illuminating as I find it, I've often used reading about it as a distraction from my panic and from doing work. I'm keeping my efforts focused on 'winning' right now. I'm not really sure I identify as a rationalist, as I don't feel competent enough to claim such a title. Right now, my goals are getting into university and trying to decrease my risk-aversion, as the latter has often prevented me from engaging in social events that would improve my mood and/or stretch my social skills.

Comment author: John_Spickes 14 August 2013 11:30:57AM 6 points [-]

Hi LWers!

I'm a 37 year old male. I work from home as an engineer, primarily focusing on FPGA digital logic work and related C++, with a smattering of other things. I'm a father to two young children, and I live with my little family on a small farm in central Delaware. I've always been a cerebral sort of guy.

I can't remember exactly how I came to LW - I may have heard it mentioned in a YouTube video - but finding it felt somehow like coming home. The core sequences have become some of my favorite reading material. LW was my first exposure to many of the disciplines discussed here: cognitive psychology, evolutionary psychology, Bayesian reasoning, and so on.

I feel like I've discovered a treasure. I'd like to thank everyone who has participated in building this content - it has been extremely enriching to me. Thank you.

My kids are still very young, but I am already starting to think about how I can help them learn to think rationally. I see it as part of my job to help them become better than I am, and I can't help but think I would have benefited quite a lot if I had been exposed to the concepts that are discussed here much earlier in life. I'd like to figure out how to help, say, a five-year-old start on the path. This is something I expect to be putting a lot of thought and research into, and if I come up with something post-worthy I would be delighted to share it here.

I'm also a novice meditator. I have found Chade-Meng Tan's treatment in Search Inside Yourself to be a good fit for me. It seems to me that building mindfulness is likely to be very useful in improving my agency, among other things. Thus far I have been only marginally successful, with the largest gains coming in parenting, and particularly in the area of self-control.

I have been lurking for quite a while, but I hope to participate more in the conversation.

Comment author: Moss_Piglet 21 August 2013 01:00:32AM *  5 points [-]

Hey everyone, nice to finally join the party.

My name's Pat, I'm a 22 year old man studying biochemistry at the undergraduate level, and I've been an on-and-off lurker for at least the last five years. My two favorite animals are the platypus and the water bear, my favorite food is calamari and I love cheesy action movies un-ironically.

If I had to put together a narrative of how I became a rationalist and made it to this site, it would look something like this (1):

My parents were quite a bit smarter than they were emotionally stable or perceptive, so they raised me as an atheist while forgetting the somewhat-important step of not making non-existence sound utterly horrifying (2). From a fairly young age I had a nearly paralyzing fear of death, and being a smart arrogant kid I figured that if anyone ought to live forever it should be me. I remember on my twelfth birthday talking to a few of my friends and deciding that genetic modification would probably allow for practical immortality before brain uploading was developed. That thought led immediately to the next; that I would be the person to solve mortality forever. (Yeah, I was pretty childish back then.)

I had already been interested in science beforehand, and with a powerful drive like that spent an inordinate amount of time studying so that I could hit 'escape velocity' in my lifetime. Even as the fear evaporated later on and I became indifferent as to whether I lived or died the interest in biology remained and intensified, and overall it has served me well. The scientific method helped me nail down my more intuitive-associative style of thinking into a logical framework while my passion helped me set clear goals for the future.

But I wouldn't say I was really a rationalist until about a year or so ago, when three key events combined to shape me into the person I am now. The first was reading this site and hearing about Bayes' Theorem for the first time, in about 2008-2009, which helped me structure my understanding of science in a clearer way, and for which I owe Mr. Yudkowsky a huge debt. The second was recovering from a severe depression caused by my anxiety disorder about a year later; unsurprisingly, it's a lot easier to be rational when you are actually sane, not to mention that cognitive-behavioral therapy taught me more about biases and neurology than I had learned in years of logic or neuroscience courses. The third is that I started reading a lot of Nietzsche, which helped me clear up a lot of the distracting moral detritus I had rolling around in my head.

So today I'm a more-or-less stable and happy guy who's just gotten back into my field, trying to improve his life and the world. I'm primarily interested in genetics, nanotechnology (3), and transhumanism / eugenics, but really I'll read about anything which doesn't lean too heavily on pure math or religious evangelism.

Thanks for reading all this, and I look forward to getting to know all of you.

1. Technically, exactly like this. If you haven't noticed, I can be a bit of a pedant.
2. For a long time I thought of the idea of hell as comforting; as bad as eternal torture sounds, at least you're still there.
3. I've heard some fascinating things about the potential of deoxyribozymes as a substitute for proteins in terms of nanotech, which is great for lazy people like me because I'd like to be able to understand the folding of things I work with without having to take a supercomputer's word for it.

Comment author: Kavrae 21 August 2013 03:57:42PM 4 points [-]

Hello,

I am a 23 year old male named Corey, though I prefer to go by the alias Kavrae in any online discussions. This allows me to keep a persistent persona across all sites or games I may join. If you happen to come across this alias elsewhere, there is a high probability that it is the same person. Please be kind in judging such findings though, as I have gone through a bit of a mental overhaul in the last few months. I would also like to apologize in advance if this gets a little lengthy; that seems to be a trademark of my posts lately.

I should probably do a brief summary of myself before diving in to my personal rationality history.

My education began in a highly underdeveloped rural high school. Low student standards and even lower testing criteria seem to have set me up with delusions of superior intelligence. Such views were quickly dissolved in the following two years at a Missouri university studying computer engineering and physics. To put it shortly, the first year thoroughly broke me and opened my eyes to how vast academia truly is. While harsh, it is something I'm now grateful for. Unfortunately, in a decision I very much regret, I cut my education short and did not earn any sort of degree, due to outside events.

As a product of the previously mentioned events, I have been married for approximately 3 years with a 2 year old son. I'm proud to say that he is turning out to be exceptionally intelligent, particularly in the areas of symbol recognition and technology use. I certainly plan on teaching him what I can of rationality and science as young as possible in an attempt to make the next generation better than the current one.

I am currently a web application developer and have been one for approximately 2.5 years, with initial training in the form of a 6-week programming bootcamp plus trial-by-fire. While the total time spent is relatively short, I have equal experience with open source and DotNet managed solutions, with no preference between the two. It may seem contradictory to my hobbies in the next section, but I would prefer a future position as a system architect or senior developer rather than some form of management. I believe this goes back to certain control issues that I'm discovering through introspection.

Much of my free time now is spent in multiplayer gaming, whether as a support player in various MOBAs or MMOs, or as a GM in local tabletop games (Shadowrun, Pathfinder, etc.). The former is something I'm considering dropping in favor of martial arts or outside-of-work programming. In either case I tend to be the one who spends extensive hours poring over rulebooks and theorycrafting sites, then subjecting my players to lengthy summaries. In my hobbies I tend to find myself in positions of teaching, leadership, or simply high responsibility more often than not. Quite possibly another symptom of the control issues mentioned above.

I believe my introduction to rationality began in college during my second and third semesters, though I didn't realize it at the time. The combination of a basic physics class and introductory logic changed my view of the world. Everything seemed much more controlled and calculable, whether I could do such calculations myself or not. Probabilities became very important to me at this time, though I now believe I often misused them. My second introduction to rationality came after I got married, in a series of events that I should have handled far better. The short version is that I improved my debate skills against an in-law who seems to embody every cognitive bias and fallacy I have read about thus far. This was where I learned about such fallacies and began to recognize just how ingrained they are in society, as well as develop a bit of cynicism towards mankind's mental habits.

This brings us to the present. I came across Less Wrong through HPMoR and have spent the last few months reading through the core sequences. I plan on doing so again soon, to ensure that I retain at least a fraction of what I've read. It has been quite the experience so far: updating beliefs that I had never questioned and improving concepts that I had thought "good enough". I have also learned many lessons regarding how and when such knowledge should be used, often in painful or humbling ways.

I recognize that I have a very long way to go in rationality, and I believe that joining in the discussions, rather than simply lurking, will get me there faster. To narrow the spectrum of the vast amounts of information to learn, I am attempting to focus on evolutionary psychology, cognitive biases, and logical fallacies. Thus far I have found them to be the most fascinating and useful.

Comment author: Kawoomba 21 August 2013 04:52:39PM 2 points [-]

To put it shortly, the first year thoroughly broke me and opened my eyes to how vast academia truly was.

I read physics fora for just that effect. Some of it could as well be an elaborate VXJunkies, for all I can tell.

Comment author: tom_cr 27 August 2013 10:48:49PM 6 points [-]

Hi folks

I am Tom. Allow me to introduce myself, my perception of rationality, and my goals as a rationalist. I hope what follows is not too long and boring.

I am a physicist, currently a post-doc in Texas, working on x-ray imaging. I have been interested in science for longer than I have known that 'science' is a word. I went for physics because, well, everything is physics, but I sometimes marvel that I didn't go for biology, because I have always felt that evolution by natural selection is more beautiful than any theory of 'physics' (of course, really it is a theory of physics, but not 'nominal physics').

Obviously, the absolute queen of theories is probability theory, since it is the technology that gives us all the other theories.

A few years ago, during my PhD work, I listened to a man called Ben Goldacre on BBC radio, and as a result stumbled onto several useful things. Firstly, by googling his name afterwards, I discovered that there are things called science blogs (!) and something called a 'skeptic's community.' I became hooked.

The next thing I learned from Goldacre’s blog was that I had been shockingly badly educated in statistics. I realized, for example, that science and statistics are really the same thing. Damn, hindsight feels weird sometimes - how could I possibly have gone through two and a bit degrees in physics without realizing this stupendously obvious thing? I started a systematic study.

Through the Bad Science blog, I also found my way to David Colquhoun’s noteworthy blog, where a commenter brought to my attention a certain book by a certain E.T. Jaynes. Suddenly, all the ugly, self-contradictory nonsense of frequentist statistics that I’d been struggling with (as a result of my newly adopted labors to try to understand scientific method better) was replaced with beauty and simple common sense. This was the most eye-opening period of my life.

It was also while looking through professor Colquhoun’s ‘recently read’ sidebar that I first happened to click on a link that brought me to some writing by one Dr. Yudkowsky. And it was good.

In accord with my long-held interest in science, I think I have always been a rationalist. Though I don't make any claims to be particularly rational, I hold rationality as an explicit high goal. Not my highest goal, obviously – rationality is an approach for solving problems, so without something of higher value to aim for, what problem is there to solve? What space left for being rational? I might value rationality ‘for its own sake,’ but ultimately, this means ‘being rational makes me happy’, and thus, as is necessarily so, happiness is the true goal.

But rationality is a goal, nonetheless, and a necessary one, if we are to be coherent. To desire anything is to desire to increase one’s chances of achieving it. Science (rationality) is the set of procedures that maximize one’s expectation of identifying true statements about reality. Such statements include those that are trivially scientific (e.g. ‘the universe is between 13.7 and 13.9 billion years old’), and those that concern other matters of fact that are often not considered in science’s domain, such as the best way to achieve X. (Thus questions that science can legitimately address include: How can I build an aeroplane that won’t fall out of the sky? What is the best way to conduct science? How can I earn more money? What does it mean to be a good person?) Thus, since desiring a thing entails desiring an efficient way to achieve it, any desire entails holding rationality as a goal.

And so, my passion for scientific method has led me to recognize that many things traditionally considered outside the scope of science are in fact not: legal matters, political decisions, and even ethics. I realized that science and morality are identical: all questions of scientific methodology are matters of how to behave correctly, all questions of how to behave are most efficiently answered by being rational, thus being rational is the correct way to behave.

Philosophy? Yup, that too – if I (coherently) love wisdom, then necessarily, I desire an efficient procedure for achieving it. But not only does philosophy entail scientific method, since philosophy is an educated attempt to understand the structure of reality, there is no reason (other than tradition) to distinguish it from science – these two are also identical.

My goals as a rationalist can be divided into 3 parts: (1) to become more adept at actually implementing rational inference, particularly decision making, (2) to see more scientists more fully aware of the full scope and capabilities of scientific method, and (3) to see society’s governance more fully guided by rationality and common sense. Too many scientists see science as having no ethical dimension, and too many voters and politicians see science as having no particular role in deciding political policy: at best it can serve up some informative facts and figures, but the ultimate decision is a matter of human affairs, not science (echoing a religious view, that people are somehow fundamentally special, dating back to a time before anybody had even figured out that cleaning the excrement from your hands before eating is a good idea). I’m tired of democratically elected politicians making the same old crummy excuse of having a popular mandate - “How can I deny the will of the people?” - when they have never even bothered to look into whether or not their actions are in the best interests of the people. In a rational society, of course, there would be no question of evidence-based politics defying the will of the people: the people would vote to be governed rationally, every time.

Goal (1) I pursue almost wholly privately. Perhaps the Less Wrong community can help me change that. After my PhD, while still in The Netherlands, I tried to establish and market a short course in statistics for PhD students, which was my first effort to work on goal (2). This seemed like the perfect approach: firstly, as I mentioned, my own education (and that of many other physicists, in particular) on the topic of what science actually is, was severely lacking. Secondly, in NL, the custom is for PhD students to be sent for short courses as part of their education, but the selection of courses I was faced with was abysmal, and the course I was ultimately forced to attend was a joke – two days of listening to the vacuousness of a third-rate motivational speaker.

I really thought the Dutch universities would jump at the chance to offer their young scientists something useful, but they couldn't see any value in it. So I took the best bits of my short course and made them into a blog, which also serves, to a lesser degree, to address goal (3).

As social critters, wanting the best for us and our kind, I expect that most of us in the rationalist community share a goal somewhat akin to my goal (3). Furthermore, I expect that more than any other single achievement, goal (3) would also dramatically facilitate goals (1) and (2), and their kin. Thus I predict that a reasoned analysis will show goal (3), or something very similar, to be the highest possible goal within the pursuit of rationalism. The day that politicians consistently dare not neglect to seek out and implement the best scientific advice, for fear of getting kicked out by the electorate, will be the dawn of a new era of enlightenment.

Comment author: Vaniver 29 August 2013 11:51:39PM 0 points [-]

Welcome!

Where are you in Texas?

Comment author: tom_cr 30 August 2013 02:37:45AM 1 point [-]

Thanks for the welcome.

I'm in Houston.

Comment author: Gunnar_Zarncke 29 August 2013 09:38:56PM *  3 points [-]

Hi. I'm Gunnar. I'm from Germany. I've been lurking on Less Wrong since July 25th.

How did I become a rationalist? I always was one. Or at least I was continuously becoming one.

I had a scientific interest as a child. My parents satisfied my curiosity with answers, experiments, construction toys and books, math courses, and later boarding school (this was in Germany during a period of hype around talent advancement).

I must have been eleven or twelve when I had one of the strongest aha moments I can remember: the realization of the concept of continuous functions. That a relationship like 2x+1 can not only be applied to single numbers and tabulated, but traces out a continuous curve. All the possibilities hit me like a hammer: movements, prices, all kinds of dependencies could be described arbitrarily finely.
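That aha moment can be sketched in a few lines of Python (a minimal illustrative aside, not part of the original comment): the rule 2x+1 can be tabulated at a few points, but the same rule also yields a value everywhere in between, sampled as finely as you like.

```python
# The rule itself: defined for every x, not just the tabulated ones.
def f(x):
    return 2 * x + 1

# Tabulated at single numbers, as one might first meet it:
table = {x: f(x) for x in range(4)}  # {0: 1, 1: 3, 2: 5, 3: 7}

# The same rule sampled arbitrarily finely on [0, 1]:
step = 0.001
fine = [f(i * step) for i in range(1001)]  # 1001 points tracing the curve
```

Shrinking `step` refines the sampling without limit; the tabulated values are just a coarse glimpse of the continuous curve.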

That moment had a lasting effect on me. I always find myself wondering what lies between the known points. Between the extremes. In a way this has become part of my philosophy of seeing and valuing the in-between. Some higher level Goldilocks solution.

I read my father's shelves of science and science fiction as a youth. I tend to absorb and accept 'facts' in books too easily. Luckily I have a skeptical friend to bring me back down to earth.

During boarding school there was a significant transition from abstract mathematics to computer science, which gave me insights into modeling, simulation, and complex structures. And the feeling of power over the machine. Of course I later fell into the trap of conceiving my own super programming language and operating system.

I remember being asked during boarding school (9th grade) about my best talent. I answered: My tolerance. I could understand almost any behavior. I couldn't necessarily empathize with it or feel it. But I knew it existed, was right for the person/persons acting and was in general part of life.

I didn't know then that I hadn't really experienced much of life - only read about it. And that real tolerance means not only to understand and condone but to accept and endure.

During university, after absorbing computer science until I was soaked, I finally broadened out to cognitive science (mind opener: 'Explorations in the Microstructure of Cognition') and later the social sciences (mind opener: 'Judgment Under Uncertainty: Heuristics and Biases').

I learned about real life from and with my wife. Strong emotions, child education, hard work and more.

What did I think about all that I learned?

As a child I must have figured that everything can be understood - given enough time and effort.

I thought early and much about God, morality, and spirituality. I wondered how God could fulfill his promises. How he could be the way he is – if he is. There was always doubt. There could be a God. And his promise could be real. But it could also be that this is all a fairy tale run amok in human brains searching for explanations where there are none. Which is right? It is difficult to put probabilities to stories. I see that I have slowly moved from 50/50 agnosticism to tolerant atheism.

I can hit small targets - especially if they are far away. And my objective is healing and improvement. I admit that my utility function is centered on me, my family, my friends and 'social network', and fades out slowly toward society at large. I am not very altruistic to the public in general. I understand effective altruism. And I value it. But I cannot go against my affection for my family, and especially my four sons. That I got from my parents.

That's me. What do I expect of LW? What can you expect of me on LW? I'm not clear yet. I already knew much of what is on LW when I came here. But I enjoyed the crisp and detailed posts. Refreshing or deepening rationality never hurts. I especially like EY's stories. They bring rationality 'to the masses'. I will definitely read HPMOR to my sons when they are old enough.

I think I can enrich Less Wrong with critical views on the singularity. I have some strong arguments and even empirical evidence that there might be inherent complexity limits to technology and cognition which essentially render superintelligence infeasible (I see UFAI as a risk nonetheless).

And then I have some ideas on AI which build on a synthesis of logic and neuronal (vague) models which I'd like to share and discuss.

Maybe I will also share life experience. It seems that I am fairly old for this community and can do something about the arrogance risk (which I myself feel too) and about life expectations.

Comment author: chaosmage 29 August 2013 09:54:41PM 0 points [-]

Welcome! :-) Where in Germany are you?

Comment author: Gunnar_Zarncke 30 August 2013 06:36:51AM 0 points [-]

In Hamburg. And I'm not leaving there either.

Comment author: shminux 29 August 2013 10:52:19PM 0 points [-]

I think I can enrich Less Wrong with critical views on the singularity. I have some strong arguments and even empirical evidence that there might be inherent complexity limits to technology and cognition which essentially render superintelligence infeasible (I see UFAI as a risk nonetheless).

Do go on...

Comment author: polymathwannabe 30 August 2013 03:15:30AM 3 points [-]

Hello everyone.

My name is Carlos. I'm 30 years old. I was born, and still live, in Colombia.

I excelled through elementary and high school until I crashed against the hard fact that my parents could not afford my college ambitions. At that time I cycled between wanting to study psychology, but also archaeology, but also chemistry, but also cinema. I wanted to know everything.

Then came a long, dark time while I crawled through the Business Management degree my parents made me go for. Worst years of my life, absolutely. But in the meantime, I devoted my spare time to my passion, and trained myself to become a better writer. I sort of freelanced for a local newspaper, then wrote some pieces for an online newspaper, and more recently won a national short story contest. I'm currently studying journalism and preparing a series of SF novels in Spanish.

The first spark of my rationalist tendencies came from one of the many books that were at my parents' house. It told creation myths from the Native South Americans, and I found those stories much more engaging, beautiful and surprising than anything the book of Genesis had to offer. It was always clear to me that such stories should not be taken seriously; the next logical step was to give the same treatment to Genesis and everything that attempted to present a just-so explanation for the universe.

Right now I'm only a terribly amateurish rationalist. I wasted a good part of my youth pursuing a degree that was of no interest to me, and even though I made enormous efforts to better myself in the skills that did matter to me (namely as a writer), sometimes I still can't get over the fact that most of my friends my age have already built successful careers pursuing their true passions in the time it took me to reverse my wrong path and begin walking my chosen one.

I won't comment much here. In my everyday life, people can keep silent for hours hearing me talk, but here I see it's obviously going to be different. I don't have much to offer here. You guys are the next level above me. I'm here mostly to listen, and learn.

Comment author: LearnFromObservation 01 September 2013 04:52:37AM 4 points [-]

Hi! I'm Ciara (pronounced like Keara; Irish spelling is very much irrational!). I've actually been a member of Less Wrong for a little while; I discovered it through HPMOR. I've always liked academics, challenging books, and Harry Potter, so I joined Less Wrong. I am a little ashamed to admit that I was quite intimidated by the sheer intellect and extraordinary thoughts that came from so many members all around the world. So, I took a little break after starting with the basics of rationality and am now a very different, though still amateur rationalist, person. I live in MA, not far from MIT, and I'm hoping to attend a meetup sometime. I'm sixteen years old and going into my junior year of high school. Both of my parents are Irish, and I usually spend about a quarter of my year there with family, so I tend to use some bizarre expressions. I'm also a dancer; I participate principally in musical theatre and jazz. I'm an aspiring author, currently some 30,000-odd words into my latest attempt at a novel. I'm trying to incorporate some rationality into the characters; although it's not in the rationalist genre like HPMOR, I'm at least trying to ensure that no one is holding the idiot ball. I'm a little nervous about rejoining the rationalist community, but I hope that by, say, Newtonmas, my rationality will have improved enough for me to start posting. Looking forward to working with you!

Comment author: Darklight 02 September 2013 06:52:42PM *  3 points [-]

Hey Everyone,

So I've been lurking around this community for a while, but to be honest, I was/am rather intimidated by the sheer level of intellectual prowess of many of the bloggers here, so I have hesitated to post. But I've been feeling a bit overconfident lately, so here goes nothing.

Anyway, a little about myself, I'm a Master's student at a university in Canada. I did my undergrad in Computing specializing in Cognitive Science, and am currently doing a Masters in Computer Science, with a particular interest in the field of Machine Learning. I'm currently working on a thesis involving Neural Networks and Object Recognition.

I've been interested in rationality for a very long time, though I grew up in a charismatic Christian family and so it took some time in university to deprogram myself from fundamentalist beliefs. These days I would call myself a Christian Agnostic, to the extent that to be intellectually honest, I am agnostic about the existence of God and the supernatural, however, I still lean towards Christian values and ideals to the extent that I was influenced by them growing up, and it is my preferred religion to take, as Kierkegaard suggested, a Leap of Faith towards.

Nevertheless, I went through a recent phase of being more strongly Agnostic, and during that time, I rediscovered Utilitarianism as a possible moral philosophy to base my life around. I am, somewhat, obsessed with things like finding the meaning of life, justifying existence, and having a coherent moral philosophy with which one can justify all actions. Right now I am of the opinion that Utilitarianism does a better job of this than, say, Kantianism or Virtue Ethics, and also that Utilitarianism is actually compatible with a very liberal interpretation of Christianity that sees religion as a means for God or benevolent A.I. time travellers to create the best of all possible worlds. Yes, I am suggesting that Christianity and all successful religions could be, in part, Noble Lies created to further Utilitarian ends by the powers that be. Or they might be true, albeit as metaphors for primitive humans who could never understand a more literal explanation of reality. As an Agnostic, I don't pretend to know. I can only conjecture at the possibilities.

Regardless, I am of the opinion that if God exists, He actually serves the Greatest Good, the morality separate from God. And this morality is probably some kind of Eudaimonic Utilitarianism. And thus, I am interested also in serving this Greatest Good morality, if for no other reason that it would be doing the right thing, serving the interests of God if He exists, and serving the interests of the Greatest Good, regardless.

Note that this is not the reason why I ended up studying Cognitive Science and moving into a field of research that involves Artificial Intelligence. I actually chose Cognitive Science for silly reasons, such as the fact that I didn't have to take first-year calculus if I switched from Software Design into Cognitive Science (a reasoning I would later regret when I ended up needing calculus to understand Probability Theory in Machine Learning >_>). But also because Cognitive Science is inherently more interesting and cool. And I decided in my final years of undergrad that I wanted to do research in some field that would really make a big difference in the world, and so I decided to focus my efforts on becoming a researcher in the field of Artificial Neural Networks. That is my current hope, my grand mission: to try to change the world through the research and development of this technology that most closely resembles the human mind, and which I am confident will lead the A.I. field in the future. Yes, I am a connectionist, who believes that duplicating the way the human brain generates perception and cognition is the key to an A.I.-enabled future.

I suppose that will do for an introduction. I hope I haven't alienated anyone with my eccentric views. Cheers to my fellow computer scientists, A.I. researchers, and rationalists! :D

Comment author: Ichneumon 04 September 2013 06:57:37PM 6 points [-]

Hello! I'm a 19 year old woman in Washington state, studying microbiology as an undergraduate. I was introduced to the "scene" when a friend recommended HPMOR in high school. I was raised in an atheist household with a fairly strong value on science, so it was novel if not mind-blowing- but still encouraged me to think about the way I think, read some of the Sequences, and get into Sam Harris and Carl Sagan. At college I began reading the rest of Less Wrong, and some related sites, and began identifying as a rationalist.

(Well, let's be honest here- I also moved from a math-and-science-oriented high school to a very liberal college, where my social identity changed from "artsy and literary" to "science-y and analytic". I would be lying if I said that trying to live up to it wasn't a compelling factor!)

LW and 80,000 Hours also motivated me to change several of my long-held beliefs (at the moment, I can think of immortality and, well, er, most areas of rationality, which I guess is expected), and to re-evaluate my career plans, changing my intended focus from environmental research and emerging diseases to neglected tropical diseases (if this happens to be anyone's area of expertise, I'd be interested to hear!)

Anyways, I've been integrating the website into my head for some time now, and, equipped with the moniker of my favorite family of wasp, think it's about time to (begin, very slowly, to) integrate my head into the website. Nice to be here!

Comment author: cousin_it 04 September 2013 07:36:01PM 2 points [-]

Welcome to LW!

Comment author: Goshawk 15 September 2013 08:01:49PM *  8 points [-]

Hi! I've been lurking non-intensely for a while. I'm currently reading the sequences, and they've given me a lot of food for thought. I have a couple of rationalist friends (including RobbBB) who have gotten me interested in rationalism. I'm also a big fan of HPMOR, which is by far the best fanfic I've ever read.

Anyway, I'm trying to become a research scientist in linguistics, so it seems best that, for professional development in addition to personal development, I learn how to think and recognize why I think I know the things I think I know, etc. So far, I've mostly been squirming in embarrassment over the fallacious reasoning I've been engaging in my whole life, but I hope that I can move forward to more productive thinking.