Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Welcome to Less Wrong! (8th thread, July 2015)

13 points | Post author: Sarunas | 22 July 2015 04:49PM
If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, how you came to identify as an aspiring rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.

 

A few notes about the site mechanics

To post your first comment, you must have carried out the e-mail confirmation: When you signed up to create your account, an e-mail was sent to the address you provided with a link that you need to follow to confirm your e-mail address. You must do this before you can post!

Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).

You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. This immediate easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.

However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning, not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.

Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.

All recent posts (from both Main and Discussion) are available here. At the same time, it's definitely worth your time commenting on old posts; veteran users look through the recent comments thread quite often (there's a separate recent comments thread for the Discussion section, for whatever reason), and a conversation begun anywhere will pick up contributors that way.  There's also a succession of open comment threads for discussion of anything remotely related to rationality.

Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.

EXTRA FEATURES:
There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: in the top-level forum, posts require the author to have 20 karma in order to publish, and any upvotes or downvotes on the post are multiplied by 10. Thus there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion. They are also available in a book form.

A few notes about the community

If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood and what you might still need to take some time explaining.

If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)

If you want to write a post about a LW-relevant topic, awesome! I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma—honestly, you don't know what you don't know about the community norms here.)

Alternatively, if you're still unsure where to submit a post, whether to submit it at all, would like some feedback before submitting, or want to gauge interest, you can ask / provide your draft / summarize your submission in the latest open comment thread. In fact, Open Threads are intended for anything 'worth saying, but not worth its own post', so please do dive in! Informally, there is also the unofficial Less Wrong IRC chat room, and you might also like to take a look at some of the other regular special threads; they're a great way to get involved with the community!

If you'd like to connect with other LWers in real life, we have meetups in various parts of the world. Check the wiki page for places with regular meetups, or the upcoming (irregular) meetups page. There's also a Facebook group. If you have your own blog or other online presence, please feel free to link it.

If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter

A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.

A list of some posts that are pretty awesome

I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:

More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.

Welcome to Less Wrong, and we look forward to hearing from you throughout the site!

 

Once a post gets over 500 comments, the site stops showing them all by default. If this post has 500 comments and you have 20 karma, please do start the next welcome post; a new post is a good perennial way to encourage newcomers and lurkers to introduce themselves. (Step-by-step, foolproof instructions here; takes < 180 seconds.)

If there's anything I should add or update on this post (especially broken links), please send me a private message—I may not notice a comment on the post.

Finally, a big thank you to everyone that helped write this post via its predecessors!

Comments (270)

Comment author: Salokin 24 July 2015 11:22:53PM 14 points [-]

Hello from Spain! I first found out about LW after reading a post about Newcomb's problem and the basilisk last summer. A week after that I found HPMOR, and I've been reading and lurking for this whole year. It's been amazing to see that there are other people with ideas like transhumanism who are trying to become systematically better.

I decided to post here for the first time because I recently attended a CFAR workshop and realized that I could actually help in building a better community. I'm currently translating RAZ to Spanish and hope to create a rationality community in Madrid.

Some other things about me:

* I'm currently studying Physics at Cambridge but I'm thinking of going into applied Maths and probably into computer science. (I'm very interested in AI risk.)
* I'm trying to find the best way to build healthy relationships and communities of people that help each other be better. (After my experience at CFAR I felt like I'm missing something amazing by not being in an environment like the Bay Area and want to recreate that.)

And that's it! You're all amazing for being part of something like this, hope we can make it even better all together! :)

Comment author: mrexpresso 25 July 2015 12:27:56AM 3 points [-]

Welcome to LW! Just a question: when do you think you will finish your translation?

Comment author: Salokin 26 July 2015 10:06:26PM 5 points [-]

Thank you! :) I'm planning on finishing the first book (The Map and the Territory) by October, but it will probably take longer, as I'm not very consistent with my work. The first sequence (Predictably Wrong) should be finished this week if I keep my current pace. I'm publishing it here: https://cognonomicon.wordpress.com/ (everything is in Spanish). I'd appreciate any comments, and if you think that someone you know would benefit from reading about rationality in Spanish, it would be great if you shared it ^^

Comment author: Pancho_Iba 28 July 2015 02:14:28PM 2 points [-]

I'd gladly read and criticize your translations if you want me to, but it will have to wait until after my topology exam next week. If you want me to do it, please remind me to do so ten days from now or so, since I will most probably forget about it.

Comment author: Pancho_Iba 24 July 2015 01:53:34AM 14 points [-]

Regards from Argentina,

Great post. I started reading through this site at random while getting more and more into HPMOR, which a friend recommended, and having a little list of posts to start with will most probably prove helpful.

I would like to mention that the thing about this community I found the most astonishing was a comment that read something like "Edit: After reading some responses I've changed my mind and this comment no longer represents my beliefs." I did not even know that it was possible for a human being to be so grateful and humble upon being proven wrong. And humility is something I most definitely need to learn, and I suspect I will be able to do so here. In fact, I already did, for I acknowledged the fact that someone outside my field (pure math, until recently) has something to teach me. Yes, I am (was?) THAT arrogant at a deep level, but here and now I just feel like a child, craving to learn the art of rationality.

Thank you all for what this site constitutes!

Comment author: Viliam 28 July 2015 08:30:01AM 5 points [-]

To me it feels easier to admit mistakes in an environment which does not punish admitting mistakes by loss of status. Where people cooperate to find the truth, instead of competing for image of infallibility.

Just saying that how one reacts to being shown errors is partially a function of their personality, but also partially a function of their environment. Changing the environment can help, although sometimes bad habits remain.

Comment author: Pancho_Iba 28 July 2015 02:04:32PM 3 points [-]

I quite agree, but now I'm wondering how I could change my own environment (not by replacing it, but by changing people's reactions). It seems the responsibility to do so lies upon my shoulders, since I am the one who intends to live differently. Do you believe it'd be right to attempt to change people's reactions (if I knew a way), or should I acknowledge the possibility that they are just happy the way they are, and simply let them be?

Comment author: Viliam 28 July 2015 03:00:47PM *  3 points [-]

should I acknowledge the possibility that they are just happy the way they are

They probably are. Also, even if hypothetically becoming super rational would be an improvement for everyone, your ability to change them is limited, and it's uncertain whether the degree of change you could realistically achieve would be an improvement.

Unless you have superior manipulation skills, I believe it is extremely difficult to change people, if they don't want to. You push; they welcome the challenge and push hard in the opposite direction. Unfortunately, defending your own opinion, however stupid it is, is a favorite hobby of too many otherwise intelligent people. It could be a very frustrating experience for you, and an enjoyment for them.

At least, my experiments in this area have been hugely negative. If people don't want to be rational, you are just giving them more clever arguments they can use in debates.

I hate to admit it, but "people never change" seems to be a very good heuristic, even if it is not literally true. (I hate it because of the outside view it provides for my own attempts at self-improvement. That's why I usually say "people never change unless they want to", but the problem is, wanting to change, and declaring that you want to change, are two different things.)

Also, I noticed that when you are trying to change, many people around you get anxious and try to bring you back to the "old you". If you want to change your own behavior, it is easier with completely new people, who don't know the "old you", and accept your new behavior as your standard one.

Comment author: Pancho_Iba 29 July 2015 02:42:32PM 2 points [-]

I know it would be hard, and most likely nearly impossible, to change people without a very good idea very well executed, but perhaps a tiny possibility is reason enough to attempt it nonetheless. I'd like to take your advice on trying to change myself among new people, so I ask if you have any suggestions for a particular environment in which to do so.

Comment author: Viliam 30 July 2015 12:01:08PM 1 point [-]

The obvious new environment is the nearest LW meetup, if available. Otherwise... I don't know, maybe some public lectures.

(I am not the right person to ask about meeting new people. My own social sphere is very small.)

Comment author: CCC 28 July 2015 02:25:22PM 2 points [-]

now I'm wondering how could I change my own environment -not by replacing it, but by changing people's reactions-

People try to do that all the time. One of the best ways is to simply ask other people to change their reactions, and explain why - some people will listen (especially if you point out how the new environment will benefit them as well) while others won't. (Mind you, even the ones that listen will probably be slow to change their reactions... habits are not easily broken)

I'd also suggest, at the same time, changing your reactions to match your preferred environment; give everyone around you an example to follow.

If you have a position of authority (e.g. a university lecturer in a classroom) you could even use that authority to mandate how students are allowed to react - again, it would help to point out how the ability to change your mind is helpful to the students.

Do you believe it'd be right to attempt to change people's reactions

I think that it can be right to attempt to change peoples' reactions, if that change is to their benefit and the means employed to effect the change are ethical (i.e. ask them to change, don't put a gun to their head and force them to change).

Comment author: boatner 08 October 2015 03:08:47PM 13 points [-]

Howdy All!

I’m a post middle-aged, impressively moustachioed dude from Texas, now living in Wisconsin. I moved up here recently, following the work, and now have a fine job in a surprising career path. See, I recently took a couple degrees in Mathematics (which I capitalize out of love, grammar be damned!) hoping to be a teacher for the rest of my time. It turns out, that was not such a good move for me and I was fortunate to receive an offer to get back into private-sector IT. I am now happily managing UNIX systems for a biggish software company here in the tundra.

I’ve been consuming the sequences and lurking in the forum (and newly the Slack chatrooms) for several weeks. I have no recollection of how I found the site; StumbleUpon would be my first guess, though the xkcd forum is nearly as likely. As I read through the LW site I am struck by the quality of discourse, which is high even among those who disagree.

I am motivated to fill in some gaps in my own thinking on various issues of interest and importance. With the exception of my atheism, I don’t have many strongly held opinions (though at times I do seem to lean quite a ways over on some of them).

So, how did I become a rationalist? Well. Hmmm. I got pulled into a youth cult in high school. At a rally (or whatever) I was implored by a zealot on stage to “seek the truth”. I realized in hindsight he probably meant something other than that, like: “listen to me and read the bible and there’s your source of truth”. But I took him at his word. I looked at other religions and started taking philosophy courses. I talked to people who held beliefs different from my own. I dug in and studied issues of morality, politics, aesthetics, and more. Gradually I started to realize that I didn’t believe any of the ideas pushed at me by organized religion. I remember questioning what it means to “believe” and concluding that I simply don’t believe in any of the gods other people claim exist.

At one point, back in my late teens, I was a bible thumping (literally and figuratively), charismatic, evangelical prophet of christ. A few years later I was openly secular, having still not fully grokked the scope of the words “atheist” and “agnostic”.

These days I am still openly secular, and when I get to know you, I’ll let on that I’m a gnostic atheist, and perfectly happy to Taboo both words (as I understand that phrase), preferably over good dark beer on tap and a basket of deep-fried cheese curds.

I am hesitant to admit that one of my principal interests lately is politics. While I support (and adore) the idea that politics is the mind-killer, I can’t shake a notion that we, the folks who strive to be less wrong, should be involved in the larger discussion. If there’s a subset of human endeavor that really needs an IV drip of less wrongness, it’s politics.

Now I’ve found this part of the webs, I am fair sure I’ll continue to spend more time here than I ought.

Comment author: polymathwannabe 08 October 2015 10:48:34PM 2 points [-]

Be welcome, sir.

Comment author: jordansparks 31 July 2015 02:21:20PM 13 points [-]

Hi, my name is Jordan Sparks, and I'm the Executive Director of Oregon Cryonics. I work very hard every day to improve cryonics technology and to attract potential cryonics clinicians.

Comment author: Yaacov 26 July 2015 04:57:04AM *  13 points [-]

Hi LW! My name is Yaacov. I've been lurking here for maybe 6 months, but I've only recently created an account. I'm interested in minimizing human existential risk, effective altruism, and rationalism. I'm just starting a computer science degree at UCLA, so I don't know much about the topic now, but I'll quickly learn more.

Specific questions:

What can I do to reduce existential risk, especially that posed by AI? I don't have an income as of yet. What are the best investments I can make now in my future ability to reduce existential risk?

Comment author: endoself 27 July 2015 09:48:37PM 4 points [-]

Hi Yaacov!

The most active MIRIx group is at UCLA. Scott Garrabrant would be happy to talk to you if you are considering research aimed at reducing x-risk. Alternatively, some generic advice for improving your future abilities is to talk to interesting people, try to do hard things, and learn about things that people with similar goals do not know about.

Comment author: Squark 27 July 2015 07:02:44PM 4 points [-]

Hi Yaacov, welcome!

I guess that you can reduce X-risk by financing the relevant organizations, contributing to research, doing outreach or some combination of the three. You should probably decide which of these paths you expect to follow and plan accordingly.

Comment author: Viliam 28 July 2015 07:59:54AM 3 points [-]

If you choose the path of trying to make a lot of money and supporting the organizations that do the research, 80,000 Hours can help.

If you choose to contribute by doing the research, you can start by reading what's already done.

Comment author: chalime 25 July 2015 06:19:30AM *  13 points [-]

Hello LW!

Been lurking for about three years now- it’s time to at least introduce myself. Plus, I want to share a little about my current situation (work problems), and get some feedback on that. I’ll try and give a balanced take, but remember I’m talking about myself here…

First, for background, I’m 23, graduated about a year and a half ago with degrees in finance, accounting, and economics (I can sit still and take tests), and I also played basketball in college (one thing I can definitively say I’m good at is dribbling a basketball).

Brief Intellectual Journey
I didn’t care much about anything besides sports until I got to college. Freshman year, I took a micro class and found it interesting, so I went online and discovered Marginal Revolution. I’ve been addicted to the internet ever since.

It started with the George Mason econ guys (Kling, Caplan, Roberts—that’s my bias), then I got interested in the psychology behind our beliefs and our actions (greatest hits being The Righteous Mind (Haidt), Thinking Fast and Slow (Kahneman), Mark Manson’s blog, Paul Graham’s blog). Somewhere during that time I stumbled across Lesswrong, SSC, HPMOR, and the rest of the rationality blogosphere, and it’s all just amazing. I love it but the downside is that I probably spend too much time reading instead of doing something more challenging.

The Big Three (EA, job/career, religion)
Right now, these three are overwhelming everything else, and I want to talk about them. First the easy one, religion. I am not religious, and that fact has caused me significant strife. I’ve lost an important relationship, become less close with my family (I’m in the closet; can’t bring myself to tell my mom), and generally feel kind of isolated, because everyone I know seems to be religious and I struggle to look past that important difference of opinion.

EA
I admire the EA movement and everyone involved. My base belief is that I do not need a lot of money to live on, and there are many people/causes that could make better use of the extra than me. I do have a high degree of uncertainty on what the best cause is, but I’ve simply been deferring those questions to GiveWell and I’m ok with that arrangement. So that’s the vision, but what about the execution?

Not great. While I did donate a pretty significant amount (for me) at the beginning of this year, I’ve stopped sending any money. The current problem is the uncertainty around where my income is going to come from in the future, as well as the overall unenjoyable experiences that all of my office type jobs have been to date. Those experiences make me want to save as much as possible, so I can be free to spend time how I want.

Let’s talk about my current job, and how utterly crazy it is. I don’t have anything lined up to do after this, but I don’t know how much longer I can hang on; it’s that bad. I try to stay upbeat about it, but I know it’s only a matter of time (in my mind as of now, if I’m still working there in one month, I have failed).

I work at a small, boutique wealth management firm, and I have many objections to how this business works. It’s pretty simple: in my opinion, the incentive structure (how we are paid) is in direct conflict with actually giving good advice. And no one knows what they are paying us. And we don’t give good advice, because that is harder to sell. And it doesn’t matter, because there is money everywhere. And the industry is changing and we are not. And the work environment is mildly toxic. Ok, let me explain all of this more clearly.

Fees/Revenue- This is our emphasis; all team meetings come back to discussions of revenue. This part of our business is very easy to understand. We are paid based on our Assets Under Management (the average fee is just over 1%; at 1%, a $1,000 account would pay us $10/year, and a $1,000,000 account would pay us $10,000/year). We are also paid commissions for selling insurance and annuity contracts.

My objections:
- Logic of AUM model means we literally value people based on the size of their investable assets.
- The 1%+ fee is too high. The dollar amount for our larger clients can get truly absurd, even though we do the same stuff for them as for everyone else. There is starting to be more attention to fees as competition and technology change the game, but we are still getting by the old way because we can (for now).
- The % fee obscures the true amount that they are paying us. Also, the fee is deducted automatically from their account, similar to payroll taxes, so they don’t even notice when it goes. Sorry, I can’t help but focus on the fee, because it matters: over the long time periods we are talking about, it adds up to hundreds of thousands of dollars. And this is for something that basically amounts to a commodity service (investment management), and we won’t even do that as well as we could (I am an advocate for indexing, but we have to make things complicated and put portfolios together with 20+ holdings and an active/passive mix of funds).
- We push insurance (permanent life insurance specifically) and annuity contracts more than we should because of the commissions.
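The long-horizon cost claim in the fee objections above can be sanity-checked with a quick compounding sketch. The specific inputs here (a $1,000,000 starting balance, a 7% gross annual return, a 30-year horizon, and a 0.1% index-fund expense ratio as the comparison) are my own illustrative assumptions, not figures from the comment:

```python
# Illustrative sketch with assumed numbers: compare 30 years of compounding
# under a 1% AUM advisory fee versus a 0.1% index-fund expense ratio,
# assuming the same 7% gross annual return in both cases.

def ending_balance(principal: float, gross_return: float,
                   annual_fee: float, years: int) -> float:
    """Grow principal for `years`, deducting the fee from the balance each year."""
    balance = principal
    for _ in range(years):
        balance *= (1 + gross_return) * (1 - annual_fee)
    return balance

PRINCIPAL, GROSS, YEARS = 1_000_000, 0.07, 30

with_advisor = ending_balance(PRINCIPAL, GROSS, 0.01, YEARS)   # 1% AUM fee
with_index = ending_balance(PRINCIPAL, GROSS, 0.001, YEARS)    # 0.1% expense ratio

print(f"1% fee ending balance:   ${with_advisor:,.0f}")
print(f"0.1% fee ending balance: ${with_index:,.0f}")
print(f"Lifetime fee drag:       ${with_index - with_advisor:,.0f}")
```

Under these assumptions the gap between the two ending balances runs well past "hundreds of thousands of dollars" on a large account, which is consistent with the comment's point that a percentage fee compounds painfully over long horizons.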

I would like to simply give financial advice and charge a clear, transparent fee for those services, but because that wouldn’t be as profitable (do you have any other theories?), we get this mess of a system instead.

This is getting long, so I’ll wrap up with some quick-hit ‘culture’ objections:
- I am in the office from 7:30 am – 6:00 pm at a minimum every day. That is soul-crushing when I have zero confidence that we are doing a Good Thing. I don’t have time to do much else.
- I have to wear a tie every day (not ideal)
- Charisma and delivery matter more than actually giving good advice.
- I make retirement plans for people. Most of these people project their income ‘needs’ in retirement to be something ridiculous like $20k/month throughout their thirty-year retirement. I know, I know: I try to detach, and I realize that this is their money, that I have nothing to do with it, and not to get overly moral about it, BUT I struggle. I struggle to care enough to actually dig in as much as I could, because 1. you’re very rich, it doesn’t matter, and 2. the discussion on fees above taints everything.
- My bosses will say one thing in this meeting and the opposite in the next- it’s whatever the client wants to hear- whatever will get the deal done. They are completely malleable to what the situation requires. Kind of amazing to watch, but mostly sad/upsetting.

Qualifiers
Overall, we are not robbing them. We do provide value in that we give people some structure and guidance (most people need this; the behavioral aspect can make a huge impact). They came to us and agreed to the terms, so what the heck. But it just gets stupid when you could say: instead of engaging with us, make these two clicks and save $20,000/year. That is wasteful, and I do not like waste.

Agree/Disagree? Am I crazy? Feedback is welcome.

Comment author: Fluttershy 25 July 2015 07:20:00AM *  7 points [-]

Hi chalime,

Welcome to LW!

There are many of us here who share your views on the financial services industry, and index funds with low expense ratios have been strongly recommended in nearly all of the financial advice threads posted on LW. I once went to a career information session hosted by a boutique wealth management firm myself, and ended up not even sending them my resume, for similar reasons regarding my personal fit with the field and the value of the services provided by advisers.

The 80,000 Hours blog has historically mentioned that the good done by donating a small part of one's income to excellent charities likely outweighs any harm done by a career in the financial services industry. However, if working for a wealth management company doesn't feel like a good fit to you, you certainly shouldn't feel morally obligated to stay with them for earning-to-give reasons!

Comment author: chalime 26 July 2015 03:34:11PM *  2 points [-]

Thanks for the reply, Fluttershy!

Yes, I’ll be honest, my mind is made up. There is no way I can continue to do this every day- it’s just not sustainable.

It’s a little scary because this is already my second job since graduating, and even if I think I have good reasons for leaving, that stuff is not easy to explain.

Comment author: Vaniver 28 July 2015 01:27:21PM *  4 points [-]

Welcome!

So, presumably you're familiar with companies like Vanguard, Wealthfront, and Betterment, which are much more customer-aligned than the rest of the financial services industry. But part of that is spending much less attention on individual clients--and, consequently, employing considerably fewer people, and different sorts of people. (I would expect that Wealthfront needs more web programmers than economists, for example.) You might consider applying at those places, but my suspicion is you'll end up in another field entirely.

Comment author: chalime 29 July 2015 04:32:43AM *  5 points [-]

Yep, I've actually already applied to all three of those places. Vanguard would be my first choice of the three, because I could do more outside of focusing strictly on investments and actually have an advisor-type relationship with people. You're right, though, that I do have hesitations about being in this industry at all, because:

  • I am too anti-fee (e.g. why pay a fee on an IRA account at Wealthfront/Betterment? Yes, it’s better than what most people would do on their own, but it’s still not optimal… I go back and forth on this one, because I do put a high value on the simplicity of it).
  • The business is based on meeting with lots of people and selling to them, and the people I would get along with best are probably doing this stuff themselves.
  • There’s tension between what this would be focused on (manage money effectively, accumulate wealth) vs. my desire to be more EA and act on the knowledge that I have enough, and many others do not.

I haven’t heard back from any of the applications, so it’s a moot point right now.

Comment author: Viliam 28 July 2015 08:42:31AM 2 points [-]

Maybe someone on LW could recommend you a better job, either here (but you would have to tell us at least what country you are from) or at a local meetup.

Comment author: chalime 14 August 2015 06:10:55PM 2 points [-]

Well, I pulled the trigger yesterday. While it felt great to actually speak my mind and have a real discussion regarding all of these issues (it was actually pretty amazing- no yelling or anyone getting upset- there was actual discourse), I will now be jobless in a month, and I really don’t know the answer to what’s next.

I’m debating between staying in my current area, which would be a finance/accounting/operations type of role, or scrapping that whole path and trying to go the programming route (close to zero expertise as of now). I’ve spent a lot of time working towards different credentials (the CPA being the main one), so it’s hard to walk away from that even though I don’t think I’m learning anything all that useful.

I’ve never met anyone from these communities (LessWrong/EA), but I spend a lot of time here, so yeah, I would definitely be open to talking with anyone here about general strategy (I’ve read all of 80,000 Hours) or specific opportunities, if someone stumbles across this and has an idea. I will use more conventional methods as well, but I wanted to at least put this out there.

Comment author: Viliam 14 August 2015 06:53:36PM *  4 points [-]

You may want to post your question in an Open Thread. Maybe it would be more strategic to skip the current one, which already contains over 200 comments, and wait for the new one to appear on Wednesday the 19th, so more people will see it. That's better than posting here, in a thread that started three weeks ago.

I know almost nothing about the situation in "finance/accounting/operations type of role". I have mostly been a programmer, so now my availability bias screams at me that "everyone is a programmer, and everything outside of IT is super rare", which is obviously nonsense. If there is a website in your country with job offers, perhaps you could try to imagine that you already have 3 years of experience, and look at how many opportunities there are for each option and how well they pay.

My experience with programming in Java was that about 50% of the available jobs are programs for some kind of financial institution. (But this may be irrelevant for you; I am describing Eastern Europe.) The companies usually need an analyst to talk with the customer and explain their needs to the programmers. If you have a good financial background, this could be the right job for you.

Programming could be risky, because it's not for everyone. You should probably try it first in your free time. (Hint: If you don't like programming in your free time, then the job probably is not the right one for you.) Also, after a few years the programmers usually hit the salary ceiling, and want to switch to managers or analysts. (Again, in Eastern Europe; I don't know how universal this is.) If you could start as an analyst, you would be already ahead of me in the IT career, and I am almost twice your age with about 20 years of programming experience.

I have a friend who works in IT and makes more money than I do despite being a worse programmer, because he is a specialist: in his case it is finance and databases; also he is willing to travel to a customer in a different country whenever necessary. So the lesson is -- don't throw your specialization away just because you want to go to IT; instead try finding a place where they will value your specialization.

Also, tell us where you live, so the people living near you can contact you. Networking: it's what brings the good jobs (as opposed to random jobs).

Comment author: Lumifer 14 August 2015 06:44:10PM 2 points [-]

I thought you had issues with the financial services industry -- if you are an accountant you can work as an accountant in any industry you want including non-profits.

Comment author: mrexpresso 23 July 2015 09:09:17PM *  12 points [-]

Hey Less Wrong.

I am Vicente and I am new here. I have been lurking for one or two months, and I created an account two or three weeks ago.

And right now I am reading Rationality: From AI to Zombies by Eliezer Yudkowsky.

Some facts about me:

  • I live in Quebec City, Canada
  • I am under 18, but you will never know my age
  • I love computer science; I know PHP, a little bit of C, and HTML/CSS (but those last two are not real programming languages)
  • I love and use free software (free as in freedom)
  • The distro I use is Debian GNU/Linux

And that's it!

Also: I wanted to know how to put a bio on my user page, like Eliezer's page.

Comment author: [deleted] 23 July 2015 10:11:47PM 8 points [-]

Hi Vicente!

To make a user profile, set up an account on the Wiki with the same name as your LessWrong account, then make a user page for it. After a day, LW will automatically use that to make your profile page.

How did you find out about Less Wrong? What's been the most interesting part of the writings so far?

Comment author: mrexpresso 24 July 2015 12:43:09AM 6 points [-]

I found out about LW in a French video and remembered the site name. Two or three months later I came to visit the site, read some posts, and found them interesting. After that I came back and discovered that the site was powered by Reddit's code, so I checked the Reddit source code on GitHub and discovered it was under an FSF (Free Software Foundation) approved license. So I decided to create an account; plus, I was already on Reddit.

As for the reading, I am only at page 23 (I just started), but so far my favorites are:

Why Truth? (Book 1, Section 1, Sub-section 3)

Feeling Rational (Book 1, Section 1, Sub-section 2)

And thanks for the help; I will try it later.

NOTE: Why is your username asd? Does it have something to do with autism spectrum disorder?

Comment author: [deleted] 24 July 2015 05:38:01AM 4 points [-]

It's interesting that you took such note of the fact that LW is powered by Reddit. Why was that so interesting?

NOTE: Why is your username asd? Does it have something to do with autism spectrum disorder?

No, not at all. It's a version of "asdf", which is the first thing you type if you start writing nonsense on your keyboard, and it doesn't have any explicit symbolism.

Comment author: mrexpresso 24 July 2015 12:53:56PM *  3 points [-]

Because I try to avoid non-free software and sites, but I make some exceptions for sites like Google because there are no good free alternatives.

But if there were, I would be the first to switch.

NOTE: free as in freedom

Comment author: Lumifer 24 July 2015 11:09:03PM 1 point [-]

I understand the concept of libre and non-libre software, but what are "non-free websites"?

Comment author: mrexpresso 25 July 2015 12:21:11AM *  2 points [-]

Non-free websites are websites that use non-free code (code under a non-free license, or proprietary code).

But my philosophy is that if there isn't any free alternative, I will use the site anyway.

If a good free alternative appears, I will be the first to switch.

NOTE: free as in freedom

Comment author: [deleted] 24 July 2015 09:34:14PM 1 point [-]

That's a very interesting code of honour!

Do you have anything in mind for how you'd like to contribute on LW, or any plans to?

Comment author: mrexpresso 24 July 2015 10:48:28PM *  2 points [-]

I think I will be contributing to the discussion section and maybe when I get enough karma I will see what I can post in the main section.

Comment author: [deleted] 15 August 2015 08:11:41AM 11 points [-]

Hi everyone.

I'm about to start my second year of college in Utah. My intent is to major in math and/or computer science, although more generally I'm interested in many of the subjects that LessWrongers seem to gravitate towards (philosophy, physics, psychology, economics, etc.)

I first noticed something that Eliezer Yudkowsky posted on Facebook several months ago, and have since been quietly exploring the rationality-sphere and surrounding digital territories (although I'm no longer on FB). Joining LessWrong seemed like the obvious next step given the time I had spent on adjacent sites. I'm here solely out of curiosity and philosophical interest.

Thanks to Sarunas and predecessors for the welcome page, and the LW community more generally. I look forward to being a part of it.

Comment author: Vladimir_Nesov 15 August 2015 07:13:38PM 2 points [-]

I'm here solely out of curiosity and philosophical interest.

And if you did in fact have a secret agenda, you wouldn't reveal it.

Comment author: [deleted] 15 August 2015 07:53:50PM 6 points [-]

Psst, it's way more fun to treat everyone on LW as having a secret agenda.

Comment author: Stephen_Cole 15 August 2015 03:43:21PM 3 points [-]

Exciting! If I were in your place I would look at the growing field of causal inference, which lives at the interface of statistics, computer science, epidemiology and economics. See the books by Hernan and Robins (Causal Inference) and Pearl (Causality), as well as the Journal of Causal Inference, edited by Judea Pearl and Maya Petersen.

Comment author: [deleted] 15 August 2015 08:04:08PM 3 points [-]

Thanks for the recommendations (esp. Hernan and Robins). I'll definitely take a look.

Comment author: csandon 29 August 2015 05:38:58AM 10 points [-]

Hi, I am a graduate student working on a PhD in math. My journey here started when I took a moral philosophy course as an undergrad that made me think about what I should do. I decided that I should do my best to improve the world, and I eventually decided that existential risk mitigation was the highest-priority improvement. Researching that led me here; I lurked for a few years, and now I have finally made an account.

I am hoping to get some insight here as to whether it would be most effective for me to work on the AI friendliness problem, donate money, or something else. I am also interested in learning how to manage routine aspects of my life better.

Comment author: Della 12 August 2015 12:36:06AM 10 points [-]

Hello! I'm Alex, from Maryland, but I go to college in Ithaca, NY, where I am working on my math major/computer science minor. Way back when, a few of my friends kept talking about how great HPMOR was, so I started reading and I loved it. It is one of my all-time favorite stories. As I was reading it, I was very interested by all the ways Harry knew how to think right, and then one of my friends recommended the sequences and I read them all! Except for metaethics and quantum stuff.

I really enjoyed the sequences. They changed how I think. I managed to climb out of the agnostic trap of "you can neither prove nor disprove the existence of a deity". I plan on becoming even more rational. I've heard CFAR is a good resource.

I had been reading the posts on the main page for a while when I saw the most recent census and felt guilty about taking it without an account, so I made one but haven't used it until now. I didn't feel right commenting in other places when I hadn't introduced myself, but I am finally done putting it off!

Comment author: anna_macdonald 17 October 2015 12:44:09AM 9 points [-]

Hi LWers.

My brothers got me into HPMOR, I started reading a couple sequences, switched over to reading the full Rationality: AI to Zombies, and recently finished that. The last few days, I've been browsing around LW semi-randomly, reading posts about starting to apply the concepts and about fighting akrasia.

I'm guessing I'm atypical for an LW reader: I'm a stay-at-home mom. Any others of those on here?

Comment author: Alicorn 17 October 2015 01:40:34AM 5 points [-]

I'm not a mom yet but I'm effectively a house spouse :)

Comment author: Gram_Stone 17 October 2015 02:03:59AM *  3 points [-]

There are definitely a lot of parents on LessWrong. I'm sure there are at least a few stay-at-home moms.

In fact, 18.4% of the participants in the 2014 LW Survey have children, and 0.5% (8 people) describe themselves as 'homemakers.'

Comment author: anna_macdonald 17 October 2015 02:49:29AM 3 points [-]

Thanks for the link! I made a (brief, low effort) attempt to find that post earlier, but only came across the census surveys, not the results.

Heck, there's even one survey respondent who has more kids than I do. Cool beans.

Comment author: Vaniver 17 October 2015 06:42:56AM 2 points [-]

Welcome!

How many kids, and how old are they?

Comment author: anna_macdonald 20 October 2015 10:54:52PM 3 points [-]

6... 7 if you count my adult step-daughter (who I didn't really help raise). Ages 12, 11, 9, 7, 5, and 7-months.

Comment author: Vaniver 21 October 2015 02:57:35PM 1 point [-]

Impressive! Both of my parents came from huge households (7 and 8), but I had the more typical upbringing with only one sibling, who was only slightly older.

Comment author: anna_macdonald 21 October 2015 04:10:06PM 1 point [-]

My mom was one of 11, my dad one of 4; I am one of 7 myself. It definitely makes having a big family feel more natural.

Comment author: riparianx 26 August 2015 08:06:09PM 9 points [-]

Hi, I'm Alexandra. I'm turning 18 tomorrow, and I'm slowly coming to the conclusion that I have GOT to be more rigorous in my self-improvement if I'm going to manage to reach my ambitions.

I'm not quite a new member- I've lurked a lot, and even made a post a while back that got a decent number of comments and karma.

I discovered Less Wrong through HPMOR. It was the first time I'd read a story with genuinely intelligent characters, and the things in it resonated a lot with me. This was a couple of years ago. I've spent a lot of time here and on the various other sites the rationalist community likes.

I'm mostly posting this now because I'd like to get more involved. I recently read an article that said the best way to increase competency at a subject is to join a community revolving around the subject. I live in OKC, where I've never even HEARD of another student of rationality. The closest I've gotten is introducing my boyfriend to HPMOR.

I'm a biology student at a community college near my living space. I'm very good at biology, english, philosophy, etc. I'm really, REALLY bad at chemistry/physics and math. I've done some basic research into what makes a person suck at mathematical things, but it's been frustratingly low on insights. Most of the time, it's resulted in "you need to practice! you need to learn mathematical thinking!" which is objectively true, but practically, a little more detail in what to do about it would be nice. Practice hasn't really seemed to help too much beyond working problems. Give me an equation and variables and I can do the math. But I can't EXPLAIN anything, or apply it to non-obvious problems involving it. This is seriously getting in the way of both my biology studies and my study of rationality. I took general chemistry 1 twice to get a low B. I'm in the first two weeks of general chemistry 2 and it takes ages to get what seems like basic concepts. When I discovered I magically had a B in College Algebra, I suspected the professor curved the grade without telling us. I withdrew from precalc after three weeks because I realized I couldn't cope.

I'm hoping to get into contact with some of the more mathematically inclined people here who are willing to help. I considered emailing a few of the higher-profile contributors to the community, but frankly, they're intimidating and the idea is very scary to my inner caveman worrying about being kicked out of the tribe.

I have some pretty lofty goals for my future research- I want to go into genetically modified organisms, and try to improve nutrition and caloric intake in parts of the world where that sort of thing is difficult to get. Reducing scarcity in our society seems like a good start to a general boost in the "goodness" of the world. But there is absolutely no way I can succeed at this if I can't get a good handle on math and chemistry. My skill at the lower levels of biology is only going to carry me so far.

I've probably rambled enough, so thanks if you took the time to read. If, for some strange reason, you feel a pull towards helping a struggling student get a grasp on abstract thinking, I urge you to give into the temptation because oh god I need the help.

Comment author: CCC 06 October 2015 10:28:38AM 2 points [-]

Hi, Alexandra!

I'm really, REALLY bad at chemistry/physics and math. ... Practice hasn't really seemed to help too much beyond working problems. Give me an equation and variables and I can do the math. But I can't EXPLAIN anything, or apply it to non-obvious problems involving it.

Okay... I am one of those people who is really good at math. Of course, I cannot be certain, but I suspect that the trouble here might be that you failed to grasp some essential point way, way back at the early stages of your mathematical education.

So, let's see how you handle a non-obvious problem. In answering this question, I'd like you to show me, as far as possible, your entire reasoning process, start to finish; the more information you can give, the more helpful my further responses can be.

The question is as follows: John is on his way to an important meeting; he has to be there at noon. Before leaving home, he has calculated what his average speed has to be to arrive at his meeting on time. When he is exactly half-way to his destination, he calculates his average speed so far, and to his dismay he finds that it is half the value that it needs to be.

How fast does John need to travel on the second half of his journey in order to reach his destination on time?

Comment author: [deleted] 06 October 2015 04:51:24AM *  2 points [-]

Hello, Alexandra.

I also struggle with the math thing. My secret to success is practicing until I'm miserable, but these things also help:

  1. Read layman books about mathematical history, theory, and research. It ignites enthusiasm. I recommend James Gleick's book Chaos, and his book The Information. He has a talent for weaving compelling narratives around the science.

  2. Learn a little bit of programming. While coding is frustrating in its own right, I find that it forces me to think mathematically. I can't leave steps out. I'm learning Python right now, and it's a good introductory language (I'm told).

  3. Explain it to your cat. I'm only mostly kidding. I've found that tutoring lower-level math has helped my skills in calculus and statistics. Learning to walk through the problems in a coherent way, so that a moody sixth-grader can understand it, is tremendously helpful.

I'd love to work together on exploring mathematical concepts. If you'd like to collaborate, hit me up sometime.

Also: if you like HPMOR, you should read Luminosity. It is a rationality-driven version of Twilight that's actually really good.

Comment author: riparianx 18 October 2015 05:21:23AM 1 point [-]

I will do that. I think I may actually have a copy of Chaos lying around. I've actually read (most of) Luminosity- I lost my place in the story at one point due to computer issues and never got back to it.

I tried Codecademy once, didn't find it that interesting. I don't think it used Python, though. I'll check it out. Programming is in general very useful.

If I can find someone to tutor, I'll try that. It certainly can't hurt. Thank you!

Comment author: PAM606 11 August 2015 06:10:41PM *  9 points [-]

Well since I'm procrastinating on important things I might as well use this time to introduce myself. Structured procrastination for the win!

Hello everyone. I have been poking around on Less Wrong, Slate Star Codex and related places for around three to four years now, but mostly lurking. I have gradually become more and more taken with the risks of artificial intelligence orders of magnitude smarter than us Homo sapiens. In that respect, I'm glad that the topic of super-intelligent AI has taken off in the mainstream media and academia. EY isn't the lonely crank with no real academic affiliation anymore, a nerdy Cassandra of his time spewing nonsense on the internet. From what I gather, status games are so cliche here that they're not cool, but with endorsements by people like Hawking and Gates, people can't easily dismiss these ideas anymore. I feel like this is a massively good thing, because with these ideas up in the air, so to speak, even intelligent AI researchers who disagree on these topics will probably not accidentally build an AI that will turn us all into paper clips to maximize happiness. That is not to say that there don't exist numerous other failure pathways. Maybe someday notions such as I. J. Good's idea of an intelligence-improving feedback loop will make their way into standard AI textbooks. You don't have to join the LW sub-community to understand the risks, and you don't have to read through the Sequences and all that. IMO, the greatest good Less Wrong has done for the world so far is to propagate and legitimize these concerns. I'm aware of the other key ideas in the memespace of LessWrong (rationality and all that), but it's hard enough to get the general public and other academics and researchers to take concern about super-intelligent AI as an existential risk seriously without all sorts of other ideas outside of their inference bubble.

Intellectually, my background is in physics (currently studying, along with the requisite math you pick up from physics). I have been reading philosophy for a ridiculously long time (around seven years now), although as a part-time hobby. Probably like most people here, I have an incurable addiction to the internet. I also read a lot, in varied intellectual fields. I read a lot of fiction, anything from Milton to YA books. Science fiction and fantasy are probably responsible for why I find transhumanist notions so easy to swallow: read enough Peter F. Hamilton and Greg Egan, and things like living forever and super-intelligent machines are downright tame in comparison. I like every academic subject (gender studies doesn't count): neuroscience, economics, computer science... you name it. Even "fluffy" stuff like sociology and psychology and literature. I am doomed to be caught between the two cultures (C. P. Snow).

As to the stuff regarding rationality and cognitive biases: while the scientific evidence wasn't in until fairly recently, Hume anticipated it all centuries ago. Now, I know Less Wrong isn't very impressed with a priori armchair philosophizing without a scrap of evidence, but I have to disagree, on account of correct theories being much easier to build off empirical data: deducing the correct theory to explain a natural phenomenon without any experimental data is much, much harder. Hume had a huge possibility space, while modern psychologists and cognitive scientists have a much smaller one. Let's not forget Hume's most famous quote: "If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion." I honestly can't say I was as surprised by the framework presented in the series as most people were, but it's sure nice to find a community that thinks along the same lines I do! A lot of the tactics for applying these ideas so I can overcome these biases were very nice and welcome. My favorite aspect of LW has to be that people have an agreed framework for discussing things, and in theory we can come to agreement. Debating is one of my favorite things to do, and frankly most people are not worth arguing with; it is a waste of time.

I'm interested in contributing to the study of friendly AI and have some ideas regarding it, so I might post stuff I'm thinking about here in the future. Please feel free to criticize such posts to your heart's content. I appreciate feedback much more than I care about slights or insults, so feel free to be rude. My ideas are probably old or wrong anyway; I haven't had time to look through all the literature presented here or elsewhere.

Lastly, I should mention I have been active in the LessWrong IRC room. If you want to find me, I'm there. Also, if lukeprog sees this: I really liked the literature summaries you post sometimes. They have been a huge help and saved me a ton of time in my own exploration of the scientific literature.

Comment author: JosephRogero 15 February 2016 02:06:33AM 8 points [-]

Hello from Houston, Texas! I've been following LessWrong for several years now, slowly working my way through the Sequences. I'm an aspiring fantasy/sci-fi writer, martial artist, and outdoorsman and I am overjoyed to be a part of the LW community. It's hard for me to say exactly when I first 'clicked' on rationality, but the Tsuyoku Naritai post certainly struck a chord for me.

A few months ago, I attended a LessWrong meetup in Austin. I enjoyed the meetup immensely, not least because it also happened to be a Petrov Day celebration. I'd like to attend LW meetups more frequently, but I live in Spring (north Houston) and the Austin meetup is a 3+ hour drive for me.

So, I've decided to start a Houston meetup group. According to some (admittedly old) statistics, the number of visitors to LessWrong from the Houston area is over 9000, and I think this is more than enough to create an enjoyable meetup group.

Our first meetup will be Saturday, February 20 at the Black Walnut Cafe in the Woodlands, TX. It will start at 1:00PM and go until 4:00PM (or later, if enough people show up and are interested in staying).

If you're interested, please reply below so I know who to expect!

Comment author: [deleted] 15 February 2016 03:17:01PM 1 point [-]

Hi, and welcome!

I'm hoping to start a Meetup group sometime this spring or summer. If you're amenable to it, I may bug you afterwards and see how your meetup went.

Comment author: Vamair0 19 September 2015 09:47:30AM *  8 points [-]

Hello. My name is Andrey; I'm a C++ programmer from Russia. I've been lurking here for about three years. Like many others, I found this site through a link from HPMOR. The biggest reason for joining in the first place was that I believe the community is right about a lot of important things, and the comments are of a quality that's difficult to find elsewhere on the Net. I've already finished reading the Sequences, and right now I'm interested in ethics; I believe I've got a few ideas to discuss.

For my origin story as a rationalist: as it often happens, it all started with a crisis of faith. Actually, the second one. The first was a turn from Christianity to a complicated New Age paradigm I may explain later. The second was prompted by the question of why I believe some of the things I believe. While I used to think there was a lot of evidence for the supernatural, I started trying to verify it, and also read religious apologetics to evaluate the best arguments they have. Yup, they were bad. The world doesn't look like there exists a powerful interventionist deity. (And even if the miracles they talk about happening right now are true miracles, all of them are better explained by not-at-all omnipotent or omniscient, slightly magical fairies.) This, coupled with my interest in physics and biology, made me think there are problems that are both huge and don't get the attention they deserve. Like, y'know, death, or catastrophic changes. And all we've got are some resources, some understanding of how things actually are, and a limited ability to cooperate with each other.

I'm looking forward to discussing stuff with people here.

Comment author: [deleted] 06 October 2015 04:42:40AM 4 points [-]

Hi there Andrey!

I am also a former apologist (aspiring, anyway; teenage girls aren't taken very seriously by theologians). I clung to my faith so hard. It's amazing how much evidence there is against the classical notion of the supernatural. It's a snowball effect: every piece stripped away another aspect of my fundamentalism, until I was a socially-liberal Christian. Then, an agnostic theist. Then, an agnostic atheist.

I'm also looking forward to getting involved with the community. The high standards for conversation here are intimidating, but it's exciting, too.

Comment author: BiasedBayes 13 September 2015 03:51:11PM *  8 points [-]

Hello all!

I'm a medical student and a researcher. My interests are consciousness, the computational theory of mind, evolutionary psychology, and medical decision making. I bought Eliezer's book and found this place because of it.

I want to thank Eliezer for writing the book; it's the best writing I have read this year. Thank you.

Comment author: hyporational 14 September 2015 02:51:47AM 5 points [-]

Welcome! I'm an MD and haven't yet figured out why there are so few of us here, given the importance of rationality for medical decision making. It's interesting that at least in my country there is zero training in cognitive biases in the curriculum.

Comment author: Anders_H 14 September 2015 04:16:31AM *  7 points [-]

I have the Irish equivalent of an MD; "Medical Bachelor, Bachelor of Surgery, Bachelor of the Art of Obstetrics". This unwieldy degree puts me in fairly decent company on Less Wrong.

I may be generalizing from a sample of one, but my impression is that medicine selects out rationalists for the following reasons:

(1) The human body is an incompletely understood highly complex system; the consequences of manipulating any of the components can generally not be predicted from an understanding of the overall system. Medicine therefore necessarily has to rely heavily on memorization (at least until we get algorithms that take care of the memorization)

(2) A large component of successful practice of medicine is the ability to play the socially expected part of a doctor.

(3) From a financial perspective, medical school is a junk investment after you consider the opportunity costs. Consider the years in training, the number of hours worked, the high stakes and high pressure, the possibility of being sued etc. For mainstream society, this idea sounds almost contrarian, so rationalists may be more likely to recognize it.

--

My story may be relevant here: I was a middling medical student; I did well in those of the pre-clinical courses that did not rely too heavily on memorization, but barely scraped by in many of the clinical rotations. I never had any real passion for medicine, and this was certainly reflected in my performance.

When I worked as an intern physician, I realized that my map of the human body was insufficiently detailed to confidently make clinical decisions; I still wonder whether my classmates were better at absorbing knowledge that I had missed out on, or if they are just better at exuding confidence under uncertainty.

I now work in a very subspecialized area of medical research that is better aligned with rational thinking; I essentially try to apply modern ideas about causal inference to comparative effectiveness research and medical decision making. I was genuinely surprised to find that I could perform at the top level at Harvard, substantially outperforming people who were in a different league from me in terms of their performance in medical school. I am not sure whether this says something about the importance of being genuinely motivated, or if it is a matter of different cognitive personalities.

In retrospect, I am happy with where this path has taken me, but I can't help but wonder if there was a shorter path to get here. If I could talk to my 18-year old self, I certainly would have told him to stay far away from medicine.

Comment author: EHeller 14 September 2015 05:43:44AM *  6 points [-]

I don't think medicine is a junk investment when you consider the opportunity cost, at least in the US.

Consider my sister, a fairly median medical school graduate in the US. After 4 years of medical school (plus her undergrad) she graduated with 150k in debt (at 6% or so). She then did a residency for 3 years making 50k a year, give or take. After that she became an attending with a starting salary of $220k. At younger than 30, she was in the top 4% of salaries in the US.

The opportunity cost is maybe ~$45k × 4 years = $180k, plus a direct cost of ~$150k, so roughly $330k "lost to training." Against that, however, you get 35+ years of making $100k a year more than some alternative version of yourself that didn't do medical school. Depending on investment and loan decisions, you've recouped your investment by about 5 years out.
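The arithmetic above can be sketched as a toy model (using the comment's own ballpark figures; loan interest, taxes, and investment returns are ignored, which is exactly why the break-even year shifts with those decisions):

```python
# Toy model of the comment's figures: $45k/yr forgone for 4 years of school,
# $150k direct cost, residency at $50k/yr for 3 years, then a ~$100k/yr
# premium over the alternative career. Interest and investment are ignored.

def cumulative_gap(years_after_graduation):
    """Net financial position of the doctor vs. the non-doctor path."""
    gap = -(45_000 * 4 + 150_000)        # the ~$330k "lost to training"
    for year in range(1, years_after_graduation + 1):
        if year <= 3:                    # residency years: $50k vs. $45k baseline
            gap += 50_000 - 45_000
        else:                            # attending years: ~$100k/yr premium
            gap += 100_000
    return gap

for y in (3, 5, 7, 10):
    print(y, cumulative_gap(y))
```

Under these simplified assumptions the gap turns positive a few years into the attending salary; interest on the loans and the alternative career's own salary growth move that crossover point around, which is the comment's caveat.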

Now, if you don't like medicine and hate the work, you've probably damned yourself to doing it anyway. Paying back that much loan is going to be tough working in any other job. But that is a different story than opportunity cost.

Comment author: hyporational 14 September 2015 04:13:19PM *  4 points [-]

Huh. My experience is somewhat similar to yours in the sense that I never was a big fan of memorization, and I'm glad that I could outsource some parts of the process to Anki. I also seem to outperform my peers in complex situations where ready-made decision algorithms are not available, and outperformed them in the few courses in med school that were not heavy on memorization. The complex situations obviously don't benefit from Bayes too much, but they benefit from understanding the relevant cognitive biases.

The medical degree is a financial jackpot here in Finland, since I was actually paid to study, and I landed in one of the top 3 best-paying professions in the country straight out of med school. Money attracts every type, and the selection process doesn't especially favor rationalists, who happen to be rare. It just baffles me how the need for rationality doesn't become self-evident for med students in the process of becoming a doctor, not to mention after that.

Comment author: Lumifer 14 September 2015 04:32:09PM 3 points [-]

how the need for rationality doesn't become self-evident for med students in the process of becoming a doctor,

Is it just a matter of terminology? I would guess that all med students will agree that they should be able to make a correct diagnosis (where correct = corresponding to the underlying reality) and then prescribe appropriate treatment (where appropriate = effective in achieving goals set for this patient).

Comment author: hyporational 14 September 2015 04:44:57PM *  2 points [-]

Whatever the terminology, they should make the connection between the process of decision making and the science of decision making, which they don't seem to do. Medicine is like this isolated bubble where every insight must come from the medical community itself.

I found Overcoming Bias and became a rationalist during med school. Finding the blog was purely accidental, although I had recognized the need to understand my own thinking, so I'm not sure what form this need would have taken under slightly different circumstances.

Comment author: BiasedBayes 14 September 2015 12:15:30PM 3 points [-]

Thanks hyporational! It is exactly the same here. Cognitive biases, heuristics, and even Bayes' theorem (normative decision making) are not really taught here.

Also, I once argued against a pseudoscientific treatment (for mental illnesses) and my arguments were completely ignored by 200 people because of argumentum ad hominem and attribute substitution (who looks like he is right vs. looking at the actual arguments). Most people don't know what a good argument is or how to think about the probability of a statement.

Interesting points, Anders_H. I have to think about those a little bit.

Comment author: hyporational 14 September 2015 04:39:41PM 5 points [-]

We were taught Bayes in the form of predictive values, but this was pretty cursory. Challenging the medical professors' competence publicly isn't a smart move career-wise, unless they happen to be exceptionally rational and principled, unfortunately. There's a time to shut up and multiply, and a time to bend to the will of the elders :)
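"Bayes in the form of predictive values" is just Bayes' theorem applied to a diagnostic test. A minimal sketch (the sensitivity, specificity, and prevalence below are made-up illustrative numbers, not values from any real test):

```python
# Positive predictive value: P(disease | positive test), by Bayes' theorem.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(disease | positive test) from test characteristics and base rate."""
    true_pos = sensitivity * prevalence              # P(+ and disease)
    false_pos = (1 - specificity) * (1 - prevalence)  # P(+ and no disease)
    return true_pos / (true_pos + false_pos)

# A 90%-sensitive, 95%-specific test for a disease with 1% prevalence:
ppv = positive_predictive_value(0.90, 0.95, 0.01)
print(round(ppv, 3))  # most positives are still false positives at this base rate
```

The low result despite a "good" test is the classic base-rate point: predictive values depend on prevalence, not just on the test itself.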

Comment author: Lumifer 14 September 2015 04:47:01PM 7 points [-]

Challenging the medical professors' competence publicly isn't a smart move career-wise

Reminds me of:

One day when I was a junior medical student, a very important Boston surgeon visited the school and delivered a great treatise on a large number of patients who had undergone successful operations for vascular reconstruction.

At the end of the lecture, a young student at the back of the room timidly asked, "Do you have any controls?" Well, the great surgeon drew himself up to his full height, hit the desk, and said, "Do you mean did I not operate on half the patients?" The hall grew very quiet then. The voice at the back of the room very hesitantly replied, "Yes, that's what I had in mind." Then the visitor's fist really came down as he thundered, "Of course not. That would have doomed half of them to their death."

God, it was quiet then, and one could scarcely hear the small voice ask, "Which half?"

Comment author: BiasedBayes 14 September 2015 05:36:35PM 2 points [-]

Yep :) You are definitely right career-wise. The problem for me was the 200 other people who would absorb a completely wrong idea of how the mind works if I didn't say anything. Primum non nocere.

But yeah, this was 4 years ago anyway... I just wanted to mention it as an anecdote of bad general reasoning and biases :)

Comment author: htimsxela 21 August 2015 07:57:27PM 8 points [-]

Hello LW,

My name is Alex, and while I first discovered LW 2-3 years ago, I have only visited the site sporadically since then. I have always found the discussion here intriguing and insightful, but never found myself motivated enough to dedicate time to joining the community (until now!).

I'm a 26-year-old Canadian with an undergraduate degree majoring in chemistry and minoring in philosophy (with a healthy dose of physics on the side). I have always been very analytical and process-driven, and I have used that to fuel my creativity and to develop a more thorough understanding of the world we find ourselves a part of. I have been self-employed since graduating, with the eventual goal of returning to school for a graduate degree.

In my undergrad, my strengths and interests were in synthetic/materials chemistry, as well as organic chemistry. I spent time working for a research group that specialized (largely) in group 14 nano-material chemistry, which I enjoyed immensely. The areas of philosophy I concentrated on were philosophy of science, computing & AI, theory of mind, and existentialism. In short, I avoided the 'historical overview' philosophy courses in favour of those which were more relevant to the rapidly changing technological world (not to say philosophers of times past are uninteresting or have no current relevance, but I think the LW audience will empathize).

I expect that my contributions here will, in some sense, help me parse out what I would like to dedicate my future institutional studies to. I value knowledge and truth, as well as academic integrity and humility. I am put off by individuals who are unquestioning or unable to reason logically and effectively, so I hope I will find a good home here. The toolbox of logical reasoning has allowed mankind to build itself up out of the primordial muck, and it seems that mastery of these tools is essential for continued advancement (and perhaps even survival). So, in addition to the above, I hope that my time here will allow me to continue honing my own tools of logic.

Comment author: EngineerofScience 24 July 2015 10:07:27PM 8 points [-]

I joined lesswrong because my friends suggested it to me. I really like all the articles and the fact that the comments on the articles are useful and don't have lots of bad language. This really surprised me.

Comment author: cameroncowan 24 July 2015 03:38:30AM 8 points [-]

I think I've caused enough kerfuffles around here that many people know me, but I'm Cameron. I've been on the site almost a year, I think. BA and MA in Political Science. I have a regular interest in philosophy, and I found out about the site from a disparaging article on Slate.com. I'm one of the weird spiritual people on here practicing Western esoterica. In the past I've worked in media and PR. Currently, I'm a novelist in Tacoma, WA, USA and host of The Cameron Cowan Show, every Monday and Friday on YouTube (fresh shows in August!). For more information, clips, and All The News You Need To Know In 10 Minutes or Less (and why you should care about it), see me at CameronCowan.net! Thanks for reading!

Comment author: dglukhov 28 December 2016 09:29:51PM *  7 points [-]

Hello all,

I found this site from a link in the comments section of an SCP Foundation post, which consequently linked to one of Eliezer's stranger allegorical pieces about the dangers of runaway AI intelligence getting the best of us. I've been hooked since.

Thanks to this site, I'm relearning university physics through Feynman, have plans to pick up a couple of textbooks from the recommended list, and plan on meeting some hopefully intellectually stimulating people in person if any of the meetups you guys seem to hold regularly ever make it closer to the general Massachusetts area.

I recently graduated with a B.S. in Chemistry, with the now odd realization that I haven't really learned much during my time at university. I hope participating here will help fill that void of knowledge.

Furthermore, if I'm lucky, I might get to contribute to the plethora of useful discussions that seem to populate this site. If I'm even luckier, those contributions will be positive. Let's just hope I learn fast enough to make sure luck isn't the deciding factor for such an outcome.

I am also curious as to the level of regular activity this site receives, perhaps a link to some statistics? Any reply would be greatly appreciated.

Also, I don't know if this is really relevant here, but I'd like to mention that I have a weird dream of someday inventing direct mental communication between people that doesn't involve the use of language, or at the very least help such a project along if any exist. I don't know if anybody will care for such news, or even if this is a realistic goal to strive for considering the multitude of other priorities I have in life, but hey, it is what it is. Supposedly, meeting such a goal would at least require some optimization of my own ability to think clearly and correctly. Yet another reason to come here, no doubt.

Well, here goes nothing! Hi guys!

Comment author: Raemon 29 December 2016 05:27:56PM 1 point [-]

Welcome!

Comment author: [deleted] 06 October 2015 04:33:26AM *  7 points [-]

Hello!

I became interested in psychology at a young age, and irritated everyone around me by reading (and refusing to shut up about) the entire psych section of my local library. I had a difficult time at that age separating the "woo" from actual science, and am disappointed that I focused more on "trivia learned" and "books read" than actual retention. At any rate, I have a pretty good contextual knowledge of psychology, even if my specific knowledge is shaky. I put this knowledge to good use for seven years while I worked with developmentally delayed children.

I discovered Less Wrong in 2011 (or 2009, or 2007; I actually have three distinct memories of discovering it at different times), but was turned off by the trend of atheism. I know how ridiculous that is for an aspiring rationalist, to reject evidence because it's uncomfortable. The "quiet strain" was too much, and I found the community exclusive and hard to break into. This site was not responsible for the disintegration of my faith, but it was another nudge in that direction. I don't know how to quantify my beliefs anymore; I think the God/No-God dichotomy is irrelevant. I'm perfectly willing to accept evidence of a superintelligent, superpowerful being. But assigning characteristics like "supernatural" to it is wrong. If such a creature exists, it's merely something we don't understand yet.

I am a lifelong fan of Harry Potter, so I've been keeping up with HPMOR off and on. I've decided to involve myself in this community now because developing connections with rational people has become a priority. There are so many people having interesting, rational conversations here, and I'd like to meet them. I'd like to participate in the public eye, as egotistical as that may sound. The concepts of rationality are getting mainstream attention, and those public-forum debates will become more and more crucial. I intend to be involved.

EDIT: I used the phrase "at any rate" too often

Comment author: ThePrussian 29 July 2015 12:13:48PM 7 points [-]

Hi everyone.

I've already posted a couple of pieces - probably should have visited this page first, especially before posting my last piece. Well, such is life.

I headed over to LessWrong because I was/am a bit burned out by the high-octane conversations that go on online. I've disagreed with some things I've read here, but never wanted to beat my head - or someone else's - against a wall. So, I'm here to learn. I like the sequences and have picked up some good points already - especially about replacing the symbol with the substance.

Question - what's the etiquette about linking stuff from one's own blog? I'm not trying to do self-promotion here, but there are one or two ideas I've developed elsewhere, and it would be useful to refer to them.

Comment author: Viliam 30 July 2015 12:07:04PM 3 points [-]

what's the etiquette about linking stuff from one's own blog?

My guess is: it is okay, if it would be okay to post the same content here. Please provide a short summary when linking.

Comment author: Username 30 July 2015 02:47:22PM 1 point [-]

Welcome!

Comment author: aaq 29 January 2016 08:08:59PM 6 points [-]

Hello from Boston. I've been reading LW since some point this summer. I like it a lot.

I'm an engineering student and willing to learn whatever it takes for me to tackle world problems like poverty, hunger and transmissible diseases. But for now I'm focusing my efforts on my degree.

Comment author: curtisrussell 04 November 2015 09:39:23PM 6 points [-]

Hello everyone! I came to Less Wrong as a lurker something like two years ago (perhaps more; my grasp on time is... fragile at best), binged through all of HPMOR that was up then, and waited with bated breath for the rest. After a long time spent lurking, reading the blogs and then the e-book, I decided I wanted to do more than aimlessly wander through readings and sequences.

So here I am! I posted to the lounge on reddit, and now I'm posting here. The essence of why I'm posting now is simple: I want to start down a road towards aiding in the work towards FAI. I graduated a year and a half ago, and I want to start learning in a directed and purposeful way. So I'm here to ask for advice on where and how to get started, outside of standard higher education.

Comment author: John_Maxwell_IV 20 November 2015 04:29:03AM *  3 points [-]

Welcome! MIRI created a research guide for people interested in helping with FAI.

Comment author: Marko 22 October 2015 08:38:31PM 6 points [-]

Hello LessWrong!

I'm Marko, a mathematician from Germany. I like nerding around with epistemology, decision theory, statistics and the like. I've spent a few wonderful years with the Viennese rationality community and got to meet lots of other interesting and fun LessWrongians at the European Community Weekend this year. Now I'm in Zürich and want to build a similar group there.

Thanks for giving me so much food for thought!

Comment author: AleksTK 09 September 2015 07:43:31AM 6 points [-]

Hello LW,

I'm an aspiring rationalist from a community called PsychonautWiki. Our intent is to study and catalog all manner of altered states of consciousness in a legitimate and scientific manner. I am very interested in AGI and hope to understand the architecture and design choices of current major AGI projects.

I'll probably start a discussion for you guys tomorrow.

Aleks

Comment author: Viliam 09 September 2015 09:04:31AM *  4 points [-]

Hi Aleks!

Have you read "Mysticism and Pattern-Matching" at Slate Star Codex? What is your opinion?

Comment author: AleksTK 18 October 2015 03:26:35AM *  1 point [-]

Just read it. Fascinating.

https://psychonautwiki.org/wiki/Geometry

You might want to look into level 8B and 8A geometry.

Comment author: PeterCoin 16 August 2015 09:22:33PM *  6 points [-]

Hey y'all, I come here both as a friend and with an agenda. I'm scary.

See I have a crazy pet theory... (and yes it's a TOE, fancy that!)

...and I'd love to give it a small home on the Internet. Here?

I'd like to share it with you because this community seems to be the proper blend of open-minded and skeptical, which is what the damn thing needs.

Anyways, I've lurked for quite a while, and you guys have been great at opening my mind to a lot of things. I figure this might be good enough and crazy enough to give something back.

As a personal note, I'm currently an engineer who is wondering if he should go back to school to become an academic. When I was a college student at a big faceless university, I was too awkward, clueless, and erratic to navigate the system in a way that got me attention, so I grabbed my degree and ran.

BTW, I'm not one of those foaming-at-the-mouth mofos who will debate endlessly and fruitlessly in an attacking manner toward anyone who dares criticize his crackpot theory. I'm more like "man, why does this idea have to be so damn compelling, better get it out on the web". I've also posted it in very few places thus far; I do not intend to spray it all over the internet.

Comment author: Jiro 17 August 2015 03:10:06PM 2 points [-]

BTW, I'm not one of those foaming-at-the-mouth mofos who will debate endlessly and fruitlessly in an attacking manner toward anyone who dares criticize his crackpot theory.

The response to your theory, though, will depend on whether it's one of those. And the response to "should I tell you my new theory" will depend on the fact that such theories have some probability of being one of those. Ultimately, you have to tell us the theory to know how we'll react.

Comment author: selador 10 August 2015 04:52:52PM 6 points [-]

Hi LW,

I got interested in rationality through the book Irrationality, then some others I can't remember, and later Thinking, Fast and Slow. Somehow I found HPMOR, which I loved, and through that, found this site. Other influences have included growing up with quite strongly religious parents (first win for the power of the question "but why do you believe that?"; first loss for thinking that because something was obvious to me I could snap my fingers and make it obvious to others).

What I'm doing: I'm in my twenties, working in the energy sector because I started following global warming and resource shortages when I was 16, thought it was an important area, and decided to go work in it. The things I have learnt from an engineering degree, a bit more life, and LW mean that I don't necessarily still believe it is THE area of importance, but as an area, I'm happy enough in it for now. My job basically involves lots of programming, modelling and data handling anyway, so that is fun! I get to encounter my biases in the work environment occasionally/regularly, as I have to work out how much confidence to have in the data available and in my various theories. For my job at least, I do find attempting to debias useful at a day-to-day level, if not as useful as being a significantly better programmer would be.

So far on Less Wrong I have read about half the sequences, of which the most resonant for me was the one on cached thoughts. Whilst simple, it drew together a bunch of other points I'd learnt, and felt, really clearly, like a description of how I think. I'm reading the sequences from a link on here that I've put on my Kindle. This is pretty good, but I don't know how much shorter the A-Z version is. I do skip occasional bits. I feel that a little graph of the sequences, similar to the very simple one on http://hyperphysics.phy-astr.gsu.edu/hbase/hframe.html, would help newbies navigate through them.

Anyway, I'm continuing to enjoy reading posts on here, but hope to start contributing by: a) Continuing to try and help others to learn (at least the basics of) this stuff b) Maybe setting up a meetup in Bristol, UK c) Posting some thoughts up to the hive mind if and when I have some worth sharing

Comment author: ShaneC 03 August 2015 03:50:34AM 6 points [-]

Hello from Canada! I study computer science and philosophy at the University of Waterloo. Above anything, I love mathematics. The certainty that comes from a mathematical proof is amazing, and it fuels my current position on epistemology (see below). My favourite mathematics courses so far have been the introductory course about proofs and a course about formal logic (the axioms of first-order logic, deduction rules, etc.). Philosophy has always been very interesting to me: I've taken courses about epistemology, ethics, and the philosophy of language; I am also currently taking a course about political philosophy, and am reading Nietzsche on the side. I also love to debate. Although I don't practice Christianity anymore, I loved debating about religion with my friends.

I have come to Less Wrong to talk about my epistemological views. They amount to a form of skepticism. I view (i.e. define) truth exclusively as the outcome of some rational system. I reject all claims unless they are given in terms of a rational system by which they can be deduced. Even when such a system is given, I would call the claim true only in the context of the rational system at hand and not (necessarily) under any other system.

For example, "2 + 2 = 4" is true when we are using the conventional meanings of 2, 4, +, and =, along with a deductive system that takes expressions such as "2 + 2 = 4" and spits out true or false. On the contrary, "2 + 2 = 4" is false when we use the usual definitions of 2, 4, and =, but define x + y as the (regular) sum of x and y minus one. This illustrates that the truth of a claim only makes sense once it has a precise meaning, axioms that are assumed to be true, and some system of deduction.
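The example above can be made concrete in a few lines of code, with the two interpretations of + as ordinary functions (a toy illustration of truth-relative-to-a-system, not a full deductive system):

```python
# "2 + 2 = 4" evaluated under two different interpretations of "+".

def plus_standard(x, y):
    """The conventional interpretation of +."""
    return x + y

def plus_shifted(x, y):
    """The comment's variant: the regular sum of x and y, minus one."""
    return x + y - 1

def claim_holds(plus):
    """Evaluate the claim "2 + 2 = 4" under a given interpretation of +."""
    return plus(2, 2) == 4

print(claim_holds(plus_standard))  # True
print(claim_holds(plus_shifted))   # False
```

The claim's truth value is a property of the (claim, interpretation) pair, which is the point being made.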

When a toddler sees the blue sky and asks his mother why the sky is blue and she responds with something about the scattering of light, he has a choice: either he accepts the system of scattering implies blueness, or he can ask again: "Why?" She might reply with something about molecules, etc... Eventually, the toddler seems to have two choices: either he must accept that the axioms of the scientific method are true just because or reject the whole thing for not being justified all the way through.

My view on epistemology is distinct from the above options. It wouldn't reject the whole system (useless; no knowledge) or truly believe in the axioms of the scientific method (naive; they could be wrong). It would appreciate the intrinsic nature of the ideas: that the scattering of light can imply that the sky is blue. It would view rational systems as tools that can be used and then put away, rather than things that have to be carried around your whole life.

What do you think about this? Can you suggest any related readings?

Comment author: Sarunas 07 August 2015 07:25:40AM 1 point [-]

This sounds similar to Coherence theory of truth.

Comment author: phl43 26 January 2017 09:15:24PM *  5 points [-]

Hi everyone,

I'm a PhD candidate at Cornell, where I work on logic and philosophy of science. I learned about Less Wrong from Slate Star Codex and someone I used to date told me she really liked it. I recently started a blog where I plan to post my thoughts about random topics: http://necpluribusimpar.net. For instance, I wrote a post (http://necpluribusimpar.net/slavery-and-capitalism/) against the widely held but false belief that much of the US wealth derives from slavery and that without slavery the industrial revolution wouldn't have happened, as well as another (http://necpluribusimpar.net/election-models-not-predict-trumps-victory/) in which I explain how election models work and why they didn't predict Trump's victory. I think members of Less Wrong will find my blog interesting or, at least, that's what I hope. I welcome any criticisms, suggestions, etc.

Philippe

Comment author: DryHeap 21 November 2016 03:35:10PM 5 points [-]

Hello all,

South Carolinian uni student here. I've been lurking for some time; once my desire to give input came to a boil, I decided to go ahead and make an account. Mathematics, CompSci, and various forms of biology are my intensive studies.

Less intense hobbies include music theory, politics, game theory, and cultural studies. I'm more of a 'genetics is the seed, culture is the flower' kind of guy.

The art of manipulation is fascinating to me; sometimes, when one knows one's audience, one must make non-rational appeals to persuade them. This is why I rarely consider any political movement to be ignorant when it makes certain non-rational claims; it is the sweet art of manipulation blossoming. Whether or not a certain political movement is using these tactics toward a beneficial end-game is up for debate, but I nevertheless stray from calling the heads of political philosophies 'stupid'. (Note: the followers may be useful idiots.)

Very nice forum. I appreciate the culture here, and these dialogues rank with Plato.

Comment author: Viliam 23 November 2016 09:02:39AM *  1 point [-]

Welcome!

Note: the followers may be useful idiots

I partially agree, but I believe there is usually no clear dividing line between "those who know, and use irrational claims strategically" and "the followers who drink the kool-aid".

First, peer pressure is a thing. Even if you consciously invent a lie, when everyone in your social group keeps repeating it, it will create an enormous emotional pressure on you to rationalize "well, my intention was to invent a lie, but it seems like I accidentally stumbled upon an important piece of truth". Or more simply, you start believing that the strong version of X is the lie you invented, but some weaker variant of X is actually true.

Second, unless there is a formal conspiracy coordination among the alpha lizardmen, it is possible that leader A will create and spread a lie X without explaining to leader B what happened, and leader B will create and spread a lie Y without explaining to leader A what happened, so at the end both of them are the manipulators and the sheep at the same time.

Comment author: DryHeap 29 November 2016 07:50:24PM 0 points [-]

Very good point. On a similar note: we often don't consider whether we have empirically tested what we, ourselves, believe to be true. Most often, we have not. I'd wager that we are all 'useful idiots' of a sort.

Comment author: niceguyanon 30 November 2016 03:49:24PM 0 points [-]

we are all 'useful idiots' of a sort.

It's sheep all the way up!

Comment author: Lumifer 30 November 2016 05:49:05PM 0 points [-]

Sheep all the way up, turtles all the way down, and here we are stuck in the middle!

Comment author: entirelyuseless 23 November 2016 03:14:11PM 0 points [-]

"Or more simply, you start believing that the strong version of X is the lie you invented, but some weaker variant of X is actually true."

That's true, but in most cases it is in fact the case that some weaker variant is true, and this explains why you were able to convince people of the lie.

That said, this process is not in general a good way to discover the truth.

Comment author: Viliam 23 November 2016 06:12:52PM 0 points [-]

I would still expect a shift towards the group beliefs; e.g. if the actual value of some x is 5, and the enemy tribe believes it's 0, and you strategically convince your tribe that it is 10... you may find yourself slowly updating towards 6, 7, or 8... even if you keep remembering that 10 was a lie.

Anyway, as long as we both agree that this is not a good way to discover truth, the specific details are less important.

Comment author: entirelyuseless 24 November 2016 02:40:29AM 0 points [-]

I agree with that, and that is one reason why it is not a good method.

Comment author: TheOnlyAu 02 June 2016 03:15:55PM 5 points [-]

Hi LW Users,

I apologise in advance for not having more to say initially, but I created an account on this website for one reason: I have one proposition/idea to put forth in the discussion section.

I would prefer to wait until I have twenty karma so that I may post the proposition/idea there, so I hope that your curiosity has been sparked enough; otherwise, let me know.

Thanks so much for reading :)

Comment author: gjm 02 June 2016 05:22:33PM -1 points [-]

Welcome. You will only accumulate karma by having people upvote your comments, so if your goal is as you describe then I'm afraid you'll have to participate in other ways too before you get to show us your idea. (Of course you could put it in a comment in the Open Thread or something if you can't wait.)

Comment author: TheOnlyAu 05 June 2016 04:17:21AM 1 point [-]

Where should I be commenting then? Right here? And where is the open thread? Thank you so much for your help and I look forward to it.

Comment author: Wind 10 February 2016 04:05:38PM *  5 points [-]

Hi. I live in Umeå, Sweden. I have been aware of Less Wrong for some time now, first through HPMoR, and more lately I have been reading posts that my friend recommended to me. I just recently decided that I want to join the discussion too, so I created this account to be able to comment.

I find it very useful to distinguish between what I call "debate" and "discussion":
"debate" = everyone involved is trying to win, where "win" usually means convincing the audience.
"discussion" = everyone involved is trying to learn the truth.
Less Wrong is obviously a place for discussion, but even in a discussion I find the above vocabulary useful. However, I don't know whether this distinction is a common use of these words. What words are commonly used for these concepts on LW?

I am currently thinking about The Worst Argument in the World. But I want to read some more before I decide if I have something relevant to contribute.

And I disagree with Yudkowsky's version of timeless physics. I might say something about that if I can just find a way to formulate what I want to say. (It is not a language problem. It is more that I first have to explain stuff about gauge and symmetries, and how sometimes you should not get rid of redundant variables just because you can.)

I am currently writing a thesis in Loop Quantum Cosmology. It is about alternative ideas about the beginning of the universe. It is really cool, but not as cool as it sounds. For several reasons I will probably not stay in this field. After my defence I don't know what to do. If someone has a job offer, or a career suggestion, let me know.

Comment author: Vaniver 10 February 2016 08:44:53PM 1 point [-]

Welcome!

After my defence I don't know what to do. If someone has a job offer, or a career suggestion, let me know.

How much programming have you done so far? In my experience physicists tend to make the transition to programming fairly well because they have lots of experience with modeling / reasoning from first principles / mathematical thinking.

Comment author: Wind 14 February 2016 05:08:09PM 1 point [-]

Yes, that is one plan. I have not done much programming, but I have done enough to know that this is something I am capable of learning.

Comment author: KevinGrant 28 November 2015 02:07:21PM 5 points [-]

Hi,

I'm a middle-aged computer scientist/philosopher, who specialized in artificial intelligence and machine learning back in the stone age when I was getting my degrees. Since then I've done a bit of work in probabilistic simulations and biologically inspired methods of problem solving, mostly for industry. I've recently finished writing a book about politics, although God knows if I'll ever sell a copy. Now I'm into a bit of everything. Politics. Economics.

I came here looking for input into a conlang project that I'm working on. Basically it involves the old Sapir-Whorf/Eprime/Loglan dream of creating a language that's better suited for rational cognition than English, and I'm looking for linguistic mechanisms that might aid in this and that need to be built in from the bottom up (since surface mechanisms can be added later). I already know of the three conlangs mentioned above, although I don't speak them, so I'm looking for ideas that aren't contained therein, or that if they are might have been missed by a person without a deep knowledge of the languages. I did a search of the archives here and saw some discussion around this general topic, but nothing of immediate use, although I could easily have missed something.

All ideas welcome.

Comment author: ChristianKl 28 November 2015 07:00:25PM *  1 point [-]

I do have a conlang draft. A few thoughts based on my conlang thinking:

Loglan/Lojban is a language where math was an afterthought. That's likely a mistake. If you look at a concept like grandfather, using the word "grand" doesn't make much sense. I think it's better to say something like father-one for grandfather, father-two for great-grandfather. The same way, the boss of your boss should be boss-one. Having a grammar in which relationships can be expressed well is very valuable.

I think that Loglan's attempt to build on existing roots of the widely spoken languages is flawed, because it allows less freedom to organize the language effectively. It would be good to have a lot of concepts with 3 letters instead of 5.

In my language draft I started to take concepts from graph theory for naming relationships (the structure of the words matters, but the actual words are provisional):

bei node in same graph
cai node parent
doi node children

beiq relative
caiq parent
doiq son/daughter

beiß person employed in the same company
caiß boss (person with authority to order)
doiß direct (person who can be ordered)

Once you understand that structure and learn the new word "fuiq" for sibling, you can guess that a direct coworker is called "fuiß". Nodes in a graph that share the same parent node are "fui".

I like grouping concepts this way, where I can go from parent to son/daughter simply by going one step forward in the alphabet and replacing "c" with "d" and "a" with "o" ("i" gets skipped because the word ends in "i").
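The regularity described above can be made concrete with a short sketch (my own toy illustration, not the commenter's actual draft; the stems and suffixes are taken from the examples in the comment, and the function name is made up):

```python
# Each word is a relation stem plus a domain suffix, so learning one stem
# lets you guess a whole family of words.

# Relation stems taken from the examples above.
STEMS = {
    "same-graph": "bei",   # node in the same graph
    "parent": "cai",
    "child": "doi",
    "sibling": "fui",      # nodes sharing a parent node
}

# Domain suffixes: bare stem = abstract graph relation,
# "q" = family relations, "ß" = workplace relations.
SUFFIXES = {"graph": "", "family": "q", "work": "ß"}

def word(relation, domain):
    """Compose a word from its relation stem and domain suffix."""
    return STEMS[relation] + SUFFIXES[domain]

print(word("parent", "family"))   # caiq (parent)
print(word("sibling", "work"))    # fuiß (direct coworker)
```

The point of the design is that the lexicon factors into a small stem table times a small suffix table, so vocabulary grows multiplicatively while memorization grows only additively.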

I used a similar principle for naming numbers:
ba 0
ce 1
di 2
fo 3
gu 4
ha 5
je 6

For the numbers I also gave adding a "q" a meaning: it turns the number into base 16. Base-16 numbers are quite useful later if you want to make an expression like north-east. At the moment pilots use phrases based on the clock to navigate: "There's a bird at 2 o'clock." It's much better to bake numbers more centrally into the language.


In case you haven't seen it, http://selpahi.de/ToaqAlphaPrimer.html is a nice draft for a new language. I like how the language makes every sentence end in an evidential. I think he makes a mistake, though, in not using capital letters but non-ASCII characters instead.

I think that it's great that his language doesn't follow the Lojban place system but uses prepositions like a normal language.

Comment author: KevinGrant 29 November 2015 09:27:46AM 1 point [-]

Also, the topic is now up and running in the regular "discussion" area.

Comment author: KevinGrant 29 November 2015 02:50:41AM 1 point [-]

It sounds like you were trying to construct an a-priori conlang, in which the meaning of any word could be determined from its spelling, because the spelling is sufficient to give the word exact coordinates on a concept graph of some sort. I thought about this approach some time ago, but was never able to find a non-arbitrary concept graph to use, or a system of word formation that didn't create overly long or unpronounceable words.

I was originally thinking about including non-ascii characters, but eventually compromised on retaining English capitals instead. The biggest problem that any conlang faces is getting people to use it, and anything that makes that more difficult, such as requiring changes to the standard American keyboard, needs to be avoided unless it's absolutely necessary.

Comment author: acrmartins 29 August 2015 09:36:26AM 5 points [-]

Hi. Just leaving a few comments about me and about research I have been doing that people here may find interesting. I joined just a couple of days ago, so I am not so sure about styles; this seems to be the proper place for a first post, and I am guessing the format and contents are free.

While I was trained as a regular theoretical physicist, I was always interested in the question of why we believe in some theories, and I think that for a while I felt we were not doing everything right. As I went through my professional life, I had to start interacting with people from different areas, and that meant a need to learn statistics. Oddly, I taught myself Bayesian methods before I even knew there was something called hypothesis testing.

Today, my research involves parts of Opinion Dynamics (I am still a theoretical physicist there, somehow), and I have been making more and more use of results from human cognition experiments to understand a few things, as well as a Bayesian framework to generate my models. I have also been doing a small amount of research on evolutionary models. But my main interest at the moment can easily be seen in a paper that I just put online at the arXiv preprint site. Indeed, while I already knew this site and found it interesting, time limits meant I never really planned to write anything here. So, the reason I actually joined now is that I think you will find the whole discussion in the paper quite interesting. I do think that my main conclusion there about human reasoning and its consequences is so obvious that it always amazes me how deep our instincts must run for it to have remained hidden.

There is a series of biases and effects that happen when we decide to support an idea, and those biases make us basically unable to change our minds or, in other words, to learn. In the paper I examine the concept of choosing an idea to support in light of what we know about rationality. I conduct a small simulation experiment with different models that suggests that our desire to hold only one idea is behind extremist points of view, and I finally discuss the consequences of it all for scientific practice. There is a book planned, with many more details and aimed at the layperson; the first draft is complete, but it will still take a while before the book is out. The article is in drier prose, of course.

Anyway, while I am still submitting it for publication, the preprint is available at

http://arxiv.org/abs/1508.05169

The name of the article is "Thou shalt not take sides: Cognition, Logic and the need for changing how we believe". I do think you people here will have a lot of fun with it.

Best, André

Comment author: BiasedBayes 24 September 2015 12:36:24PM 1 point [-]

Thanks for the link! Very nice publication!

Comment author: RibbonGraph 12 April 2016 12:01:14PM 4 points [-]

Hi friends,

I'm Chris :D I've been lurking on and off for a few months now (after hearing about LW from some of my friends at uni, reading some SlateStarCodex, and devouring HPMOR in less than a week) and have decided it's about time to take the plunge into the scary world of commenting. (It's a bit scary being a somewhat smart person among people who are much, much smarter.)

My academic background: growing up in my family meant I picked up a lot of random stuff, but at uni I have been studying pure mathematics and a bit (pun intended) of computer science.

What motivates me: I'm very passionate about Raising the Sanity Waterline. If I learn - for the first time - something which I think is important, I get this sudden panic of "Why have I only learned this now?! Everyone should know this!". And I get very excited when I'm helping other people learn stuff I've learned.


Longer version of background: My parents have worked as Protestant Christian theological educators (i.e. training pastors and church leaders) in the Middle East since before I was born. They have always been very keen on learning as a lifelong project (a lot of my dad's work is applying evidence-based teaching research to theological education). So - somewhat like Harry Potter in HPMOR - our house has always been full of books. To add to that, I was privileged to get to meet a lot of people from very different worlds: from my close Muslim friends at school to some of my parents' supporters in the US who have never gone far from their home state. This meant I encountered drastically different worldviews and cultural approaches to thinking, and often found it frustrating how poorly people understood each other. Thanks to my parents' influence, I also unconsciously gravitated towards people who were interested in how the world works.

Since leaving for Australia at 18 for study, I have spent much of my university life learning about things other than my specialisation, both from smart friends and from the internet. So this has meant I have changed my mind about quite a few things already.

I look forward to changing my mind about many more things, and learning completely new things!

Comment author: gjm 13 April 2016 04:19:40PM 1 point [-]

It's a bit scary being a somewhat smart person among people who are much, much smarter

The LW commentariat is indeed smart, but probably not as smart relative to you as you are suggesting.

Comment author: cwl 13 March 2016 09:48:07AM 4 points [-]

Hello LW,

My name is Colton. I'm a 22 year old electrical engineering student from Missouri who found Less Wrong about a year ago through Slate Star Codex and binged most of the sequences.

I have been interested in the study of bias and how to avoid it since I read the book Predictably Irrational a few years back. I also consider myself quite academic for an engineer, with a good deal of physics, math, and computer science theory under my belt.

Comment author: zoedith 04 February 2016 11:27:39PM 4 points [-]

Hey LW. I found this site about an hour ago while browsing Quora (I know, I know) and the concept is really appealing to me. Currently I'm studying for my undergrad degree in Neuroscience, not sure exactly what direction I want to take it in afterwards. Artificial neural networks and AI in general are intriguing to me. Being able to actually explain/understand concepts like consciousness and perception of reality in a material sense is sort of my (possibly idealistic) goal. Empiricism is very dear to me, but I think in order to fully explore any idea you can't pit it against rationalism--if that's even a thing that people still do. It's likely that I'll do more lurking than anything else on here, but I'm looking forward to it anyways!

Comment author: InhalingExhaler 31 January 2016 06:36:58PM *  4 points [-]

Hello.

I found LessWrong after reading HPMoR. I think I woke up as a rationalist when I realised that in my everyday reasoning I always judged from the bottom line without considering any third alternatives, and started to think about what to do about that. I am currently trying to stop my mind from always aimlessly and uselessly wandering from one topic to another. I registered on LessWrong after I started to question why I believe rationality works, and ran into a problem, and thought I could get some help here. The problem is expressed in the following text (I am ready to move it from the welcome board to any other suitable one if needed):

John was reading a book called “Rationality: From AI to Zombies” and thought: “Well, I am advised to doubt my beliefs, as some of them may turn out to be wrong”. So, it occurred to John to try to doubt the following statement: “An extraordinary claim requires extraordinary evidence”. But that was impossible to doubt, as this statement was a straightforward implication of theorem X of probability theory, which John, as a mathematician, knew to be correct. After a while a wild thought ran through his mind: “What if every time a person looks at the proof of theorem X, the Dark Lords of the Matrix alter the perception of this person to make the proof look correct, but actually there is a mistake in it, and the theorem is actually incorrect?” But John didn’t even consider that idea seriously, because such an extraordinary claim would definitely require extraordinary evidence.

Fifteen minutes later, John spontaneously considered the following hypothetical situation: He visualized a religious person, Jane, who is reading a book called “Rationality: From AI to Zombies”. After reading for some time, Jane thinks that she should try to doubt her belief in Zeus. But it is definitely an impossible action, as existence of Zeus is confirmed in the Sacred Book of Lightning, which, as Jane knows, contains only Ultimate and Absolute Truth. After a while a wild thought runs through her mind: “What if the Sacred Book of Lightning actually consists of lies?” But Jane doesn’t even consider the idea seriously, because the Book is surely written by Zeus himself, who doesn’t ever lie.

From this hypothetical situation John concluded that if he couldn’t doubt B because he believed A, and couldn’t doubt A because he believed B, he’d better try to doubt A and B simultaneously, as he would be cheating otherwise. So, he attempted to simultaneously doubt the statements “An extraordinary claim requires extraordinary evidence” and “Theorem X is proved correctly”.

As he attempted to do it, and succeeded, he spent some more time considering Jane’s position before settling his doubt. Jane justifies her set of beliefs by Faith. Faith is certainly an implication of her beliefs (the ones about reliability of the Sacred Book), and Faith certainly belongs to the meta-level of her thinking, affecting her ideas about existence of Zeus located at the object level.

So, John generalized that if he had some meta-level process controlling his thoughts, and this process was implied by the very thought he was currently doubting, it would be wise to suspend the process for the duration of the doubting, because not following this rule could leave him holding beliefs which, from the outside perspective, looked as ridiculous as Jane’s religion. John searched through the meta-level controlling his thoughts. He was horrified to realize that Bayesian reasoning itself fit the criteria: it was definitely organizing his thought process, and its correctness was implied by the theorem X he was currently doubting. So he was sitting, with his belief unsettled and with no idea of how to settle it correctly. After all, even if he came up with an idea, how could he know that it wasn’t the worst idea ever, intentionally given to him by the Dark Lords of the Matrix? He didn’t allow himself to disregard this nonsense with “An extraordinary claim requires extraordinary evidence” – otherwise he would fail to doubt that very statement, and there would be no point in this whole crisis of faith which he had deliberately inflicted on himself…

Jane, in whose imagination the whole story took place, yawned and closed a book called “Rationality: From AI to Zombies”, lying in front of her. If learning rationality was going to make her doubt herself out of rationality, why would she even bother to try? She was comfortable with her belief in Zeus, and the only theory which could point out her mistakes apparently ended in self-annihilation. Or, in short, who would believe anyone saying “We have evidence that considering evidence leads you to truth, therefore it is true that considering evidence leads you to truth”?

Comment author: RichardKennaway 01 February 2016 11:58:42AM 2 points [-]

Welcome to Less Wrong!

My short answer to the conundrum is that if the first thing your tool does is destroy itself, the tool is defective. That doesn't make "rationality" defective any more than crashing your first attempt at building a car implies that "The Car" is defective.

Designing foundations for human intelligence is rather like designing foundations for artificial (general) intelligence in this respect. (I don't know if you've looked at The Sequences yet, but it has a lot of material on the common fallacies the latter enterprise has often fallen into, fallacies that apply to everyday thinking as well.) That people, on the whole, do not go crazy — at least, not as crazy as the tool that blows itself up as soon as you turn it on — is a proof by example that not going crazy is possible. If your hypothetical system of thought immediately goes crazy, the design is wrong. The idea is to do better at thinking than the general run of what we can see around us. Again, we have a proof by example that this is possible: some people do think better than the general run.

Comment author: CCC 01 February 2016 11:29:47AM 1 point [-]

After a while a wild thought ran through his mind: “What if every time a person looks at the proof of the theorem X, the Dark Lords of the Matrix alter the perception of this person to make the proof look correct, but actually there is a mistake in it, and the theorem is actually incorrect?”

As soon as the Dark Matrix Lords can (and do) directly edit your perceptions, you've lost (unless they're complete idiots about it). They'll simply ensure that you cannot perceive any inconsistencies in the world, and then there's no way to tell whether or not your perceptions are, in fact, being edited.

The best thing you could do is find a different proof and hope that the Dark Lord's perception-altering abilities only ever affected a single proof.

John searched through the meta-level controlling his thoughts. He was horrified to realize that Bayesian reasoning itself fitted the criteria: it was definitely organizing his thought process, and its correctness was implied by the theorem X he was currently doubting. So he was sitting, with his belief unsettled and with no ideas of how to settle it correctly. After all, even if he made up any idea, how could he know that it wasn’t the worst idea ever intentionally given to him by the Dark Lords of the Matrix?

At this point, John has to ask himself - why? Why does it matter what is true and what is not? Is there a simple and straightforward test for truth?

As it turns out, there is. A true theory, in the absence of an antagonist who deliberately messes with things, will allow you to make accurate predictions about the world. I assume that John cares about making accurate predictions, because making accurate predictions is a prerequisite to being able to put any sort of plan in motion.

Therefore, what I think John should do is come up with a number of alternative ideas on how to predict probabilities - as many as he wants - and test them against Bayesian reasoning. Whichever allows him to make the most accurate predictions will be the most correct method. (John should also take care not to bias his trials in favour of situations - like tossing a coin 100 times - in which Bayesian reasoning might be particularly good as opposed to other methods)
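CCC's proposed test can be sketched concretely (my own illustration, not from the thread): simulate flips of a biased coin and score two prediction rules by log loss, a standard proper scoring rule where lower total penalty is better.

```python
import math
import random

random.seed(0)
TRUE_P = 0.7                       # the coin's true (unknown to John) bias
flips = [random.random() < TRUE_P for _ in range(1000)]

def log_loss(p, outcome):
    """Penalty for assigning probability p to an outcome that did/didn't occur."""
    return -math.log(p if outcome else 1 - p)

loss_bayes = 0.0     # Bayesian updating: Laplace's rule of succession
loss_fixed = 0.0     # rival rule: stubbornly predict 0.5 forever
heads = 0
for n, flip in enumerate(flips):
    p_bayes = (heads + 1) / (n + 2)    # posterior mean under a uniform prior
    loss_bayes += log_loss(p_bayes, flip)
    loss_fixed += log_loss(0.5, flip)
    heads += flip

# On a biased coin the Bayesian rule racks up less total loss.
print(loss_bayes < loss_fixed)   # True
```

Note that this is exactly the kind of coin-toss setting the parenthetical above warns about: a fair comparison would run the contest across many different prediction problems, not just the one where Bayesian updating is known to shine.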

Comment author: MartinWade 06 January 2016 05:43:44PM *  4 points [-]

Salutations! I've been reading Less Wrong for three or four years now without registering - ever since stumbling across a supremely accessible explanation of Bayes Theorem - and suddenly felt I might have something to add. I feel significantly more cynical than most of the posters here, but endeavor to keep my pessimism grounded.

My parents raised me rationalist (not merely atheist), encouraging an environment where questions were always more important than answers and everyone was willing to admit that "I don't know." I spent the requisite few years in my adolescence imagining I knew everything, but that delusion passed. Then I dropped out of three colleges - on scholarships, making those hard lessons less expensive than they might have been - and today I run the inter-branch delivery department of a medium-sized county library system.

I've got a stubborn fascination with philosophical materialism and behavioral neuroscience, with a recent focus on the linguistic nature of consciousness. I think that language in general - and narrative in particular - is a compression algorithm for transmitting complex ideas like "We ought to go to the store." My linguistic memory map of what you already know means I can assume I don't have to tell you which store, or why, or how.

Such maps are made of stories, and that means they require a protagonist. I've come to believe that consciousness is the systematized experience of being that protagonist, molded by evolution to make communication faster and easier. This makes consciousness the character the brain plays when it needs to work with other brains, and the set of mental tools with which narrative memories are compressed for later storytelling.

At any rate, I'm here to continue to have all of my perspectives challenged, and with this account I suppose I can also start challenging perspectives.

Comment author: ChristianKl 06 January 2016 07:09:37PM 1 point [-]

How about writing an article where you explain why you hold that belief? What would reality look like if the belief were wrong? What sorts of predictions can be made with it?

Comment author: crmflynn 02 November 2015 02:30:20AM 4 points [-]

I have been lurking around LW for a little over a year. I found it indirectly through the Simulation Argument > Bostrom > AI > MIRI > LW. I am a graduate of Yale Law School, and have an undergraduate degree in Economics and International Studies focusing on NGO work. I also read a lot, but in something of a wandering path that I realize can and should be improved upon with the help, resources, and advice of LW.

I have spent the last few years living and working in developing countries around the world in various public interest roles, trying to find opportunities to do high-impact work. This was based around a vague and undertheorized consequentialism that has been pretty substantially rethought after finding FHI/MIRI/EA/LW etc. Without knowing about the larger effective altruism movement (aside from vague familiarity with Singer, QALY cost effectiveness comparisons between NGOs, etc.) I had been trying to do something like effective altruism on my own. I had some success with this, but a lot of it was just the luck of being in the right place at the right time. I think that this stuff is important enough that I should be approaching it more systematically and strategically than I had been. In particular, I am spending a lot of time moving my altruism away from just the concrete present and into thinking about “astronomical waste” and the potential importance of securing the future for humanity. This is sort of difficult, as I have a lot of experiential “availability” from working on the ground in poor countries which pulls on my biases, especially when faced with a lot of abstraction as the only counterweight. However, as stated, I feel this is too important to do incorrectly, even if it means taming intuitions and the easily available answer.

I have also been spending a lot of time recently thinking about the second disjunct of the simulation argument. Unless I am making a fundamental mistake, it seems as though the second disjunct, by bringing in human decision making (or our coherent extrapolated volition, etc.) into the process, sort of indirectly entangles the probable metaphysical reality of our world with our own decision making. This is true as a sort of unfolding of evidence if you are a two-boxer, but it is potentially sort-of-causally true if you are a one-boxer. Meaning if we clear the existential hurdle, this is seemingly the next thing between us and the likely truth of being in a simulation. I actually have a very short write-up on this which I will post in the discussion area when I have sufficient karma (2 points, so probably soon…) I also have much longer notes on a lot of related stuff which I might turn into posts in the future if, after my first short post, this is interesting to anyone.

I am a bit shy online, so I might not post much, but I am trying to get bolder as part of a self-improvement scheme, so we will see how it goes. Either way, I will be reading.

Thank you LW for existing, and providing such rigorous and engaging content, for free, as a community.

Comment author: Eigengrau 06 October 2015 11:06:14PM 4 points [-]

Hello LW! Long time lurker here. Got here from HPMOR a few years ago now. This is one of my favourite places on the internet due to its high sanity waterline and I thought I'd sign up so I could participate here and there (plus I finally came up with a username I like!). I've got a B.Sc. in math with a concentration in psychology (apparently that is a thing you can get, I didn't know either) and my other passions are music, film, humor, and being right all the time ;)

Thanks to LW and the rest of the rationality blogosphere, I've added effective altruism to my life goals. I've been wondering lately how we might shift the cultural norm from "boy I sure hope I have a big house and drive a fancy sports car by the time I'm 30" to "boy I sure hope I'm donating lots of money to worthy charities by the time I'm 30".

Comment author: varialus 04 October 2015 12:38:30AM 4 points [-]

Hi! I'm interested in curing death, or at least contributing to the cure. I'm an OK computer programmer, and I'm preparing to go to school this spring to work on a bachelor's degree in Biomedical Engineering with a minor in Cognitive Science. I'd like to make friends with someone who is also at the early planning stages of pursuing a similar degree, and yeah, I do realize just how specific those requirements are, but it doesn't hurt to keep an eye out just in case. I'm in a fairly good place in my life to pursue my education, but I don't yet know how it's going to go. If you're in a good place to go to school, but are scared or need some help deciding whether it's what you want to do, instead of stressing or worrying about it, how about we work on it together? I'm currently reviewing a number of educational topics, primarily through Khan Academy.

I discovered the joys of cognitive science while reading Harry Potter and the Methods of Rationality. I've always fancied myself a fairly rational person, but I've not yet studied it formally.

Comment author: masters02 24 September 2015 08:10:17AM 4 points [-]

Hello all!

I'm a graduated International Relations student from London. I took a year off after graduation to learn how to manage my finances and invest in the stock market. Because of that, I came across my life hero, Charlie Munger, the vice-chairman of Berkshire Hathaway. He is a machine of rationality and is by far one of the wisest men (if not the wisest) alive. He wrote an essay called, "The psychology of human misjudgement" (http://law.indiana.edu/instruction/profession/doc/16_1.pdf) which I implore all rationality-seekers to devour. This essay changed my life, and I have never looked back.

Charlie said that we all have a moral obligation to be rational. So, here I am :)

Comment author: Vaniver 24 September 2015 04:58:45PM *  2 points [-]

Welcome!

One of my primary pieces of exposure to Munger is Peter Bevelin's book, Seeking Wisdom from Darwin to Munger, which I think you might enjoy--as I recall, it draws from the same Heuristics and Biases literature as many other things (like Munger's essay) but has enough examples that don't show up in the more standard works (Thinking and Deciding, Thinking Fast and Slow, etc.) to be worthwhile on its own.

Comment author: masters02 29 September 2015 06:32:39PM 2 points [-]

Thanks for the recommendation. I've seen Bevelin's book come up many times during my Munger-searches, but I haven't gotten around to reading it yet. I'm sure I'll more than enjoy it.

Comment author: Laszlo 23 August 2015 01:00:36PM 4 points [-]

Hello!

I first heard about LW through a SomethingAwful thread. Not the most auspicious of introductions, but when I read some of your material on my own instead of receiving it through the sneerfilter, I found myself interested. Futurology and cognitive biases are two topics that are near and dear to my heart, and I hope to pick up some new ideas and perhaps even new ways of thinking here. I've also had some thoughts about Friendly AI which I haven't seen discussed yet, and I'm excited to see what holes you guys can poke in my theories!

Comment author: bozj 18 November 2016 06:34:00AM 3 points [-]

Hello all,

I am just another lurker here. Most of the time, I can be found in the LW Slack group. I think I should have introduced myself earlier. I have zero karma, so I am unable to post anything at all. In the meantime, it will be good for me to explore how the LW website works.

-Best

Comment author: petermac222 16 November 2016 07:35:39PM 3 points [-]

Hello,

Browsing the web I found this site. I think it will be fun to indulge a bit and read more.

I'm retired, living on a sailboat and enjoying life. At this time I can't think of any topic of interest in the context of discussions, but I like the reading and I'm sure I'll jump in somewhere to contribute more down the road.

Peter

Comment author: Lumifer 16 November 2016 09:43:45PM 1 point [-]

Welcome :-) You don't live on a Macgregor 22, do you?

Comment author: petermac222 17 November 2016 06:53:56PM 1 point [-]

I live on a Nantucket Island 38. Just big enough to be roomy, and just small enough to sail about by myself. I'm just getting into the living on it part. Had the boat 4+yrs but only moved in full time this past July. Hope to start traveling on it more in 2017, targeting the Pacific Northwest for my first trips, but we'll see, I don't actually have a hard schedule, just rolling along at my own pace.

Comment author: Gyrodiot 16 November 2016 09:28:33PM 1 point [-]

Welcome, Peter!

Comment author: jstncrri 14 November 2016 04:58:33AM 3 points [-]

Hey kids. I'm a young Canadian philosophy student trying to diversify my understanding of the world-as-it-is. I'm pressing my way through Rationality: A-Z, but while doing university, progress can be slow. I've been visiting the site frequently for a few months, but typically feel too uninformed to comment. I appreciate the (surprising) lack of bias and the openness to critical thinking here, which I've found mysteriously absent from my social, business and academic circles. I've gone through the process of being contrarian, then being a 'communist' (then reading Camus), then being lost in a world where it's difficult to find thinking happening at all. I come here to remind myself how to be (hah) less wrong, and to see what cool things other intelligent folks are working on. If anyone has links to blogs or sources that are interesting to someone trying to learn about... everything, I'm always looking for more networks to turn to for information. Also, what does a philosophy student who doesn't want to fall prey to the philosophy tropes do? (I already work at a pizza place.)

Comment author: hairyfigment 15 November 2016 01:17:28AM 0 points [-]

I don't know if this is a good answer to your last question, but you could ask what "philosophy" might look like today if Aristotle had never tutored the Emperor of the known world. I tend to think it wouldn't exist - as an umbrella category - nor should it.

Comment author: jstncrri 15 November 2016 05:23:37AM 1 point [-]

I see it more as the underlying theory of theory, an aspect of all things. I chose to study it with different intentions, but now I'm just capitalizing on my ability to understand theory to learn the theories important to as many different disciplines as possible. I read somewhere that philosophers have a responsibility to learn as much science as they can if they want to be relevant. I'm trying.

Comment author: JohnReese 08 November 2016 01:58:47AM 3 points [-]

Hiya! I am currently a postdoc in the neurosciences, with a computational focus, dealing with the uncertainties and vicissitudes attendant upon one still plodding along the path to "nowhere close to tenure-track". My core research interests include decision making, self-control/self-regulation, goal-directed behaviour, RL in the brain, etc. I am quite interested in AI research, especially FAI, and while I am aware of the broad picture on AI risk, I would describe myself as an optimist. On the social side of things, I am interested in understanding why people believe the things they do (insofar as I am not trying to figure this out as I dangle from the tree...), and my approach has always been one of asking open-ended questions to refine my model of "where someone is coming from"; this helps me have civil discussions with people whose views would be incompatible with mine. I am truly glad that civil discourse and collective truth-seeking are community norms here. One of my biggest pet peeves is that this is what "science" should be about, as an enterprise, but in modern academia one seldom feels as though one is part of such a community. Those who disagree, or have had much better times in academia, are welcome to disagree. When I am not thinking about computational models, AI, ethics, or whatnot, I pretend to hoard crumpets, drink lots of tea and coffee, and make trips to and from the DC Universe (the one that existed prior to Flashpoint). I discovered Scott Aaronson's fantastic blog a year ago, which was followed by trips to SSC - and that is how I found LW. Love all 3 and am now glad to join LW.

Oh, for some reason I am unable to see the button for voting on posts/comments...is there a Karma threshold to be crossed before one can vote?

Comment author: rmoehn 14 July 2016 02:18:50AM 3 points [-]

Hi! I signed up to LessWrong, because I have the following question.

I care about the current and future state of humanity, so I think it's good to work on existential or global catastrophic risk. Since I studied computer science at a university until last year, I decided to work on AI safety. Currently I'm a research student at Kagoshima University doing exactly that. Before April this year I had only a little experience with AI or ML, so I'm slowly digging through books and articles in order to be able to do research.

I'm living off my savings. My research student time will end in March 2017 and my savings will run out some time after that. Nevertheless, I want to continue AI safety research, or at least work on X or GC risk.

I see three ways of doing this:

  • Continue full-time research and get paid/funded by someone.
  • Continue research part-time and work the other part of the time in order to get money. This work would most likely be programming (since I like it and am good at it). I would prefer work that helps humanity effectively.
  • Work full-time on something that helps humanity effectively.

Oh, and I need to be location-independent or based in Kagoshima.

I know http://futureoflife.org/job-postings/, but all of the job postings fail me in two ways: not location-independent and requiring more/different experience than I have.

Can anyone here help me? If yes, I would be happy to provide more information about myself.

(Note that I think I'm not in a precarious situation, because I would be able to get a remote software development job fairly easily. Just not in AI safety or X or GC risk.)

Comment author: Beau 19 April 2016 05:29:00PM 3 points [-]

Hi. I'm Bernardo, a business student from Brazil. I came across Less Wrong from an answer to a thread on Quora (https://www.quora.com/How-would-you-estimate-the-number-of-restaurants-in-London). It got me interested in Fermi estimates, and I'm surfing Less Wrong to read about them.

I'd love to translate those articles on Fermi Estimates to Portuguese to add to the translated pages list. How do I do that?

Comment author: Christiano 14 November 2016 02:06:51AM 1 point [-]

Hello Bernardo, I'm Christiano, from Brazil too! Nice to see a Brazilian here! Did you manage to translate the article? I can help you with English-Portuguese revision or even with the translation itself.

Comment author: Menilik 03 April 2016 10:39:11PM 3 points [-]

Hello from NZ. So basically, I'm here to promote my... Jokes! I came across this website from a Wait But Why article I was researching (cryonics). The comments here are next-level awesome: people share ideas, and I feel like the moderators aren't ruled by one discourse or another. So yeah, I decided to jump on in and check it out.

I enjoy Science, Learning, Entrepreneur stuff, and better ways of looking at the world.

Comment author: MakoYass 01 May 2016 03:41:17AM 2 points [-]

Menilik Dyer! I thought it might be you! We met at a Mum's Garage thing (I was the one wearing no shoes and a lot of grey). So cool to see you here. Welcome to the mouth of this bottomless rabbithole that is modern analytical futurism. I'd hazard you already have some sense of how deep it goes.

If anyone's reading this; Menilik is a badass. He once successfully built a business by picking a random market sector he knew nothing about and asking people on the ground what they might need.

Comment author: Starglow 16 October 2015 11:03:01PM *  3 points [-]

Hi! I've been lurking around here for a while; I'm quite the beginner and will be further lurking rather than contributing. A few months ago, I found and played a nifty little game that asked you to make guesses about statistics and set confidence intervals, was mostly about updating probabilities based on new information, and ultimately required you to collect information to decide whether a certain savant was more likely in his cave or at the pub. I've been wanting to have another look at it, but I have been entirely unable to find it again.

Could anyone point me to it? I'm fairly certain it was from this website. Thanks for the help, and keep up the interesting posts!

EDIT: http://cassandraxia.com/projs/advbiases/ in case anyone else is looking for it.

Comment author: alexander_poddiakov 03 October 2015 07:56:35AM *  3 points [-]

Hello! I am from the Department of Psychology at the Higher School of Economics. I study problem solving, systems thinking, and help and counteraction in social interactions. Both rationality and irrationality are important here.

Web: http://www.hse.ru/en/staff/apoddiakov#sci, http://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=426114

Comment author: hoofnail 10 September 2015 07:09:04PM 3 points [-]

Hi. I have only ever browsed one thread on this website before. I used to like arguing a lot, but I lost my fervor when I felt like the validity of my arguments, and my ability to defend myself in argument, didn't and doesn't matter to most. It makes me sad. I only want to make everyone happy and able to cope with their pain, but everyone rejects me.

I don't have much of a personality beyond my liking logic a lot. All I know is logic, even if most people disagree with me. I am saddened by the fact that I feel my life only truly began in my late teens when I randomly came across the knowledge I needed to gain the opinions I have today. I never want anyone else to experience the sadness I have again. I want to change the world, via argument. Hello.

I once had a chance to make friends like me, but I threw that chance away, because the day that opportunity fell into my lap was the day I formally lost my faith in humanity, and lost my fervor to change the world....

Comment author: Sarunas 22 July 2015 06:13:45PM *  3 points [-]

META. LessWrong Welcome threads have changed very little since late 2011. Should something be updated?

Comment author: Vaniver 22 July 2015 11:28:55PM 9 points [-]

This link shows you all new posts in both Main and Discussion, by title and vote count and so on, and is my preferred landing page for LW. I don't think there are any obvious links to it, and this thread seems like a fine place to add one.

Comment author: Sarunas 23 July 2015 05:04:00PM 4 points [-]

Added.

Comment author: [deleted] 23 July 2015 09:14:34AM *  5 points [-]

What about the list of users who offered to provide English assistance? If this is a useful service to members, it may be worth revisiting, as most of the listed members seem to be inactive (at least judging from post/comment history): Randaly has returned to posting recently, but shokwave hasn't posted in more than a year, and Barry Cotter's and Normal_Anomaly's last posts were in April.

Comment author: Sarunas 23 July 2015 05:05:08PM *  5 points [-]

PMed all of them. Does anyone else also want to volunteer?

Comment author: Sarunas 14 August 2015 09:43:19PM *  1 point [-]

So far only one person (Randaly) has replied. Does any native speaker want to volunteer? Edit: two people (Randaly and Normal_Anomaly)

Comment author: Viliam 28 July 2015 08:44:21AM 3 points [-]

At the end of the "SEQUENCES:" paragraph you could add: They are also available in book form.

Comment author: Sarunas 28 July 2015 10:50:33AM *  1 point [-]

Done. Should I also add a link to the Slovak translation of the book?

Comment author: Viliam 28 July 2015 02:31:46PM 2 points [-]

The translation is irrelevant for 99.9% of readers, so I guess no.

Comment author: ArisC 22 January 2017 07:48:25AM 2 points [-]

Hello from Beijing.

I found out about Less Wrong from Slate Star Codex. I also read HPMOR last year, but hadn't realised there was a connection between that and Less Wrong.

I am posting here because I have been thinking about morality. I get into a lot of debates that all boil down to the fact that people hold a very firm belief in a particular moral principle, to the extent that they would be happy to force others to live in accordance with that principle, without evaluating whether the principle is subjective or rational.

In response to this, I have come up with a framework for evaluating moral theories, and I would like to hear the rationalist community's feedback. Briefly, what I propose is that a moral theory needs to meet three criteria: a) the ethical principles that comprise it must not be internally contradictory; b) its ethical principles must be non-arbitrary as far as possible (so, "be good to other people just because" is not good enough); and c) if the theory's principles are taken to their logical conclusion, they must not lead to a society that the theory's proponents themselves would consider dystopian.

I would like to hear people's thoughts on this - if you think it's intriguing, I am happy to submit an article to expand on my rationale for proposing this framework.

Best, Aris

Comment author: onlytheseekerfinds 22 January 2017 12:23:12PM 0 points [-]

It seems like (a) and (c) are easily granted, but what's your definition of "non-arbitrary", and how should we determine if that definition is itself a non-arbitrary one?

This topic is one I enjoy thinking about so thank you for your post :)

Comment author: ArisC 22 January 2017 02:15:39PM 0 points [-]

Thanks for your comment!

My definition of non-arbitrary would be, can we derive your principle from facts on which everyone agrees? I can propose two such principles: a) liberty - in the absence of moral absolutes, the only thing you can say is live and let live, as to do otherwise is to presuppose the existence of some kind of moral authority; or b) survival of the fittest - there is no moral truth, and even liberty is arbitrary - why should I respect someone else's liberty? If I am stronger, I should feel free to take what I can.

That said, I think there could also be an argument for some sort of virtue ethics - e.g. you could argue that perhaps there is absolute truth, and there are certain virtues that will help us discover it. But you'd need to be smarter than me to make a convincing argument in this line of thought.

Comment author: Christiano 14 November 2016 02:24:04AM 2 points [-]

Hello Less Wrong community! I study statistics at the Federal University of Rio de Janeiro in Brazil. I am drawn to the Bayesian philosophy of probability because our Department of Statistical Methods focuses on Bayesian statistics. I found this website during my studies of Bayesian philosophy and error. In Rio de Janeiro we still do not have a rationality community, but in São Paulo there are meetings organized every month on Meetup. I am very excited to spend my time in this community developing and debating the philosophy of error!

Comment author: Rachelle11 25 August 2016 07:31:04AM 2 points [-]

Rachelle is an academic consultant at a community college who specializes in helping students with their academic problems, college stress and such. She also works part-time for an online dissertation help service at dissertation corp. She's also a hobbyist blogger and loves to do guest blogging on education or college-life related topics.

Comment author: Arielgenesis 24 July 2016 03:50:51PM *  2 points [-]

We'd love to know who you are, what you're doing: I was a high school teacher. Now I'm back in school for Honours and hopefully a PhD in science (computational modelling) in Australia. I'm Chinese-Indonesian (my grammar and spelling are a mess) and I'm a theist (leaning toward Reformed Christianity).

what you value: Whatever is valuable.

how you came to identify as an aspiring rationalist or how you found us: My friend, who is now a sister in the Franciscan order of the Roman Catholic Church, recommended Harry Potter and the Methods of Rationality to me.

I think the theist community needs better, more rational arguments for its beliefs. I think the easiest way to get them is to test them against rational people. I hope this is the right place.

I am interested in making rationality more accessible to the general public.

I am also interested in developing an ideal, universal curriculum. And I think rationality should be an integral part of it.

Comment author: ArthurRainbow 14 July 2016 04:01:13AM 2 points [-]

Hello from Paris, France.

Like many of you, I first discovered all of this through HPMOR (actually, its French translation). I then read Rationality: From AI to Zombies in its entirety (because, honestly, reading things in order is SO MUCH easier than having 20 tabs open with 20 links I followed on the previous pages). I thought I would finish reading this blog, or at least the Sequences, before posting, and then realized that might mean I would never post.

I hold a doctorate in fundamental computer science, I'm an amateur writer (in French only), and I'm an LGBT activist who goes into schools to speak about LGBTphobia and sexism; 119 classes so far (and counting).

I can't tell you right now exactly why I like the idea of rationality so much. I guess it is unrelated to the fact that I recently wrote the article https://en.wikipedia.org/wiki/Rational_set. It's probably more related to the fact that I love the idea of being a robot; at least, being like what I thought a robot was before I knew that robots are programmed by humans. I can rationalize it by hoping that rational methods will help me be more efficient at fighting LGBTphobia (and probably more efficient at doing research and publishing, or at writing more...), even if, to tell the truth, I'm not yet convinced that studying rationality is a rational action for attaining those goals. On the other hand, even if rationality may not be the BEST tool ever for attaining those goals, I'm more confident in the advice I find here than in the advice of a random self-help book I could find on a supermarket shelf, because I assume some people actually did research before giving this advice.

Comment author: beatricesargin 22 March 2016 04:22:56PM *  3 points [-]

I'm a creative writer and a virtual assistant, and I have been a freelancer for 2 years now. Coming from a creative educational environment, I'd like to express an interest in becoming more rational. I found Less Wrong through Intentional Insights.

Comment author: Sarginlove 22 March 2016 04:47:47PM 1 point [-]

Yeah, thanks. I also believe I could become more rational by becoming a rational thinker.

Comment author: beatricesargin 22 March 2016 04:52:51PM 2 points [-]

Thanks, I also believe that becoming rational can help me achieve all of my objectives and long term goals.

Comment author: Alia1d 09 May 2016 06:29:04AM 2 points [-]

I’ve found the Welcome thread!

Hi, I’m Alia and I live with my husband in San Jose, California. I found this site via SlateStarCodex and having read Rationality:From AI to Zombies I think this is a fascinating and useful set of concepts and that using this type of reasoning more often is something to aspire to. I want to do more Bayesian calculations so I get more of a feel for them.

I'm also a fundamentalist* Christian. I'm perfectly ready to discuss and defend these beliefs, but I wouldn't always bring them up in threads. I'm not trying to deceive or trick anyone; I just don't want to derail a thread that is actually about something else. I do think it's possible to be both a rationalist and a Christian and to stay reasonably intellectually consistent.

*(A note on why I choose the identification fundamentalist. Not long after American Christians split into mainline and fundamentalist groups, the fundamentalists got a bunch of bad press focused on certain sub-groups that were anti-intellectual. The other fundamentalists dealt with this by splitting off and re-branding themselves as evangelical. I'm not anti-intellectual and am generally in the group that would self-identify as evangelical, but I'm choosing to stick with the fundamentalist label for three reasons. 1) I don't think changing the label or re-branding is a good way to deal with negative affect attached to a word. At best it avoids the issue rather than solving the problem. 2) I don't believe in disavowing people because they are unpopular with third parties. While I disagree with the anti-intellectuals on some things, the agreement on the common core beliefs that led to the fundamentalist label in the beginning is still there. 3) I think the fundamentalist label provides more clarity. The evangelicals worked hard and successfully to avoid getting over-identified with any sub-group or coincidental characteristic. But as a result the label evangelical stayed vague: individuals and groups that are more in the mainline tradition sometimes call themselves, or get called, evangelical. On the other hand, opponents who wanted to hang on to the negative affect kept calling anything from the original fundamentalist tradition 'fundamentalist.' So I think fundamentalist will convey the most accurate idea of where I'm coming from theologically.)

Comment author: gjm 09 May 2016 11:35:12AM *  -2 points [-]

Welcome! I applaud your decision to embrace hostile terminology. I don't think you should feel any obligation to bring up your religious beliefs all the time.

If you're interested in the interactions between unashamedly traditionalist religion and rationalism, you might want to drop into the ongoing discussion of talking snakes. Most of it lately, though, has been discussion between people who agree that the story in question is almost certainly hopelessly wrong and disagree about exactly which bits of it offer most evidence against the religion(s) it's a part of, which you might find merely annoying...

[EDITED to add: Aha, I see you've already found that. My apologies for not having noticed that you were already participating actively there.]

Just out of curiosity (and you should feel free not to answer), how "typically fundamentalist" are your positions? E.g., are you a young-earth creationist, do you believe that a large fraction of the human race is likely to spend eternity in torment, do you believe in "verbal plenary inspiration" of the Christian scriptures, etc.?

(Meta-note that in a better world would be unnecessary: it happens that one disgruntled LessWronger has taken to downvoting almost everything I post, sometimes several times by means of sockpuppets. I mention this only so that if you see this comment sitting there with a negative score you don't take it to mean that the LW community generally disapproves of my welcoming you or disagrees with what I said above.)

Comment author: Alia1d 09 May 2016 07:35:14PM 1 point [-]

Fairly typically fundamentalist, I believe in young earth creationism with a roughly estimated confidence level of 70%, a large fraction of the human race destined for eternal torment at about 85% and verbal plenary inspiration at about 90%.

I'm a little more theologically engaged than average, but (as is typical in my circles) that means I'm more theologically conservative, not less.

Comment author: gjm 09 May 2016 09:24:36PM -1 points [-]

Are those figures derived from any sort of numerical evidence-weighing process, or are they quantifications of gut feelings? (I do not intend either of those as a value judgement. Different kinds of probability estimate are appropriate on different occasions.)

Comment author: Sarginlove 22 March 2016 04:42:45PM 2 points [-]

I am Sargin Rukevwe Oghneneruona, from Nigeria, a student studying Business Administration and Management at Delta State Polytechnic, Otefe. I am a rational person and this has helped me a lot; I really love engaging in activities which could make me a more rational thinker and also improve my knowledge about being rational. I found out about Less Wrong by reading articles on http://intentionalinsights.org/, written by Intentional Insights personnel, which have helped me a lot to build my strength and knowledge for achieving goals and becoming more successful in life. I believe becoming a member of lesswrong.com will also help me become a more rational thinker.

Comment author: mind_bomber 18 August 2015 06:00:22AM *  1 point [-]

Hello everyone,

/u/mind_bomber here from https://www.reddit.com/r/Futurology.

I've been a moderator there for over two years now and have watched the community grow from several thousand futurists to over 3.5 million subscribers. As a moderator I've had the pleasure of working with Peter Diamandis, David Brin, Kevin Kelly, and others on several AMAs. I also curate the glossary and post videos, documentaries, talks, and keynotes to the site.

I hope to participate in this community. The Less Wrong community is exactly the type of people I would like to see over at https://www.reddit.com/r/Futurology, so if you have a chance please stop by and tell me what you think.

Cheers,

/u/mind_bomber

Comment author: gjm 18 August 2015 05:36:16PM 4 points [-]

Are you sure you have enough copies of that link there? There are only four, and two of your paragraphs don't have one.

(If you're trying for some SEO thing, please note that links from LW comments get rel="nofollow" on them and therefore don't provide extra googlejuice. I wouldn't be at all surprised to find that Google gives less weight to a link when it sees several instances of it in rapid succession, because that's a thing spammers do.)

Comment author: Lumifer 18 August 2015 02:24:15PM 2 points [-]

is exactly the type of people I would like to see over at...

What, no movie and a dinner first? X-)

Comment author: [deleted] 22 August 2015 02:42:22AM 1 point [-]

Offense intended: your subreddit mainly consists of hype-trains, please do not advertise it.

Comment author: avwenceslao 23 March 2016 08:29:39AM 1 point [-]

Hi LW! My name is Alex, a salesperson by profession. I found Less Wrong through Intentional Insights and have been here for a couple of months now. I'd like to express my interest in becoming more rational.

Comment author: Secret_Tunnel 04 November 2016 12:02:47AM *  0 points [-]

Hey everybody! My name's Trent, and I'm a computer science student and hobbyist game developer who's been following LessWrong for a while. Finished reading the sequences about a year ago (after blazing through HPMOR and loving it) and have lurked here (and on Weird Sun Twitter...!) since then. Figured I'd make an account and get more involved in the community; reading stuff here makes me more motivated in my studies, and it's pretty entertaining either way!

I'd love to be one of the first people on Mars. Not sure how realistic that goal is or what steps I should even take to make it happen beyond saving $500,000 for a supposed SpaceX ticket and mastering a useful skill (coding!), but it's something to shoot for!

Looking forward to reading the linked posts, I haven't seen a lot of them! Also, is this the newest Welcome thread? It's over a year old...!

Comment author: CCC 04 November 2016 12:01:36PM 1 point [-]

Hi, Trent!

I'd love to be one of the first people on Mars. Not sure how realistic that goal is or what steps I should even take to make it happen beyond saving $500,000 for a supposed SpaceX ticket and mastering a useful skill (coding!), but it's something to shoot for!

Have you heard of the Mars One project?

Comment author: Foo 08 December 2016 10:55:15PM *  1 point [-]

Hello Less Wrong!

My name is Bryan Faucher. I'm a 27-year-old from Edmonton (Canada) in the middle of the slow process of immigrating to Limerick (Ireland), where my wife has taken a contract with the University. I've been working in education for the past five years, but I'm looking to pursue a master's in mathematical modeling next year, rather than fighting for the right to work in a crowded industry as a non-citizen.

I've been aware of LW for something like six years, having been introduced by an old roommate's SO by way of HPMOR. In that time I've read through the sequences and a great deal of what I suppose could be called the "supplementary content" available on the site, but never found a reason to dive in to the discussion. I don't remember exactly when I created this account, but it was nice to have it waiting for me when I needed it!

I'm joining in now because I was very much grabbed by Sarah Constantin's "A Return to Discussion". I've been a member of a mid-sized discussion forum for over a decade, where I now volunteer my time as an administrator. We've done OK - better than most - in terms of maintaining activity in the face of the web's movement away from forums and bulletin boards, but the tone of our conversations has certainly changed: in many ways sliding through the grooves Sarah seems to be describing. My purview as admin includes the "serious" discussion section of the forum, and I feel I'm fighting a losing battle year over year to maintain "nerd space" in the face of cynical irony and the widespread fear of engagement.

I'm hoping to be inspired by the changes the LW community has set out to make: to learn from what goes right here and, in some small way, to contribute to the effort, which I think is an important one. Intellectually, I don't have a hope in hell of keeping up with the local heavy hitters, but I can bring a lot of, ya know... grit.

Anyway, thanks for reading. I hope this was a fair place to post this. A new newbie thread seems to be wanting, unless I missed something, and I suppose if nothing else I can rack up enough karma in the next few days to create one. See you around!

Comment author: pranali 05 August 2016 05:17:22AM 1 point [-]

Hi! I am new and don't know where to ask this question exactly, so I'm asking here.

How do you vote on articles and comments? I can't figure out how!

(I hope I'm not missing some obvious button and about to be embarrassed.)

Comment author: Elo 05 August 2016 05:27:30AM -2 points [-]

Voting is enabled at 10+ karma. Welcome! You managed to make a post, which means you successfully verified your email address (a step that sometimes stops people).

Comment author: teddy-ak17 18 July 2016 12:03:04PM *  1 point [-]

Hello from a lot of places! :) I'm Chinese (from Shanghai), studying in Brighton, England, and living in Vienna, Austria (moving to Prague, Czech Republic soon). How I discovered LW is not a very long story.

I have a great interest in artificial intelligence. I was reading James Barrat's 'Our Final Invention' and he mentioned the AI-box experiment, which got me excited (because just that morning I had been reading an article about the Turing test and how unreliable it is at measuring intelligence in machines; might the AI-box experiment be a better test in the future?). Before he elaborated on the story in chapter 3 (as I found out later), I googled the experiment, which led me to Yudkowsky's website. I read through the thread on the experiment (some really interesting conversations between Yudkowsky and the challengers; I still disagree with James Higgins, who denied most of the questions raised by Yudkowsky and gave ambiguous answers). Then I was curious about the link Google had given me, which led to a publicised log of an AI-box experiment. That brought me here, and I had a look around. A whole website about rationality. It's like I found a gold mine.

I am currently studying A-levels: maths, further maths, computer science and physics. I wish to study computer science with artificial intelligence at university; my goal is computer science and philosophy at Oxford.

Anyways, in the future I wish to contribute more to this website. I believe that sharing thoughts is the best way to expand our knowledge, better than reading books. I am writing an EPQ on the safety of ASI, so hopefully I can get some inspiration from the LW community. My approach to life is simple: question everything. So please bear with my questions. :) A rational world should be everyone's goal. With the development of AGI/ASI, I hope our world will be a better place, and rationality is the key. I am so happy I found this place and I hope I can help make a difference.

Comment author: dimensionx 06 July 2016 12:11:24PM *  1 point [-]

Hello everyone

I represent a small team whose goal is predicting the future using artificial intelligence in economic environments. Some time ago I was looking for a place where our arguments could be useful, and this community seems to be the best place for it.

I hope to be useful, to discuss whether we have missed something, and to test new ideas with a competent audience.

My questions will concern: management systems based on artificial intelligence; risk prediction using parsing systems; quantum models for predicting likelihoods; and ideas for frame-based prediction systems for various events.

Thank you for your attention.

Comment author: WikiLogicOrg 14 May 2016 10:31:27AM *  1 point [-]

Hello!

I am new to this site but judging from HPMOR and some articles I read here, I think I have come to the right place for some help.

I am working on the early stages of a project called WikiLogic which has many aims. Here are some that may interest LW readers specifically:

-Make skills such as logical thinking, argument construction and fallacy recognition accessible to the general public

-Provide a community-created database of every argument ever made, along with their issues and any existing solutions

-Highlight the dependencies between different fields in academic circles

The project requires knowledge of Bayes networks, linguistics and many other fields that I have little experience with, although I am always learning. This is why I am looking to you to review the idea and let me know your thoughts. At this stage, unfiltered advice on any aspect of the project is welcome.

The general idea along with a short video can be found on the front page of the main site:

http://www.wikilogicfoundation.org/

Feel free to explore the site and wiki to get a better feel of what I am trying to do. Please forgive poorly written or unfinished parts of the site. It is early days and it seems unproductive to finish before I get feedback that may change its course...

Comment author: Regex 14 May 2016 09:52:05PM *  3 points [-]

Welcome!

I've seen these sorts of argument maps before.

https://wiki.lesswrong.com/wiki/Debate_tools http://en.arguman.org/

It seems there is some overlap with your list here.

Generally, what I've noticed about them is that they focus very hard on things like fallacies. One problem here is that some people are simply better debaters even though their ideas may be unsound. Because they can better follow the strict argument structure, they 'win' debates but actually remain incorrect.

For example: http://commonsenseatheism.com/?p=1437 He uses mostly the same arguments debate after debate and so has a supreme advantage over his opponents. He picks apart the responses, knowing full well all of the problems with typical responses. There isn't really any discussion going on anymore. It is an exercise in saying things exactly the right way without invoking a list of problem patterns. See: http://lesswrong.com/lw/ik/one_argument_against_an_army/

Now, this should be slightly less of an issue since everyone can see what everyone's arguments are, and we should expect highly skilled people on both sides of just about every issue. That said, the standard for actual solid evidence and arguments becomes rather ridiculous. It is significantly easier to find some niggling problem with your opponent's argument than to actually address its core issues.

I suppose I'm trying to describe the effects of the 'fallacy fallacy.'

Thus a significant portion of manpower is spent on wording and putting the argument precisely right instead of dealing with the underlying facts. You'll also have to deal with the fact that if a majority of people believe something, then the sheer amount of manpower they can spend on shoring up their own arguments and poking holes in their opponents' will make it difficult for minority views to look like they hold water.

What are we to do with equally credible citations that say opposing things?

'Every argument ever made' is a huge goal. Especially with the necessary standards people hold arguments to. Are you sure you've got something close to the right kind of format to deal with that? How many such formats have you tried? Why are you thinking of using this one over those? Has this resulted in your beliefs actually changing at any point? Has this actually improved the quality of arguments? Have you tried testing them with totally random people off of the street versus nerds versus academics? Is it actually fun to do it this way?

From what I have seen so far, I'll predict there will be a lack of manpower, and that you'll end up with a bunch of arguments marked full of holes in perpetual states of half-completion. Because making solid arguments is hard, there will be very few of them. I suspect arguments about which citations are legitimate will become very heavily recursive, especially on issues where academia's ideological slants come into play.

I've thought up perhaps four or five similar systems, but I haven't actually tested any of them for effectiveness at coming to correct conclusions about the world. It is easy to generate a way of organizing information, but it needs to be thoroughly tested for effectiveness before it is actually implemented.

In this case effectiveness would mean

  • producing solid arguments in important areas
  • being fun to use
  • maybe actually changing someone's mind every now and then
  • being easy to use and simple to navigate

A word tabooing feature would be helpful: http://lesswrong.com/lw/np/disputing_definitions/ (The entire Map and Territory, How to Actually Change Your Mind, and A Human's Guide To Words sequences would be things I'd consider vital information for making such a site)

It may be useful for users to see their positions on particular topics change over time. What do they agree with now and before? What changed their mind?

I hope that helped spark some thoughts. Good luck!

Comment author: WikiLogicOrg 17 May 2016 08:12:57PM *  1 point [-]

Thanks for an excellent, in-depth reply!

https://wiki.lesswrong.com/wiki/Debate_tools

Brilliant resource! Thanks for pointing it out.

You bring up a few worries, although I think you also realize how I plan to deal with them. (Whether I am successful or not is another matter!)

One problem here is that some people are simply better debaters even though their ideas may be unsound

One part of this project is to make some positive aspects of debating skill easy to pick up by newbies using the site. Charisma and confidence are worthless in a written format, and even powerful prose is diluted to simple facts and reasoning in this particular medium.

It is significantly easier to find some niggling problem with your opponents argument than to actually address its core issues

In my mind, if a niggling issue can break an argument, then it was crucial and not merely 'niggling'. If the argument employed it but did not rely on it, then losing it won't change its status. Being aware of issues like the 'fallacy fallacy' is useful in time-limited oral debates, but in this format it's OK to attack a bad argument on an otherwise well-supported theory. The usual issue is that it allows one's bias to come into play and makes the opponent feel the whole argument is weak. But this is easily avoided when the node remains glowing green to signify it is still 'true'.

manpower is spent on wording and putting the argument precisely exactly right instead of dealing with the underlying facts

Is this so bad? We are used to being frugal with a resource like manpower because it has traditionally been limited, but I believe you can overcome that with the worldwide reach offered by the internet. People will only concentrate on what they are passionate about, which means the most contentious arguments will also get the most attention to detail. Most people accept gravity, so it won't get or need as much attention. In the future, if a new prominent school of thought forms attacking it, then it may require a revisit from those looking to defend it.

[limited manpower] ...will make it difficult for minority views to look like they hold water

I think the opposite is true. In most other formats, such as a forum, a single comment can easily be drowned out. Here there will simply be two different ideas. More people working on one will help, of course, but they cannot conjure good arguments from nothing. We also have to have faith (the good kind) in people here and assume that they will be willing to remove bad arguments even if they support the overall idea. Furthermore, they will be willing to add to and help grow an opposing argument if they can see the valid points for it.

What are we to do with equally credible citations that say opposing things?

I have lots of design issues noted in the wiki, but it needs a bit of a cleanup, so I will give a brief answer here instead of linking you to that mess! ;) If two ideas are expressed that contradict each other, a community member should link them with a 'contradiction' tag, and they both become 'false'. This draws attention to the issue and promotes further inquiry - another benefit of WL. If it's key to an argument and there are no other experiments, then it shows what we need to fund to get our answers. If future studies result in continued contradiction, we need to go down a level and argue about the nature of the experiment and why x is better than y. If there is no disagreement about the methodology but the results still contradict, perhaps the phenomenon is not well enough understood yet, and we are right to keep them false to prevent their use in backing other statements.
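To make the contradiction rule concrete, here is a minimal sketch in Python of how it could behave. (This is purely my own illustration: the `Claim` class, function names and resolution logic are invented for this comment, not part of WikiLogic's actual design.)

```python
# Sketch of the "contradiction tag" rule described above: two claims
# linked as contradictory are both marked false until the conflict
# is resolved, e.g. by a later experiment.

class Claim:
    def __init__(self, text):
        self.text = text
        self.status = True           # "glowing green" (true) by default
        self.contradictions = set()  # claims this one conflicts with

def tag_contradiction(a, b):
    """Link two claims as contradictory; both become false."""
    a.contradictions.add(b)
    b.contradictions.add(a)
    a.status = b.status = False

def resolve(winner, loser):
    """A later study settles the conflict in favour of `winner`."""
    winner.contradictions.discard(loser)
    loser.contradictions.discard(winner)
    winner.status = not winner.contradictions  # true only if no conflicts remain
    loser.status = False

x = Claim("Study A: effect observed")
y = Claim("Study B: no effect observed")
tag_contradiction(x, y)
assert x.status is False and y.status is False  # both held false, inquiry prompted

resolve(x, y)
assert x.status is True and y.status is False   # conflict settled, x is green again
```

The point of keeping both claims false until resolution is exactly what the comment describes: the disagreement stays visible, and neither claim can be used to back other statements in the meantime.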

'Every argument ever made' is a huge goal.

Perhaps I'm exaggerating slightly... but only slightly! I think a connected knowledge base is important, and I dream of a future where coming up with a new idea and adding it to the human knowledge pool is as natural as breathing. But as there are probably an infinite number of arguments to be made and mankind is so very finite, I have recognized that my design must handle the inevitable gaps. It's easy to see how, if WL becomes popular and then gets made mandatory for transparent democracies, fair legal systems and reputable academies, among many other areas, it will be easy to keep up to date. But the challenge, as you point out, will be in getting it that far!

Are you sure you've got something close to the right kind of format to deal with that? How many such formats have you tried? Why are you thinking of using this one over those?

Not 100% sure what you mean - can you suggest an example of an alternate format to clarify?

Has this resulted in your beliefs actually changing at any point? Has this actually improved the quality of arguments?

As it does not exist yet, I cannot say, but thinking rationally and trying to map and scrutinize ideas as WL will has changed me massively. When I was first exposed to critical thinking, I struggled to update my 'high level' ideas to reflect massive changes in my basic beliefs. I was also keen to revisit all my past assumptions and re-examine their foundations. Attempting to solve these issues was what made me first conceive of a tool like WL. So WL is the solution I have come up with to all the problems with critical thinking in today's world as I understand them. You mention changing minds a couple of times - although this is of course highly desirable, I want to narrow my scope to making ideas available. I am sure this will result in other perks, but it won't be my focus yet.

Have you tried testing them with totally random people off of the street versus nerds versus academics?

No, good idea! I am still playing with the 'rules', which has been my main procrastination excuse so far, but I will need to do this. I have a GitHub page with a very basic web demo that should be ready soon too.

it needs to be thoroughly tested for effectiveness before it is actually implemented

Absolutely agree, and the first experiment is to see what people with relevant areas of expertise think of the idea, so thank you for participating!

P.S. I want to address some more of your points, but this has taken me a while to write, so I will leave that for a second comment another day.

Comment author: Germaine 06 May 2016 02:23:06PM *  1 point [-]

Hi from San Diego, California. I'm an attorney with academic training in molecular biology (BS, MS, PhD). I have an intense interest in politics, specifically the cognitive biology/social science of politics. I'm currently reading The Rationalizing Voter by Lodge and Taber. I have read both of Tetlock's books, Haidt's The Righteous Mind, Kahneman's Thinking, Fast and Slow, Thaler's Nudge, Achen and Bartels' Democracy for Realists, and a few others. I also took a college-level MOOC on cognitive biology and attendant analytic techniques (fMRI, etc.) and one on the biology of decision making in economics.

Based on what I have taught myself over the last 6-7 years, I came up with a new "objective" political ideology or set of morals that I thought could be used to at least modestly displace or supplement standard "subjective" ideologies including liberalism, conservatism, capitalism, socialism, Christianity, anarchy, racism, nationalism and so on. The point of this was an attempt to build an intellectual framework that could help to at least partially rationalize politics, which I see as mostly incoherent/irrational from my "objective" public-interest oriented point of view.

I have tried to explain myself to lay audiences (I'm currently a moderator at Harlan's Place, a politics site on Disqus https://disqus.com/home/channel/harlansplace/ ), but have failed. I confess that I'm becoming discouraged about the possibility of applying cognitive and social science to even slightly rationalize politics. What both Haidt and Lodge/Taber have to say makes me think that what I am trying is futile. I have tried to contact about 50-60 academics, including Tetlock, Haidt, Bartels and Taber, but none have responded with any substance (one got very annoyed and chewed me out for wasting his time; http://www.overcomingbias.com/ ) - most don't respond at all. I get that -- everyone is busy, and crackpots with new ideas are a dime a thousand.

Anyway, I stumbled across this site this morning while looking for some online content about the affect heuristic. I thought I would introduce myself and try to fit in, if I'm up to the standards here. My interest is in trying to open a dialog with one or more people who know this science better than I do, so that I can get some feedback on whether what I am trying to do is a waste of time. As a novice, I suspect that I misunderstand the science and overestimate the limits of human rationality in politics in a society that lives under the US constitution (free speech).

My blog is here: http://dispol.blogspot.com/

Comment author: ChristianKl 06 May 2016 03:28:21PM 2 points [-]

First impressions from skim reading the blog:

Objective politics, defined as unbiased fact and reason in service to the public interest is described and defended. Biology-based objectivity, the last political frontier.

That points, for me, in the direction of objectivism, with all its problems. There are good reasons to be quite suspicious when someone claims that they don't have an ideology and their views are simply "objective".

What we need to do as a country is obvious.

Saying something like that without bringing forward a specific proposal suggests political ignorance to me.

Book reivew: Democracy for Realists

The blog isn't spell-checked.

Comment author: Lumifer 06 May 2016 02:40:12PM 2 points [-]

Is a short summary of your ideology or set of morals available somewhere on the 'net?

Comment author: Germaine 08 May 2016 04:08:04PM *  1 point [-]

I have tried for short summaries, but it hasn't worked. Very short summary: A "rational" ideology can be based on three morals (or core ideological principles): (1) fidelity to "unbiased" facts and (2) "unbiased" logic (or maybe "common sense" is the better term), both of which are focused on (3) service to an "objectively" defined conception of the public interest.

Maybe the best online attempts to explain this are these two items:

  1. an article I wrote for IVN: http://ivn.us/2015/08/21/opinion-america-needs-move-past-flawed-two-party-ideology/

  2. my blog post that tries to explain what an "objective" public interest definition can be and why it is important to be broad, i.e., so as to not impose fact- and logic-distorting ideological limits on how people see issues in politics: http://dispol.blogspot.com/2015/12/serving-public-interest.html

I confess, I am struggling to articulate the concepts, at least to a lay audience and maybe to everyone. That's why I was really jazzed to come across Less Wrong -- maybe some folks here will understand what I am trying to convey. I was under the impression that I was alone in my brand of politics and thinking.

Comment author: Lumifer 09 May 2016 12:56:49AM *  1 point [-]

(1) fidelity to "unbiased" facts and (2) "unbiased" logic (or maybe "common sense" is the better term)

These are not particularly contentious, given how they both can be rephrased as "let's be really honest". However...

service to an "objectively" defined conception of the public interest

is somewhat more problematic. I assume we are speaking normatively, not descriptively, by the way, since real politics is nothing like that.

Off the top of my head, there are two big issues here. One is the notion of the "public interest": how do you aggregate the very diverse desires of the public into a single "public interest", and how do you resolve conflicts between incompatible desires?

The other one is what makes it "objective", even with the quotes. People have preferences (or values), some of them are pretty universal (e.g. the biologically hardwired ones), but some are not. Are you saying that some values should be uplifted into the "objective" realm, while others should be cast down into the "deviant" pit? Are there "right" values and "wrong" values?

Comment author: Gram_Stone 08 May 2016 11:06:34PM 1 point [-]

I read your article on IVN, so this is mostly a response to that.

I do think that it would be great if people thought about politics in a scientifico-rational way. And it isn't great that you really only have two options in the United States if you want to join a coalition that will actually have some effect. It's true that having two sets of positions that cannot be mismatched without signaling disloyalty results in a false-dichotomous sort of thinking. But it seems important to think about why things are in this state in the first place. Political parties can't be all bad, they must serve some function.

Think about labor unions and business leaders. Employees have some recourse if they dislike their boss. They can demand better conditions or pay, and they can also quit and go to another company. But we know that when employees do this, it usually doesn't work. They usually get fired and replaced instead. The reason is that if an employer loses one employee out of one hundred, then they will be operating at 99% productivity, while the employee that quit will be operating at 0% productivity for some time. Labor unions solve the coordination problem.

Likewise, the use of a political party is that it offers bargaining power. Any scientifico-rational political platform will have to solve such a coordination problem, and they will have to use a different solution from the historical one: ideology. That's not easy. Which is not to say that it's not worth trying.

So, it's not enough that citizens be able to reveal their demand for goods and services from the government, or other centers of power; it's also necessary that officials have incentives to provide the quality and quantity of goods and services demanded. In democracy this is obtained through the voting mechanism, among other things. A politician will have a strong incentive to commit an action that obtains many votes, but barely any incentive to commit an action that will obtain few votes, even if they have detailed information about what policies would result in the greatest increase in the public interest in the long run, and even if the action that obtains the most votes is not the policy that maximizes public interest in the long run. They would not be threatened by the loss of a few rational votes, or swayed by the gain of a few rational votes, any more than the boss would be threatened by the loss of one employee.

It seems difficult to me to fix something like this from the inside. I think a competitive, external government would be an easier solution. Seasteading is an example of an idea along these lines. I don't believe that private and public institutions are awfully different in their functions; we often see organizations on each side of the boundary performing similar functions at different times, even if some functions are more likely to be delegated to one than the other, and it seems to me that among national governments there is a deplorable lack of competition.

In the market, the price mechanism provides both a way for consumers to reveal their demand and a way to incentivize suppliers to supply the quality and quantity of goods and services demanded. If a firm is inefficient, then it goes out of business. However, public institutions are different, in that there often is no price mechanism in the traditional sense. If your government sucks, you mostly cannot choose to pay taxes to a different one. Exit costs are very high as a citizen of most countries.

And the existing international community has monopolized the process of state foundation. You need territory to be sovereign, but all territory has been claimed by pre-existing states, except for Marie Byrd Land in Antarctica, which the U.S. and Russia reserve the right to make a claim to, and the Antarctic Treaty System does not permit sovereignty claims there. The only other option is the high seas. Scott Alexander's Archipelago and Atomic Communitarianism is related to this.

I wonder if you've thought about stuff like that. I don't think that our poor political situation is only a matter of individuals having bad epistemology.

Comment author: JohnC2015 24 March 2016 07:12:24AM 1 point [-]

Hi Less Wrong,

I am John Chavez from the Philippines. I'm a part-time teacher in a community college, teaching computer hardware servicing and maintenance to out-of-school youths.

As I place much value on helping others in my community and reaching out to people who need help, I came to know about Intentional Insights on Facebook, which led me here to Less Wrong. I have been here for a while reading several published articles. There are a lot of articles here that I really love to read, although I must admit that there are a few that I found confusing and that I disagree with.

Hence, I am introducing myself to you to formally start my quest of learning more about being a rationalist.

I hope this will be enough for you to welcome me into your community. I would be humbled to know your thoughts :)

Thanks!

Comment author: SquirrelInHell 24 March 2016 07:38:34AM *  2 points [-]

Hi John! From what you have described, I think it could be a better experience for you if you start with more structured reading, which is (at the moment) best provided by Eliezer's Rationality: From AI to Zombies. You can download it for free if you follow the link. It may seem long, but it's well worth the read.

Comment author: JohnC2015 24 March 2016 10:16:35AM 1 point [-]

Cool! Thank you. I will definitely read it. :)

Comment author: BertM 06 December 2015 01:44:48PM *  1 point [-]

Retracted

Comment author: [deleted] 06 December 2015 03:01:54PM 2 points [-]

Please don't regret it! Welcome.

Comment author: rodomonte 02 December 2016 05:05:00PM 0 points [-]

Hello. I found this site because Wei Dai uses it, but I find it a copy of Reddit, Hacker News, etc. Honestly, I just want to make a proposal here, since I stopped believing in human intelligence a long time ago; I only believe in a sort of "social physics" that constantly builds new facts and organisations. Anyway.

I will not give myself any single label, sorry. But I'm sure good money will arise from a good social network system, and since the current ones lack any intelligence (.org XD), maybe you could make the case with some changes. I value this at more than 30% probability, and so I'm writing this little text now. I hope the first non-idiotic money will arise from this group of minds.

Comment author: [deleted] 05 October 2016 05:44:14PM *  0 points [-]

Hello everyone,

I'm a PhD student in social psychology focusing my time mainly on applied statistics and quantitative methods for the study of brain and behavior. My research focuses on the way that people's goals influence the way they reason and form judgments, but I've also dabbled a bit in self-regulation/self-control.

Perhaps my attraction to this community is based on the fact that I feel that my field is an unfriendly environment for the free exploration of novel or uncommon ideas. Specifically, I suspect that many of the models of human decision-making being put forth by our field over-estimate the tendency for biases/heuristics to lead to errors or poor judgments. For example, few (if any) of my colleagues are aware that our stereotypes of other groups tend to be highly accurate and this effect is one of the largest effects in all of social psychology. It appears that, in many cases, our biases tend to improve accuracy and decision-making quality. However, to utter phrases like "stereotype accuracy" around most social psychologists is to invite suspicion about one's underlying motives. I'm here not because I want to talk about stereotype accuracy in particular, but because I'd like to be able to consider such an idea without the threat of damaging my reputation and career.

I also like thinking about AI and how an (accurate) understanding of human reasoning in information-starved contexts could help us design AI responsibly, but that's just whipped cream.