A few notes about the site mechanics
To post your first comment, you must have carried out the e-mail confirmation: When you signed up to create your account, an e-mail was sent to the address you provided with a link that you need to follow to confirm your e-mail address. You must do this before you can post!
Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).
You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. This immediate easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.
However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning, not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.
Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.
Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.
EXTRA FEATURES:
There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: in the top-level forum, an author needs 20 karma to publish a post, and any upvotes or downvotes on a top-level post are multiplied by 10. As a result there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion.
A few notes about the community
If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood and what you might still need to take some time explaining.
If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)
If you want to write a post about a LW-relevant topic, awesome! I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma—honestly, you don't know what you don't know about the community norms here.)
Alternatively, if you're still unsure where to submit a post, whether to submit it at all, would like some feedback before submitting, or want to gauge interest, you can ask, provide your draft, or summarize your submission in the latest open comment thread. In fact, Open Threads are intended for anything 'worth saying, but not worth its own post', so please do dive in! There is also the unofficial Less Wrong IRC chat room, and you might also like to take a look at some of the other regular special threads; they're a great way to get involved with the community!
If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter
A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.
A list of some posts that are pretty awesome
I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:
More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.
Welcome to Less Wrong, and we look forward to hearing from you throughout the site!
Once a post gets over 500 comments, the site stops showing them all by default. If this post has 500 comments and you have 20 karma, please do start the next welcome post; a new post is a good perennial way to encourage newcomers and lurkers to introduce themselves. (Step-by-step, foolproof instructions here; takes <180 seconds.)
If there's anything I should add or update on this post (especially broken links), please send me a private message—I may not notice a comment on the post.
Finally, a big thank you to everyone who helped write this post via its predecessors!
Comments (635)
Hi,
I am Falk. I am a PhD student in the computational cognitive science lab at UC Berkeley. I develop and test computational models of bounded rationality in decision making and reasoning. I am particularly interested in how we can learn to be more rational. To answer this question I am developing a computational theory of cognitive plasticity. I am also very interested in self-improvement, and I am hoping to develop strategies, tools and interventions that will help us become more rational.
I have written a blog post on what we can do to accelerate our cognitive growth that I would like to share with the LessWrong community, but it seems that I am not allowed to post it yet.
I look forward to reading your post.
Hi all, I'm new. I've been browsing the forum for two weeks and only now have I come across this welcome thread, so nice to meet you! I'm quite interested in the control problem, mainly because it seems like a very critical thing to get right. My background is a PhD in structural engineering and developing my own HFT algorithms (which for the past few years has been my source of both revenue and spare time). So I'm completely new to all of the topics on the forum, but I'm loving the challenge. At the moment I don't have any karma points so I can't publish, which is probably a good thing given my ignorance, so may I post some doubts and questions here in the hope of being pointed in the right direction? Thanks in advance!
Do your algorithms require co-location, and are they sensitive to latency?
Hi Lumifer. Yes, to some extent. At the moment I don't have co-location so I minimized latency as much as possible in other ways and have to stick to the slower, less efficient markets. I'd like to eventually test them on larger markets but I know that without co-location (and maybe a good deal of extra smarts) I stand no chance.
Hello and welcome! Don't be shy about posting; if you're a PhD making money with HFT, I think you are plenty qualified, and external perspectives can be very valuable. Posting in an open thread doesn't require any karma and will get you a much bigger audience than this welcome thread. (For maximum visibility you can post right after a thread's creation.)
Hi John, thanks for the encouragement. One thing that strikes me of this community is how most people make an effort to consider each other's point of view, it's a real indicator of a high level of reasonableness and intellectual honesty. I hope I can practice this too. Thanks for pointing me to the open threads, they are perfect for what I had in mind.
I have been a Less Wrong user with an anonymous account since the Overcoming Bias days. I decided to create this new account using my real name.
Hello.
I'm currently attempting to read through the MIRI research guide in order to contribute to one of the open problems. Starting from Basics. I'm emulating many of Nate's techniques. I'll post reviews of material in the research guide at lesswrong as I work through it.
I'm mostly posting here now just to note this. I can be terse at times.
See you there.
Hello!
I’ve lived in Berkeley for about six years. My girlfriend is going to medical school so we’re going to be moving to Boca Raton, Florida (most likely) or Columbus, Ohio in less than a month. I’m sad to be leaving the Bay Area but thrilled to be with my girlfriend when she starts such an exciting chapter of her life. I’m also very fortunate that I can handle nearly all my business online.
I co-founded a startup devoted to making a web game with an old buddy of mine. This same guy introduced me to LW.
Critical thinking and debate has been a focus of mine since I was quite young so LW fit right into my interests. I’m very interested in instrumental/practical applications of rationality. I’ve been lurking for many years and finally decided to make an account to get over my fear of online embarrassment given my unfamiliarity with a lot of the lexicon and protocol on LW.
Some passions of mine are movies, seeking out novel experiences (examples are shooting an AK-47, judging a singing competition, and visiting Pixar), and martial arts.
I’m also interested in effective altruism and AI research but still have a lot of learning to do, especially in the latter.
Welcome!
You may want to check out some of AnnaSalamon's old posts for some things to try as far as applied rationality goes, if you haven't already.
Have you been / are you interested in connecting with the Bay Area Rationalist or EA community while you're still here?
Thanks for the tip! I've read some of her posts but will look into the ones I haven't.
We're going to be moving in about two weeks and are fairly busy before so probably not going to be able to. I regret not going to a Berkeley Meetup while I had more time.
Hello all =)
I have been reading LW for more than a year. In the past I organized HPMOR book club meetups in Kyiv, Ukraine (https://vk.com/hpmor_meeting and https://vk.com/efficient_reading5).
Now I am starting to organize the first general LW meetup in Kyiv; our Google group: http://groups.google.com/d/forum/LessWrong-Kyiv
At the first meetup we will discuss Daniel Kahneman's "Thinking, Fast and Slow", as well as what we should do in the future =)
If you can, please give any useful suggestions about what the first meetup should cover and how it should be run (I have read the LW PDF on how to organize meetups).
Awesome! Note that you can advertise your meetup further using the LW meetup system.
Thanks,
http://lesswrong.com/meetups/1fd
Done =)
Hello there! I mean, here and there, too! I will do my best to come, although I have not read the book. Good luck!
A Challenger Has Arrived! Hello, yes, I'd like to announce that I am successfully existing for the first time in forever. I've been a lurker for quite some time, and have finished Eliezer's book. As I've stepped up my studies and plan to continue doing so, I've decided that scouting for a party to join would be wise.
Right now I'm finalizing my grasp of Rationality: From A.I. to Zombies, and organizing some notes I have on my personal struggle with willpower depletion. I would really appreciate if anyone knows of any site-external sources I could devour, in service to these goals.
From this basic grasp of rationality technique I will be departing to MIRI's research guide, so if you're currently on a quest to join the best, I certainly could use some companions in case I stumble.
Thanks, PhoenixComplex7
Welcome! Re: willpower stuff, I found this guy's writing very helpful several years ago. You can get his free book by putting your email in at the bottom of this page. (More specifics on the willpower issues you are facing might allow me to give more targeted advice.)
Hey! <retracted because I changed my mind about the sensibleness of putting personal info on the internet and more people started recognising my name than I'm happy with>
You seem legit. Also, wait, the #lesswrong IRC channel stopped being dead?
--
Hi Act, welcome!
I will gladly converse with you in Russian if you want to.
Why do you want a united utopia? Don't you think different people prefer different things? Even if we assume the ultimate utopia is uniform, wouldn't we want to experiment with different things to get there?
Would you feel "dwarfed by an FAI" if you had little direct knowledge of what the FAI is up to? Imagine a relatively omniscient and omnipotent god taking care of things on some (mostly invisible) level but doesn't ever come down to solve your homework.
--
P.S.
I am dismayed that you were ambushed by the far right crowd, especially on the welcome thread.
My impression is that you are highly intelligent, very decent and admirably enthusiastic. I think you are a perfect example of the values that I love in this community and I very much want you on board. I'm sure that I personally would enjoy interacting with you.
Also, I am confident you will go far in life. Good dragon hunting!
--
So pointing out flaws in someone's position is now "ambushing" them?
Disagreeing is OK. Disagreeing is often productive. Framing your disagreement as a personal attack is not OK. Let's treat each other with respect.
I wouldn't call it an ambush, but in any case Acty emerged from that donnybrook in quite a good shape :-)
I sympathize with your sentiment regarding friendship, community etc. The thing is, when everyone is friends the state is not needed at all. The state is a way of using violence or the threat of violence to resolve conflicts between people in a way which is as good as possible for all parties (in the case of egalitarian states; other states resolve conflicts in favor of the ruling class). Forcing people to obey any given system of law is already an act of coercion. Why magnify this coercion by forcing everyone to obey the same system rather than allowing any sufficiently big group of people to choose their own system?
Moreover, in the search of utopia we can go down many paths. In the spirit of the empirical method, it seems reasonable to allow people to explore different paths if we are to find the best one.
I used "homework" as a figure of speech :)
This might be so. However, you must consider the tradeoff between this sadness and efficiency of dragon-slaying.
The problem is, if you instantly go from human intelligence to far superhuman, it looks like a breach in the continuity of your identity. And such a breach might be tantamount to death. After all, what makes tomorrow you the same person as today you, if not the continuity between them? I agree with Eliezer that I want to be upgraded over time, but I want it to happen slowly and gradually.
I do think that some kind of organisational cooperative structure would be needed even if everyone were friends - provided there are dragons left to slay. If people need to work together on dragonfighting, then just being friends won't cut it - there will need to be some kind of team, and some people delegating different tasks to team members and coordinating efforts. Of course, if there aren't dragons to slay, then there's no need for us to work together and people can do whatever they like.
And yeah - the tradeoff would definitely need to be considered. If the AI told me, "Sorry, but I need to solve negentropy and if you try and help me you're just going to slow me down to the point at which it becomes more likely that everyone dies", I guess I would just have to deal with it. Making it more likely that everyone dies in the slow heat death of the universe is a terribly large price to pay for indulging my desire to fight things. It could be a tradeoff worth making, though, if it turns out that a significant number of people are aimless and unhappy unless they have a cause to fight for - we can explore the galaxy and fight negentropy and this will allow people like me to continue being motivated and fulfilled by our burning desire to fix things. It depends on whether people like me, with aforementioned burning desire, are a minority or a large majority. If a large majority of the human race feels listless and sad unless they have a quest to do, then it may be worthwhile letting us help even if it impedes the effort slightly.
And yeah - I'm not sure that just giving me more processor power and memory without changing my code counts as death, but simultaneously giving a human more processor power and more memory and not increasing their rationality sounds... silly and maybe not safe, so I guess it'll have to be a gradual upgrade process in all of us. I quite like that idea though - it's like having a second childhood, except this time you're learning to remember every book in the library and fly with your jetpack-including robot feet, instead of just learning to walk and talk. I am totally up for that.
We don't need the state to organize. Look at all the private organizations out there.
The cause might be something created artificially by the FAI. One idea I had is a universe with "pseudodeath" which doesn't literally kill you but relocates you to another part of the universe, which results in loss of connections with all the people you knew. Like in Border Guards but involuntary, so that human communities have to fight with "nature" to survive.
Sort of a cosmic witness relocation program! :).
The following is pure speculation. But I imagine an FAI would begin its work by vastly reducing the chance of death, and then raising everyone's intelligence and energy levels to those of John von Neumann. That might allow us to bootstrap ourselves to superhuman levels with minimal guidance.
I think it's a bit of a shame that society seems to funnel our most intelligent, logical people away from social science. I think social science is frequently much more helpful for society than, say, string theory research.
Note: I do find it plausible that doing STEM in undergrad is a good way to train oneself to think, and the best combo might be a STEM undergrad and a social science grad degree. You could do your undergrad in statistics, since statistics is key to social science, and try to become the next Andrew Gelman.
As advice for others like me, this is good. For me personally it doesn't work too well; my A level subjects mean that I won't be able to take a STEM subject at a good university. I can't do statistics, because I dropped maths last year. The only STEM A level I'm taking is CompSci, and good universities require maths for CompSci degrees. I could probably get into a good degree course for Linguistics, but it isn't a passionate adoration for linguistics that gets me up in the mornings. I adore human and social sciences.
I don't plan to be completely devoid of STEM education; the subject I actually want to take is quite hard-science-ish for a social science. If I get in, I want to do biological anthropology and archaeology papers, which involve digging up skeletons and chemically analysing them and looking at primate behaviour and early stone tools. It would be pretty cool to do some kind of PhD involving human evolution. From what I've seen, if I get onto the course I want to get onto, it'll teach me a lot of biology and evolutionary psychology and maybe some biochemistry and linguistics.
While archaeology certainly seems fun, do you think it will help you understand how to build a better world?
--
No. The problem of building a state out of 10,000 people whose fastest means of transport is the horse and who have no math is remarkably different from the problem of building a state of tens of millions of people in the age of the internet, cellphones, fast airplanes, and cars that allow people to travel quickly.
The Ancient Egyptians didn't have the math to even think about running a randomized trial to find out whether a certain policy will work. Studying them doesn't tell you anything about how to get our current political system to be more open to making policy based on scientific research.
I think cognitive psychologists who actually did well controlled experiments were a lot more useful for learning about biases and fallacies than evolutionary psychology.
Most people in political science don't do it well. I don't know of a single student body that changed to a new political system in the last decade.
I studied at the Free University of Berlin, which has a very interesting political structure that came out of '68. At the time there was a rejection of representative democracy, and so even though the government of Berlin wants the student bodies of universities in Berlin to be organised according to representative democracy, our university effectively isn't. Politics students thought really hard around '68 about how to create a more soviet-style democracy, and the system is still in operation today.
Compared to designing a system like that, today's politics students are slacking. They aren't practically oriented.
If you are interested in rationality problems, there is the field of decision science. It's likely more fruitful than anthropology. Having a good grasp of academic decision science would be helpful when it comes to designing political systems, and likely not enough people in political science deal with that subject.
Are you aware that the American Anthropological Association dropped science from their long-range plan 5 years ago?
The two are not mutually exclusive.
--
The problem with studying people in the first villages is not only that their problems don't map directly to today's. It's also that it gets really hard to get concrete data. It's much easier to do good science when you have good, reliable data.
With 10,000 people you can solve a lot via tribal bonds and clans. Families stick together. You can also do a bit of religion and everyone follows the wise local priest. Those solutions don't scale well.
You will likely become like the people who surround you when you go to university. You also build relationships with them.
Going to Cambridge is good. Cambridge draws a lot of intelligent people together and also provides you with very useful contacts for a political career. On the other hand, that means you have to go to those places in Cambridge where the relevant people are. Find out which professors at Cambridge actually do good social science, then go to their classes.
Just make sure that you don't get lost and go on a career of digging up old stuff and not affecting the real world. A lot of smart people get lost in programs like that. It's like smart people who get lost in theoretical physics.
Is that the system where everyone can vote, but there's only one candidate?
No, that's not the meaning of the word soviet. Soviet translates into something like "council" in English.
Reducing elections to a single candidate also wouldn't fly legally. You can't just forbid people from being a candidate without producing a legal attack surface.
As I said, it's actually a complex political system that needs smart people to set up.
It's like British democracy: it also happens to be a "democracy" in which there is a queen, and the prime minister went to Eton and Oxford and wants to introduce barriers on free communication that are in some ways more totalitarian than what the Chinese government dares to do.
Democracy always gets complicated when it comes to the details ;).
In English, "Soviet" is the adjectival form of "USSR".
Never mind the word. What is the actual structure at the Free University of Berlin that you're referring to? And in 1968, did they believe that this was how things were done in the USSR?
If you consider finance a subset of social science then the U.S. puts a lot of its best and brightest there.
Hedge funds do manage to employ the best and brightest, on the other hand I'm not sure whether the same is true for the academic subject of finance.
Finance is not social science. I think it's more similar to engineering: you need to have a grasp of the underlying concepts and be able to do the math, but the real world will screw you up on a very regular basis and so you need to be able to deal with that.
Behavioral finance is supposedly a big thing.
Taking psychology into consideration doesn't make finance a social science any more than sociological factors make civil engineering a social science.
I agree wholeheartedly. A field like theoretical physics is much more glamorous to a large number of intelligent people. I think it's partly signaling, but I'm not sure that explains everything.
What makes the least sense to me are people who seem to believe (or even explicitly confirm!) that they are only interested in things which have no applications. Especially when these people seem to disparage others who work in applied fields. I imagine this teasing might explain a bit of why so many smart people work in less helpful fields.
I think to an extent, physics is more intellectually satisfying to a lot of smart people. It's much easier to prove things definitively in maths and physics. You can take a test and get right answers, and be sure of your right answers, so when you're sufficiently smart it feels like a lot of fun to go around proving things and being sure of yourself. It feels much less satisfying to debate about which economics theories might be better.
Knowing proven facts about high level physics makes you feel like an initiate into the inner circles of secret powerful knowledge, knowing a bunch about different theories of politics (especially at first) just makes you feel confused. So if you're really smart, 'hard' sciences can feel more fun. I know I certainly enjoy learning computer science and feeling the rush of vague superiority when I fix someone's computer for them (and the rush of triumph when my code finally compiles). When I attempt to fix people's sociological opinions for them, there's no rush of vague superiority, just a feeling of intense frustration and a deeply felt desire to bang my head against the wall.
Then there's the Ancient Greek cultural thing where sitting around thinking very hard is obviously superior to going out and doing things - cool people sit inside their mansions and think, leaving your house and mucking around in the real world actually doing things is for peasants - which has somehow survived to this day. The real world is dirty and messy and contains annoying things that mess up your beautiful neat theories. Making a beautiful theory of how mechanics works is very satisfying. Trying to actually use the theory to build a bridge when you have budget constraints and a really big river is frustrating. Trying to apply our built up knowledge about small things (molecules) to bigger things (cells) to even bigger things (brains) to REALLY BIG AND COMPLICATED things (lots and lots of brains together, eg a society) is really intensely frustrating. And the intense frustration and higher difficulty (more difficult to do it right, anyway) means there's more failure and less conclusive results / slower progress, which leads some people to write off social science as a whole. The rewarding rush of success when your beautifully engineered bridge looks shiny and finished is not something you really get in the social sciences, because it will be a very long time before someone feels the rewarding rush of success that their beautiful preference-satisfying society is shiny and perfect.
I do think that the natural sciences are hopelessly lost without the social sciences, but for most super-clever people, is studying natural science more fun than doing social science? Definitely - I mean, while the politics students are busy reading books and banging their heads against walls and yelling at each other, physics students are putting liquid nitrogen in barrels of ping pong balls so that the whole thing explodes! (I loved chemistry in secondary school for years, right up until I finally caught on that coloured flames were the closest we were going to get to scorching our eyebrows off. Something about health and safety, thirteen year olds, and fire. I wish I hadn't stopped loving chemistry, because I hear once you're at university they do actually let you set things on fire sometimes.)
I don't think that something being (more) mathematically rigorous explains all of what we see. Physicists at one time used to study fluid dynamics. Rayleigh, Kelvin, Stokes, Heisenberg, etc., all have published in the field. You can do quite a lot mathematically in fluids, and I have felt like part of some inner circle because of what I know about fluid dynamics.
Now the field has been basically displaced by quantum mechanics; it's usually not considered part of "physics" in some sense, and it's less popular than you might expect if mathematical tractability were what attracted people to a subject. Physicists are generally taught only the most basic concepts in the field. My impression is that the majority of physics undergrads couldn't identify the Navier-Stokes equations, which are the most basic equations for the movement of a fluid.
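For readers who haven't seen them, the incompressible Navier-Stokes equations mentioned above can be written as a momentum balance plus the incompressibility constraint:

```latex
% Incompressible Navier-Stokes equations:
% momentum balance and the continuity (incompressibility) constraint
\rho\left(\frac{\partial \mathbf{u}}{\partial t}
        + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad
\nabla\cdot\mathbf{u} = 0
```

where $\mathbf{u}$ is the velocity field, $p$ the pressure, $\rho$ the (constant) density, $\mu$ the dynamic viscosity, and $\mathbf{f}$ any body force such as gravity.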
It could also be that fluids have obvious practical applications (aerodynamics, energy, etc.) and this makes the subject distasteful to pedants. That's just speculation, however. I'm really not sure why fields like physics, etc., are so attractive to some people, though I think you've identified parts of it.
You do make a good point about the sense of completion being different in engineering vs. social science. I suppose the closest you could get in social science is developing some successful self-help book or changing public policy in a good way, but I think these are much harder than building things.
I think there's also definitely a prestige/coolness factor which isn't correlated with difficulty, applicability, or usefulness of the field.
Quantum mechanics is esoteric and alien and weird and COOL and saying you understand it whilst sliding your glasses down your nose makes you into Supergeek. Saying "I understand how wet stuff splashes" is not really so... high status. It's the same thing that makes astrophysics higher status than microbiology even though the latter is probably more useful and saves more lives / helps more people - rockets spew fire and go to the moon, bacteria cells in a petri dish are just kind of icky and slimy. I am quite certain that, if you are smart enough to go for any field you want, there is a definite motivation / social pressure to select a "cool" subject involving rockets and quarks and lasers, rather than a less cool subject involving water and cells or... god forbid... political arguments.
And, hmm, actually, not quite true on the last point - a social scientist could develop an intervention program, like a youth education program, that decreases crime or increases youth achievement/engagement, and it would probably feel awesome and warm and fuzzy to talk to the youths whose lives were improved by it. So you could certainly get closer than "developing some successful self-help book". It is certainly harder, though, I think, and there's certainly a higher rate of failure for crime-preventing youth education programs than for modern bridge-building efforts.
To be honest, I found QM to be the least interesting subject of all the physics I've learned about.
Also, I don't think the features you highlighted work either. Fluid dynamics has loads of counterintuitive findings, perhaps even more so than QM, e.g., streamlining can increase drag at low Reynolds numbers, increasing speed can decrease drag in certain situations ("drag crisis"). Fluids also has plenty of esoteric concepts; very few people reading the previous sentence likely know what the Reynolds number or drag crisis is.
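For the curious, the Reynolds number is just the ratio of inertial to viscous forces in a flow, Re = ρvL/μ. Here's a minimal sketch; the golf-ball numbers are rough illustrative values I'm supplying, not figures from anywhere in this thread:

```python
def reynolds_number(density, velocity, length, viscosity):
    """Re = rho * v * L / mu: ratio of inertial to viscous forces.

    At very low Re, viscous forces dominate (this is the regime where
    streamlining can actually increase drag); around Re ~ 1e5 a sphere
    can hit the "drag crisis", where the boundary layer goes turbulent
    and the drag coefficient suddenly drops.
    """
    return density * velocity * length / viscosity

# Rough numbers for a golf ball in air (illustrative assumptions):
# air density ~1.2 kg/m^3, viscosity ~1.8e-5 Pa*s,
# diameter ~0.043 m, speed ~70 m/s
re = reynolds_number(1.2, 70.0, 0.043, 1.8e-5)
print(f"Re = {re:.2e}")  # on the order of 1e5, near the drag-crisis regime
```

The dimple trick on golf balls works by deliberately triggering that turbulent transition early, which is why roughening *reduces* drag there.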
Physicists, even astrophysicists, know little more about how rockets work than educated laymen. Rocketry is part of aerospace engineering, of which the foundation is fluid dynamics. Maybe rocketry is a counterexample, but I don't really think so, as there are a lot more people who think rockets are interesting than who know what a de Laval nozzle is. Even that has some counterintuitive effects; the fluid accelerates in the expansion!
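That nozzle effect falls out of the standard area-velocity relation for isentropic compressible flow (a textbook result, sketched here from memory):

```latex
\frac{dA}{A} = \left(M^2 - 1\right)\frac{du}{u}
```

For subsonic flow (M < 1) a widening duct slows the fluid down, as intuition suggests; but once the flow is supersonic (M > 1) the sign flips, so the diverging section of a de Laval nozzle accelerates it further.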
You make me suddenly, intensely curious to find out what a Reynolds number is and why it can make streamlining increase drag. I am also abruptly realising that I know less than I thought about STEM fields, given I just kind of assumed that astrophysicists were the official People Who Know About Space and therefore rocketry must be part of their domain. I don't know whether I want to ask if you can recommend any good fluid dynamics introductions, or whether I don't want to add to the several feet high pile of books next to my bed...
Okay - so why do you think quantum mechanics became more "cool" than fluid dynamics? Was there a time when fluid dynamics held the equivalent prestige and mystery that quantum mechanics has today? It clearly seems to be more useful, and something that you could easily become curious about just from everyday events like carrying a cup of tea upstairs and pondering how near-impossible it is not to spill a few drops if you've overfilled it.
The best non-mathematical introduction I have seen is Shape and Flow: The Fluid Dynamics of Drag. This book is fairly short; it has 186 pages, but each page is small and there are many pictures. It explains some basic concepts of fluid dynamics like the Reynolds number, what controls drag at low and high Reynolds numbers, why golf balls (or roughened spheres in general) have less drag than smooth spheres at high Reynolds number (this does not imply that roughening always reduces drag; it does not on streamlined bodies, as the book explains), how drag can decrease as you increase speed in certain cases, how wind tunnels and similar scale modeling work, etc.
You could also watch this series of videos on drag. They were made by the same person who wrote Shape and Flow. There is also a related collection of videos on other topics in fluid dynamics.
Beyond that, the most popular undergraduate textbook by Munson is quite good. I'd suggest buying an old edition if you want to learn more; the newer editions do not add anything of value to an autodidact. I linked to the fifth edition, which is what I own.
I'll offer a few possibilities about why fluids is generally seen as less attractive than QM, but I want to be clear that I think these ideas are all very tentative.
This study suggests that in an artificial music market, the popularity charts are only weakly influenced by the quality of the music. (Note that I haven't read this beyond the abstract.) Social influence had a much stronger effect. One possible application of this idea to different fields is that QM became more attractive for social reasons, e.g., the Matthew effect is likely one reason.
The vast majority of the field of fluid mechanics is based on classical mechanics, i.e., F = m a is one of the fundamental equations used to derive the Navier-Stokes equations. Maybe because the field is largely based on classical effects, it's seen as less interesting. This could be particularly compelling for physicists, as novelty is often valued over everything else.
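To make that concrete: the incompressible Navier-Stokes momentum equation really is just F = ma written per unit volume (standard form, quoted from memory):

```latex
\underbrace{\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)}_{\text{mass}\times\text{acceleration}}
= \underbrace{-\nabla p}_{\text{pressure force}}
+ \underbrace{\mu\nabla^{2}\mathbf{u}}_{\text{viscous force}}
+ \underbrace{\rho\mathbf{g}}_{\text{body force (gravity)}}
```

The left side is mass times acceleration of a fluid parcel; the right side is the sum of forces acting on it. Entirely classical, which may be exactly why it reads as less exotic than QM.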
I've also previously mentioned that fluid dynamics is more useful than quantum mechanics, so people who believe useless things are better might find QM more interesting.
There also is the related issue that a wide variety of physical science is lumped into the category "physics" at the high school level, so someone with a particular interest might get the mistaken impression that physics covers everything. I majored in mechanical engineering in college, and basically did it because my father did. My interest even when I was a teenager was fluids, but I hadn't realized that physicists don't study the subject in any depth. I was lucky to have picked the right major. I suppose this is a social effect of the type mentioned above.
(Also, to be clear, I don't want to give the impression that more people do QM than fluids. I actually think the opposite is more likely to be true. I'm saying that QM is "cooler" than fluids.)
Fluid mechanics used to be "cooler" back in the late 1800s. Physicists like Rayleigh and Kelvin made seminal contributions to the subject, but neither received his Nobel for fluids research. I recall reading that two very famous fluid dynamicists of the early 20th century, Prandtl and Taylor, were recommended for the prize in physics, but neither received it, despite making foundational contributions to physics in the broadest sense of the word. Taylor speculated that the lack of Nobels for fluid mechanics was due to how the Nobel prize is awarded. I also recall reading that there were indications that the committee found the mathematical approximations used distasteful even when they were very accurate. Unfortunately those approximations were necessary at the time, and even today we still use approximations, though different ones. Maybe the lack of Nobels contributes to fluids not being as "cool" today.
Ooh, yay, free knowledge and links! Thank you, you're awesome!
The linked study was a fun read. I was originally a bit skeptical - it feels like songs are sufficiently subjective that you'll just like what your friends like or what's 'cool', whereas what subjects you choose to study ought to be the topic of a little more research and numbers - but after further reflection the dynamics are probably the same, since often the reason you listen to a song at all is because your friend recommended it, and the reason you research a potential career in something is because your careers guidance counselor or your form tutor or someone told you to. And among people who've not encountered 80k hours or EA, career choice is often seen as a subjective thing. It'd be like with Asch's conformity experiments where participants aren't even aware that they're conforming because it's subconscious, except even worse because it's subconscious and seen as subjective...
That seems like a very plausible explanation. There could easily be a kind of self-reinforcing loop, as well, like, "I didn't learn fluid dynamics in school and there aren't any fluid dynamics Nobel prize winners, therefore fluid dynamics isn't very cool, therefore let's not award it any prizes or put it into the curriculum..."
At its heart, this is starting to seem like a sanity-waterline problem like almost everything else. Decrease the amount that people irrationally go for novelty and specific prizes and "application is for peasants" type stuff, and increase the amount they go for saner things like the actual interest level and usefulness of the field, and prestige will start being allocated to fields in a more sensible way. Fluid dynamics sounds really really interesting, by the way.
Also perhaps worth noting that the effect within the LW subculture in particular may have to do with lots of LW users knowing a lot about ideas or disciplines where there are a lot of popular but wrong positions so they know how not to go astray. Throughout the Sequences, before you figure out how to do it right, you hear about how a bunch of other people have done it wrong: MWI, p-zombies, value theory, evolutionary biology, intellectual subcultures, etc. I don't know that there are any sexy controversies in fluid mechanics.
"I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather optimistic." (Horace Lamb)
(Indeed, today quantum electrodynamics makes correct predictions within one part per billion and fluid dynamics has an open million-dollar question.)
The bigger shame is the kind of BS that passes for humanities/social science these days.
Is that a fact? I've seen social scientists complain that social science is trying too hard to emulate the hard science.
Yes, most social science is cargo cult science. That's perfectly consistent with it being BS.
Look, it may very well be that social science is low-quality. But your comments in this thread are not at all up to LW standards. You need to cite evidence for your positions and stop calling people names.
I think there may be a self-reinforcing spiral where highly logical people aren't impressed by social science, leading them to avoid it, leading to social science being unimpressive to highly logical people because it's done by people who aren't highly logical. But I could be wrong - maybe highly logical people are misperceiving.
It's not just a self-reinforcing spiral. There is also a driver, namely since social science has more political implications and there is a lot of political control over science funding, social science selects for people willing to reach the "correct" conclusions even if they have to torture logic and the evidence to do so.
Well that's a self-reinforcing spiral of a different type. In general, I see a number of forces pushing newcomers to a group towards being similar to whoever the folks already in the group are:
The Iron Law of Bureaucracy, insofar as it's accurate.
Self-segregation. It's less aversive to interact with people who agree with you and are similar to you, which nudges people towards forming social circles of similar others.
Reputation effects. If Google has a reputation for having great programmers, other great programmers will want to work there so they can have great coworkers.
This is why it took someone like Snowden to expose NSA spying. The NSA was the butt of jokes in the crypto community for probably doing illicit spying long before Snowden... which meant people who cared about civil liberties didn't apply for jobs there (who wants to work for the evil empire?) (Note: just my guess as someone outside crypto; could be totally wrong on this one.)
Edit: evaporative cooling should probably be considered related to the bullet points above.
You're assuming that "intelligent" == "logical". That just ain't so and especially ain't so in social sciences.
"The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function." -- F. Scott Fitzgerald
Is there data about the average IQ of PhDs or professors in the social sciences?
I did a bit of googling, and it really surprised me. I thought the social science IQs would be lower on average than the STEM IQs, but I found a lot of conflicting stuff. Most sources seem to put physics and maths at the top of the ranks, but then there's engineering, social science and biology and I keep seeing those three in different orders. If you split up 'social science' and 'humanities', then humanities stays at the top and social science drops a few places, presumably because law is a very attractive profession for smart people (high prestige and pay) and law is technically a humanity. I'm not very confident in any of my Google results, though - they all looked slightly dodgy - so I'm not linking to any and would love it if someone else could find some better data.
I don't think it's an argument for disregarding social science, even if we did find data that showed all social scientists are stupider than STEM scientists. I mean, education came last for IQ on almost all of the lists I looked up. Education. Nobody is going to say that this means we should scrap education. If education really does attract a lot of stupid people, I think that is cause to try and raise the prestige and pay of education as a profession so that more smart people do it - not to cut funding for schools. (Though the reason education is so lowly ranked for IQ could be that a lot of countries don't require teachers to have education degrees, you get a different degree and then a teaching certificate, so you only take Education as a bachelor's if you want to do Childhood Studies and go into social care/work.)
It's clearly very important that our governments are advised by smart social scientists who can do experiments and tell them whether law X or policy Y will decrease the crime rate or just annoy people, or we're just letting politicians do whatever their ideology tells them to do. So, even if the IQ of people in social sciences is lower on average than the IQ of people in physics, we shouldn't conclude that social science is worthless - I think we should conclude that efforts must be made to get more smart people to consider becoming social scientists.
I also don't think you necessarily need a high IQ to be a successful social scientist. Being a successful mathematician requires a lot of processing power. Being a successful social scientist requires a lot of rationality and a lot of carefulness. If you're trying to do some problems with areas of circles, then you will not be distracted by your religious belief that pi is an evil number and cannot be the answer, nor will you have to worry about the line your circle is drawn with being a sentient line and deliberately mucking up your results. Social scientists don't need as much processing power to throw at problems, but it takes a lot of care and ability to change one's mind to do good social science, because you're doing research on really complicated high-level things with sentient agents who do weird things and you were probably raised with an ideology about it. Without a good amount of rationality, you will just end up repeatedly "proving" whatever your ideology says.
To make physics worthwhile you need high IQ; without that, you'd produce awful physics. To make social science worthwhile, you need to be very very careful and ignore what your ideology is telling you in the back of your mind; without that, you produce awful social science. Unfortunately, our society's ability to test for IQ is much better than our society's ability to test for rationality, which could explain why more people get away with BS social science than they do with BS physics. (The other explanation is that there are both awful social science papers and awful physics papers, but awful physics papers get ignored by everyone, whereas awful social science papers are immediately picked up by whatever group whose ideology they support and linked to on facebook with accompanying comments in all-caps.)
That might actually have been a problem once. Apparently the Pythagoreans had serious problems with irrational numbers...
And current mathematicians have them with infinitesimally small numbers ;)
Non-standard analysis is perfectly fine. Most mathematicians just don't deal with that kind of analysis.
I don't think modern mathematicians are going to drown someone for using infinitesimally small numbers...
Not really. Everyone agrees that calculus can be done with infinitesimals, but most mathematicians think that doing it with limits forms a better basis for going on to real analysis and epsilon-delta proofs later.
Unfortunately, what is actually happening is that the politicians and bureaucrats decide which policy they prefer for ideological reasons and then fund social scientists willing to produce "science" to justify the decision.
I'm not sure this is necessarily always true. There are absolutely certainly instances of this happening, but more and more governments are adopting "evidence-led policy" policies, and I'd hope that at least sometimes those policies do what they say on the tin. The UK has this: https://www.gov.uk/what-works-network and I'm going to try and do more reading up on it to see whether it looks like it's doing any good or just proving what people want it to prove.
It would certainly be preferable to live in a world where social scientists did good unbiased social science and then politicians listened to them. The question is, how do we change our current world into such a world? It certainly isn't by disparaging social science or assigning it low prestige. We need to make it so that science>ideology in prestige terms, which will be really tricky.
Yes; you'll get some politicians who actually want to reduce the crime rate and are willing to look for advice on how to do that effectively.
They're hard to spot, because all politicians want to look like that sort of politician, leaving the genuine ones hidden in a crowd of lookalikes...
People tried this in the late 19th/early 20th century (look up "technocracy" if you want to learn more). That's how we got into the mess we are in now.
Given that teachers who have a master's in education don't do better than teachers who don't, I think there's a good case for stripping the current professors in that field of their titles.
This fact gives very good support to an argument like "we should scrap master's programs in education". But it could also support "we should try out a few variations on master's programs in education to see if any of them do better than the current one, and if we find one that actually works, change the current one to that thing. If and only if we try a bunch of different variations and none of them work, we should scrap master's programs in education."
I mean, if we could create a program that consistently made people better teachers, that would be a very worthwhile endeavour. If our current program aiming to make people better teachers is utterly failing, maybe we should scrap that particular program, but surely we should also have a go at doing a few different programs and seeing if any of those succeed?
Who's responsible for creating such a program? The current professors. Given that they don't do so, we need different people.
It's not an argument for disregarding social science, but it is an argument to be more sceptical of its claims.
I disagree, but let me qualify that. If we define "successful" as "socially successful", that is, e.g., you have your tenure and your papers are accepted in reasonable peer-reviewed journals, then yes, you do not need a high IQ to be a successful social scientist.
However, if we define "successful" as "actually advancing the state of human knowledge", then I feel fairly confident in thinking that a high IQ is even more of a necessity for a social scientist than it is for someone who does hard sciences.
As you pointed out yourself, hard sciences are easier :-)
Ah, I'm sorry - I actually agree with everything you just wrote. I fear I may have miscommunicated slightly in the comment you're replying to.
You're right, I did point that out. And I do think that it can be harder in social science to weed out the good stuff from the bad stuff, and as such, you can get reasonably far in social science terms by being well-spoken and having contacts with a similar ideology even if your science isn't great. This is an undesirable state of affairs, of course, but I think it's just because doing good social science is really difficult (and in order to even know what good social science looks like, you've gotta be smart enough to do good social science). It's part of the reason I think I can be useful and make a difference by doing social science, if I can do good rational social science and encourage others to do more rational social science.
My point isn't that you don't need to be as smart to do social science; doing it well is actually harder, so you'd expect social scientists to be at least as smart as hard scientists. I think that social science and hard science require slightly different kinds of intelligence, and IQ tests better for the hard science kind rather than the social science kind.
It's really difficult to make a formula that calculates how to get a rocket off the ground. You have to crunch a lot of numbers. However, once you've come up with that formula, it is easy to test it; when you fire your rocket, does it go to the moon or does it blow up in your face?
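As a concrete example of how crisply a physics formula can be stated and then tested: here's the classic Tsiolkovsky rocket equation in a few lines (a sketch only; real mission planning adds gravity and drag losses):

```python
import math

def delta_v(exhaust_velocity, initial_mass, final_mass):
    """Tsiolkovsky rocket equation: dv = v_e * ln(m0 / m1).

    One equation, one unambiguous test: fire the rocket and check
    whether the achieved velocity change matches the prediction.
    """
    return exhaust_velocity * math.log(initial_mass / final_mass)

# A rocket that burns half its mass with 3000 m/s exhaust velocity:
dv = delta_v(3000.0, 10.0, 5.0)
print(f"delta-v = {dv:.0f} m/s")  # 3000 * ln(2), roughly 2079 m/s
```

Deriving it took real cleverness, but verifying it is brutally direct - which is exactly the asymmetry with social science hypotheses described below.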
It's really easy to come up with a social science intervention/hypothesis. You just say "people from lower classes have worse life outcomes because of their poor opportunities (so we should improve opportunities for poor people)" or "people from lower classes are in the lower class because they're not smart, and their parents were not smart and gave them bad genes, so they have worse life outcomes because they're not smart (so we should do nothing)" or "people from lower classes have a culture of underachievement that doesn't teach them to work hard (so we should improve life/study skills education in poor areas)". I mean, coming up with one of those three is way easier than designing a rocket. However, once you've come up with them... how do you test it? How do you design a program to get people to achieve higher? Run an intervention program involving education and improved opportunities for years, carefully guarding against all the ideological biases you might have and the mess that might be made by various confounding factors, and still not necessarily have a clear outcome? There's not as much difficulty in hypothesis-generation or coming-up-with-solutions, but there's a lot more difficulty in hypothesis-testing and successful-solution-implementing.
Hard science requires more raw processing power to come up with theories; social science requires more un-biased-ness and carefulness in testing your theories. They're subtly different requirements and I think IQ is a better indicator of the former than the latter.
I've studied Spanish for some time and would be happy to converse with you. I'm not sure if you only want to converse with native speakers. I've been wanting to learn how to talk about LessWrongian stuff in Spanish.
--
Why do you dream of doing Human, Social and Political Sciences?
--
To me it sounds like you're an intense, inspired person who wants to make a great impact and has a start at a few plans for doing it. Way to go!
In other words, you're completely mindkilled about the topics in question and thus your opinions about them are likely to be poorly thought out. For example, when you think about it, most of what is called "racism/sexism/etc." is actually perfectly valid Bayesian inference (frequently leading to true conclusions that some people would prefer not to believe). As for AIDS, are you also angry at people opposing traditional morality, since they also help spread AIDS?
Frankly, given your list, it looks like you merely stumbled upon the causes fashionable where you grew up and implicitly assumed that since everyone is so worked up about them they must be good causes. Consider that if you had grown up differently you would feel just as angry at anyone standing in the way of saving people's souls.
--
Ah, so you're a socialist?
Eh, I'm not sure I'm an anything-ist. Socialist ideas make a lot of sense to me, but really I'm a read-a-few-more-books-and-go-to-university-and-then-decide-ist. If I have to stand behind any -ist, it's going to be "scientist". I want to do research to find out which policies most effectively make people happy, and then I want to implement those policies regardless of whether they fall in line with the ideologies that seem attractive to me.
But yeah, I do think that it is morally wrong to let people suffer and morally right to make people happy, and I think you can create a lot of utility by taking money from people who already have a lot (leaving them with enough to buy food and maybe preventing them from going on holiday / buying a nice car) and giving it to people who have nothing (meaning they have enough money for food and education so they can survive and try and change their situation). So I agree with taxing people and using the money to provide universal healthcare, housing, food, etc. Apparently that makes me a socialist.
The correct term is social-democrat, actually. Among the different systems, social democracy has very rarely received full-throated support, but seems to have done among the best at handling the complexity of the values and value-systems that humans want to be materially represented in our societies.
(And HAHAHA!, finally I can just come out and say that without feeling the need to explain reams and reams of background material on both value-complexity and left-wing history!)
Oh, that's all well and good. I just tend to bring up socialism because I think that "left-wing politics" is more of a hypothesis space of political programs than a single such program (ie: the USSR), but that "bad vibes" in the West from the USSR (and lots and lots of right-wing propaganda) have tended to succeed in getting people to write off that entire hypothesis space before examining the evidence.
I do think that an ideally rational government would be "more" left-wing than right-wing, as current alignments stand, but I too think it would in fact be mixed.
Have some reading material!
<rolls eyes> ...among the various socio-political systems the one I prefer is the best one because it is the best... X-)
Actually, in voting and activism, I'm a full-throated socialist. Social democracy is weaksauce next to a fully-developed socialism, but we don't have a fully-developed socialism, so you're often stuck with the weaksauce.
And as an object-level defense: social democracy, as far as I can tell, does the best at aggregating value information about diverse domains of life and keeping any one optimization criterion from running roughshod over everything else that people happen to care about.
--
Every system that works is covert or overt meritocracy. Social democracy works, so ....
That would increase utility in the very short term, agreed. Of course, it would destroy the motivation to work, thus leading to a massive drop in utility shortly thereafter.
Well, "providing universal healthcare and welfare will lead to a massive drop in motivation to work" is a scientific prediction. We can find out whether it is true by looking at countries where this already happens - taxes pay for good socialised healthcare and welfare programs - like the UK and the Nordics, and seeing if your prediction has come true.
The UK unemployment rate is 5.6%; the United States' is 5.3%. Not a particularly big difference, nothing indicating that the UK's universal free healthcare has created some kind of horrifying utility drop because there's no motivation to work. We can take another example if you like. Healthcare in Iceland is universal, and Iceland's unemployment rate is 4.3% (it also has the highest life expectancy in Europe).
This is not an ideological dispute. This is a dispute of scientific fact. Does taxing people and providing universal healthcare and welfare lead to a massive drop in utility by destroying the motivation to work (and meaning that people don't work)? This experiment has already been performed - the UK and Iceland have universal healthcare and provide welfare to unemployed citizens - and, um, the results are kind of conclusive. The world hasn't ended over here. Everyone is still motivated to work. Unemployment rates are pretty similar to those in the US where welfare etc isn't very good and there's not universal healthcare. Your prediction didn't come true, so if you're a rationalist, you have to update now.
Scandinavia and the UK are relatively ethnically homogeneous, high-trust, and productive populations. Socialized policies are going to work relatively better in these populations. Northwest European populations are not an appropriate reference class to generalize about the rest of the world, and they are often different even from other parts of Europe.
Socialized policies will have poorer results in more heterogeneous populations. For example, imagine that a country has multiple tribes that don't like each other; they aren't going to like supporting each other's members through welfare. As another example, imagine that multiple populations in a country have very different economic productivity. The people who are higher in productivity aren't going to enjoy their taxes being siphoned off to support other groups who aren't pulling their weight economically. These situations are a recipe for ethnic conflict.
Icelanders may be happy with their socialized policies now, but imagine if you created a new nation with a combination of Icelanders and Greeks called Icegreekland. The Icelanders would probably be a lot more productive than the Greeks and unhappy about needing to support them through welfare. Icelanders might be more motivated to work and pay taxes if it's creating a social safety net for their own community, but less excited about working to pay taxes to support Greeks. And who can blame them?
There is plenty of valid debate about the likely consequences of socialized policies for populations other than homogeneous NW European populations. Whoever told you these issues were a matter of scientific fact was misleading you. This is an excellent example of how the siren's call of politically attractive answers leads people to cut corners during their analysis so it goes in the desired direction, whether they are aware they are doing it or not.
Generalizing what works for one group as appropriate for another is a really common failure mode through history which hurts real people. See the whole "democracy in Iraq" thing as another example.
I wasn't talking about providing people with universal healthcare. (That merely leads to a somewhat dysfunctional healthcare system). I was talking about taking so much from the "haves" that you "[prevent] them from going on holiday / buying a nice car".
Word of advice: try actually reading what I wrote before replying next time. Yes, I realize this is hard to do while one is angry; however, that's an argument for not using anger as your primary motivation.
Because the former is what a lot of other people using your rhetoric mean. And assuming that you mean what a lot of other people using your rhetoric mean is a reasonable assumption.
Also, even interpreting what you said as "I am angry about people beating LBGTQA+ individuals", it sounds like you are angry about it as long as it happens at all, regardless of its prevalence. Terrorism really happens too, but disproportionate anger against terrorism that ignores its prevalence has led to (or has been an excuse for) some pretty awful things.
--
False beliefs in equality are also responsible for millions of people being dead, and in fact have a much higher body count than racism.
--
An excellent way to stop people from being killed is to make them strong or get them protected by someone who is strong. Strong in a broad sense here, from courage to coolness under pressure etc.
Here is a problem. Being a strong protector correlates with holding that long list of transphobic and otherwise anti-social-justice attitudes, or bigotry, because the list reduces to either disliking weakness or distrusting difference / having strong ingroup loyalty, and these are related (a tribal warrior would have all of them).
Here is a solution: basically a moderate, reciprocal bigotocracy. Accept a higher-status, somewhat elevated, i.e. clearly unequal, social role for the strong-protector type, i.e. that of traditional men, in return for them actively protecting all the other groups from coming to serious harm. The other groups will have to accept a lower social status, which will be hard on their pride, but they will be safer. This could be made official, and perhaps more palatable, by conscripting straight males (everybody claiming genderqueer status getting an exemption) and expecting some kind of community-protection role after the service, in return for elevated social status and respect. Note: this was the basic model of most European countries until quite recently, with status-patriarchy and male privilege explicitly deriving from the sacrifice of conscription.
This is not easy to swallow. However, there seem to be few other options. You cannot have strong protectors who are 100% PC, because then they will have no fighting spirit. Without strong protectors, all you can hope for is a utopia that the whole Earth adopts; otherwise any basic tribe with gusto will take you over.
But I think a compromise model of less-than-complete equality, with a protector role provided in return, should be able to work, as this has always been the traditional civilized model. In recent years it was abandoned for being oppressive, and perhaps it was, but perhaps there is a way to find a compromise within it.
Actually falsely believing in equality of ability => being willing to kill to make equality happen. The chain of reasoning goes as follows:
1) As we know all people/groups are of equal ability, but group X is more successful than other groups; thus they must be cheating in some way, and we must pass laws to stop the cheating/level the playing field.
2) We passed laws to level the playing field but group X is still winning, they must be cheating in extremely subtle ways, we must pass more laws to stop/punish this.
3) Group X is still ahead, we must presume members of group X are guilty until proven innocent, etc.
No that's not what I'm saying. In the grandparent you said:
My point is that not being able to read IQ-by-race-and-gender studies is likely to lead to a repeat of Mao/Pol Pot. Thus being extremely concerned about being able to read them is a perfectly rational reaction.
Unfortunately, as we've just established you have very false ideas about how to go about doing that. Furthermore, since these same false ideas are currently extremely popular in academia, going there to study is unlikely to fix this.
The same is true for terrorism, but if someone came here saying "I'm really angry at terrorism and we have to do something", you'd be justified in thinking that doing what they want might not turn out well.
I'm sure we can agree that terrorism is bad, too. In fact, I'm sure we can agree that Islamic terrorism specifically is bad. So being really angry at it is likely to produce good results, right?
I am very angry about terrorism. I think terrorism is a very bad thing and we should eliminate it from the world if we can.
Being very angry about terrorism =/= thinking that a good way to solve the problem is to randomly go kill the entire population of the Middle East in the name of freedom (and oil). I hate terrorism and would prevent it if I could. In fact, I hate people killing each other so much, I think we should think rationally about the best way to eliminate it utterly (whilst causing fewer deaths than it causes) and then do that.
If you see someone else very angry about terrorism, though, wouldn't you think there's a good chance that they support (or can be easily led into supporting) anti-terrorism policies with bad consequences? Even if you personally can be angry at terrorism without wanting to do anything questionable, surely you recognize that is commonly not true for other people?
It's the same for racism.
Then why wasn't it included along with racism/sexism/etc. in your list of things you're angry about in the ancestor?
You do realize no one thinks that. In particular that wasn't the position Jiro was arguing against.
Well, right here is a nice example:
Would you care to be explicit about the connection between IQ-by-race studies and genocide..?
There is no connection. I'm not trying to imply a connection. The only connection is that they are both things possibly implied by the word "racism".
I'm trying to say that when I say "I oppose racism", intending to signal "I oppose people beating up minorities", and people misunderstand badly enough that they think I mean "I oppose IQ-by-race studies", it disturbs me. If people know that "I oppose racism" could mean "I oppose genocide", but choose to interpret it as "I oppose IQ-by-race studies", that worries me. Those things are completely different and if you think that I'm more likely to oppose IQ-by-race studies than I am to oppose genocide, or if you think IQ-by-race studies are more important and worthy of being upset about than genocide, something has gone very wrong here.
A sentence like "I oppose racism" could mean a lot of different things. It could mean "I think genocide is wrong", "I think lynchings are wrong", "I think people choosing white people for jobs over black people with equivalent qualifications is wrong", or "I think IQ by race studies should be banned". Automatically leaping to the last one and getting very angry about it is... kind of weird, because it's the one I'm least likely to mean, and the only one we actually disagree about. You seriously want to reply to "I oppose racism" with "but IQ by race studies are valid Bayesian inference!" and not "yes, I agree that lynching people is very wrong"? Why? Are IQ by race studies more important to your values than eliminating genocide and lynchings? Do you genuinely think that I am more likely to oppose IQ-by-race studies than I am to oppose lynchings? The answer to neither of those questions should be yes.
That's because most people who say "I oppose racism" mean the latter, and no one except you means the former. That's largely because most people oppose beating people up for no good reason and thus they don't feel the need to constantly go about saying so.
Where you live is more than just your immediate family.
Well technically one could define "sexism" and "racism" however one wants; however, in practice that's not how most people who oppose them use the words.
That's because usually the individual does fit the trend. In fact these days people tend to under update for fear of being called "racist" and/or "sexist".
So are you also angry about what happened to Watson?
Are you also angry about people beating people without those psychological issues in dark alleys? The latter is much more common. Are you angry about, say, what happened in Rotherham and the ideology that led to its cover-up? What about all the black-on-black violence in inner cities that no one seems to care about, and that cops don't want to stop for fear of being called "racist" when they disproportionately arrest black suspects?
Do you know what the word "hate" means? I've seen it thrown around in lots of situations where there is no actual hate involved. Furthermore, in the rare cases where I've seen actual hate: as you yourself said later, "emotion is arational", and hate is sometimes appropriate.
Yet earlier you said "I'm against beatings and murder in general, really." Do you see the contradiction here? Do you oppose some beatings and killings [your example wasn't murder, since it was legal] even if they increase utility?
--
I agree they "seem" that way if you only read the news superficially. If you pay closer attention, you notice that (at least in the US) fear of being perceived as "racist" is a much larger cause of people being beaten up in dark alleys (and occasionally in broad daylight). It is the reason why cops don't want to police high-crime (black) neighborhoods, and why programs that successfully reduce crime (like stop-and-frisk) are terminated.
I would argue the exact opposite. Hatred and anger evolved as methods that let us pre-commit to revenge/punishment by getting around the "once the offense has happened it's no longer in one's interest to carry out the punishment" problem. They do this by sabotaging one's reasoning process to keep one from noticing that carrying out the punishment is not in one's interest. Applied against things, i.e., anything that can't be motivated by fear of punishment, all one gets is the partially sabotaged reasoning process without any countervailing benefits.
In fact, I don't think it's possible to be angry at a 'thing' like a disease. In order to do so one must either anthropomorphize the disease or actually get angry at some people (like say those people who refuse to give enough money to research for curing it).
"The more you believe you can create heaven on earth the more likely you are to set up guillotines in the public square to hasten the process." -- James Lileks
--
That thing:
Besides, we're talking about "more likely", not "inevitably".
--
Ask them; I'm not an altruist. But I hear it may have something to do with the concept of compassion.
Historically, it correlates quite well. You want to help the "good" people and in order to do this you need to kill the "bad" people. The issue, of course, is that definitions of "good" and "bad" in this context... can vary, and rather dramatically too.
If we take the metaphor literally, setting up guillotines in the public square was something much favoured by the French Revolution, not by Napoleon Bonaparte.
Bollocks. You want to change the world and change is never painless. Tearing down chunks of the existing world, chunks you don't like, will necessarily cause suffering.
The French Revolution wanted to design a better world to the point of introducing the 10-day week. Napoleon just wanted to conquer.
--
Don't mind Lumifer. He's one of our resident Anti-Spirals.
But, here's a question: if you're angry at the Bad, why? Where's your hope for the Good?
Of course, that's something our culture has a hard time conceptualizing, but hey, you need to be able to do it to really get anywhere.
There is a price to be paid. If you use fury and anger too much, you will become a furious and angry kind of person. Embrace the Dark Side and you will become one with it :-/
Maybe :-) The reason you've met a certain... lack of enthusiasm about your anger for good causes is because you're not the first kid who wanted to help people and was furious about the injustice and the blindness of the world. And, let's just say, it does not always lead to good outcomes.
There is historical precedent for groups advocating equality, altruism, and other humanitarian causes to do a lot of damage and start guillotining people. You would probably be horrified and step off the train before it got to that point. But it's important to understand the failure modes of egalitarian, altruistic movements.
The French Revolution and the Russian Revolution / Soviet Union ran into these failure modes, where they started killing lots of people. After slavery was abolished in the US, around one quarter of the freed slaves died.
These events were all horrible disasters from a humanitarian perspective. Yet I doubt that the original French Revolutionaries planned from the start to execute the aristocracy, and then execute many of their own factions for supposedly being counter-revolutionaries. I don't think Marx ever intended for the Russian Revolution and Soviet Union to have a high death toll. I don't think the original abolitionists ever expected the bloody Civil War followed by 25% of the former slaves dying.
Perhaps, once a movement for egalitarianism and altruism got started, an ideological death spiral caused so much polarization that it was impossible to stop people from going overboard and extending the movement's mandate in a violent direction. Perhaps at first, they tried to persuade their opponents to help them towards the better new world. When persuasion failed, they tried suppression. And when suppression failed, someone proposed violence, and nobody could stop them in such a polarized environment.
Somehow, altruism can turn pathological, and well-intentioned interventions have historically resulted in disastrous side-effects or externalities. That's why some people are cynical about altruistic political attitudes.
My model is that these revolutions created a power vacuum that got filled up. Whenever a revolution creates a power vacuum, you're kinda rolling the dice on the quality of the institutions that grow up in that power vacuum. The United States had a revolution, but it got lucky in that the institutions resulting from that revolution turned out to be pretty good, good enough that they put the US on the path to being the world's dominant power a few centuries later. The US could have gotten unlucky if local military hero George Washington had declared himself king.
Insofar as leftist revolutions create worse outcomes, I think it's because since the leftist creed is so anti-power, leftists don't carefully think through the incentives for institutions to manage that power. So the stable equilibrium they tend to drift towards is a sociopathic leader who can talk the talk about egalitarianism while viciously oppressing anyone who contests their power (think Mao or Stalin). Anyone intelligent can see that the sociopathic leader is pushing cartoon egalitarianism, and that's why these leaders are so quick to go for the throats of society's intellectuals. Pervasive propaganda takes care of the rest of the population.
Leftism might work for a different species such as bonobos, but human avarice needs to be managed through carefully designed incentive structures. Sticking your head in the sand and pretending avarice doesn't exist doesn't work. Eliminating it doesn't work because avaricious humans gain control of the elimination process. (Or, to put it another way, almost everyone who likes an idea like "let's kill all the avaricious humans" is themselves avaricious at some level. And by trying to put this plan in to action, they're creating a new "defect/defect" equilibrium where people compete for power through violence, and the winners in this situation tend not to be the sort of people you want in power.)
--
Failure often comes with worse consequences than just an unchanged status quo.
You yourself are unlikely to start the French Revolution, but somehow, well-intentioned people seem to get swept up in those movements. Even teachers, doctors, and charity workers can contribute to an ideological environment that goes wrong; this doesn't mean that they started it, or that they supported it every step of the way. But they were part of it.
The French Revolution and guillotines is indeed a rarer event. But if pathological altruism can result in such large disasters, then it's quite likely that it can also backfire in less spectacular ways that are still problematic.
As you point out, many interventions to change the world risk going wrong and making things worse, but it would be a shame to completely give up on making the world a better place. So what we really want is interventions that are very well thought out, with a lot of care given to the likely consequences, taking into account the lessons of history from similar interventions.
You assume that studying politics in university tells you a good answer to that question. To me that doesn't seem true.
If you look at a figure like Julian Assange who actually plays and make meaningful moves, Assange didn't study politics at university.
Studying politics at Cambridge on the other hand will make it easier to become an elected politician in the UK. But that's not necessarily because of the content of lectures but because of networking.
It quite often happens that young people don't speak to older, more experienced people when making decisions about what to study. As your goal is making a difference in the world, it could be very useful to ask 80,000 Hours for coaching on that choice: https://80000hours.org/career-advice/ You might still come out of it wanting to go to the same program in Cambridge, but you will likely have better reasons for doing so and will be less naive.
--
Getting elected in the UK is certainly a valid move, but it comes with buying into the status quo to the extent that you hold opinions that make you fit into a major party.
I think the substantial discussion about Liquid Democracy doesn't happen inside the politics departments of universities but outside of them. A lot of 20th century and earlier political philosophy just isn't that important for building something new. It exists to justify the status quo and a place like Cambridge exists to justify the status quo.
Even inside Cambridge you likely want to spend time in student self-governance and its internal politics.
--
To some degree, the idea of a "Friendship and Science Party" has already been tried. The Mugwumps wanted to get scholars, scientists and learned people more involved in politics to improve its corrupt state. It sounds like a great idea on paper, but this is what happened:
According to this account, the more contact science has with politics, the more corrupted it becomes.
--
I think you missed what I see as the main point in "What they might have considered, however, was that there was no valve in their pipe. Aiming to purify the American state, they succeeded only in corrupting the American mind." Not surprising, because Moldbug (the guy quoted about the Mugwumps) is terribly long-winded and given to rhetorical flourishes. So let me try to rephrase what I see as the central objection in a format more amenable to LW:
The scientific community is not a massive repository of power, nor is it packed to the gills with masters of rhetoric. The political community consists of nothing but. If you try to run your new party by listening to the scientific community without first making the scientific community far more powerful and independent, what's likely to happen is that the political community makes a puppet of the scientific community, and then you wind up running your politics by listening to a puppet of the political community.
To give a concrete relatable figure: The US National Science Foundation receives about 7.5 billion dollars a year from the US Congress. (According to the NSF, they are the funding source for approximately 24 percent of all federally supported basic research conducted by America's colleges and universities, which suggests 30 billion federal dollars are out there just for basic research)
The more you promote "Do what the NSF says", the more Congress is going to be interested in using some of those billions of dollars to lean on the NSF and other similar organizations, so that you wind up promoting "Do what Congress says" at arm's remove. No overt dishonesty need be involved. Just little things like hiring sympathetic scientists, discouraging controversial research, asking for a survey of a specific metric, etc.
Suppose you make a prediction that a law will decrease the crime rate. You pass the law. You wait a while and see. Did the crime rate go down? Well, how are you measuring crime rate? Which crimes are you counting? To take an example discussed on Less Wrong a while ago, if you use the murder rate as proxy for crime rate over the past few decades, you are going to severely undercount crime because of improvements in medical technology that make worse wounds more survivable.
Obviously you can fix this particular metric now that I've pointed it out. But can you spot and fix such issues in advance faster and better than people throwing around 30 billion dollars and with a massive vested interest in retaining policy control?
When trying to solve something like whether P=NP, you can throw more and brighter scientists at the problem and trust that the problem will remain the same. But the problem of trying to establish science-based policy, particularly when "advocating loads of funding for science", gets harder as it gets more important and you throw more people at it. This is a Red Queen's Race where you have to keep running just to stay in place, because you're not dealing with a mindless question that has an objective answer floating out there, you're dealing with an opposed social force with lots of minds and money that learns from its own mistakes and figures out how to corrupt better, and with more plausible deniability.
Introduction comment, as requested.
I've been coming back to this site over and over again, for one or two years now I would say, for any number of topics, and today it dawned on me that there's something great about this site, the community / comments, and material, and that - maybe - I would like to become a part of it.
One email confirmation later, and the goal is achieved in its entirety.
Right, guys?
sigh
EDIT: One minor technical question... the comment system seems to be more or less a straight port from reddit, correct? But, unlike reddit, comment score starts at 0, it seems. Or did my other comment immediately receive a negative vote, seconds after going live?
Hail zanglebert.
The comments do indeed start at a null score. I also have noticed a "Powered by Reddit" icon in the lower right. That is the extent of my knowledge.
I joined a while ago but don't think I ever posted here. I'd lurked for quite some time, here and at various blogs a degree or so of separation away, since before that. I've mostly link-hopped my way around the sequences and various pieces of fiction, followed folks on Facebook, and recently realized we had a local LW meetup. I'm happy to answer any questions about me, but never really know what kind of information would be relevant to put in an introductory post, so instead I thought I'd make a proposal:
I've seen (for a while) a lot of activity regarding AI / Singularities / Existential Risk within these groups of people. For my own part, I have pretty much no background knowledge when it comes to that. So I was looking to really dig into the book Superintelligence as a way to get a rudimentary understanding of it all.
That said, I find that I definitely get a lot more out of learning when I have people to discuss it with. So, with a bit of encouragement, since this is the "get-to-know-you" thread, I figured I'd put out a call here to see if anyone might be interested in reading (or re-reading) the book along with me, being skype buddies for the process.
My current plan is to go through it a chapter at a time and discuss / do further research / etc before moving on. Message me if that sounds like something you might be interested in doing!
~Kim
Finally bit the bullet and made an account-- hi people! I've been "LW adjacent" for a while now (meatspace friends with some prominent LWers, hang around Rationalist Tumblr/ Ozy's blog on the sidelines, seems like everyone I know has read HPMOR but me), and figured I ought to take the plunge.
Call me Vivs. I'm in my early twenties, currently doing odd jobs (temping, restaurant work, etc.) in preparation to start a Master's this fall. I'm a historian, and would loooooove to talk history with any of you! (fans of Anne Boleyn/Thomas Cromwell/Victorian social peculiarities to the front of the line, please) I've always been that girl who pays waaaaay too much attention to whether the magic system in a fantasy novel is internally consistent and gets overly irritated if my questions are brushed off with "But magic isn't real," so I have a feeling I'll like the way this site thinks, even if I'm way out of the median 'round these parts in a lot of ways.
Do you find D&D's cast-and-forget system consistent? It was borrowed from Jack Vance's Dying Earth novels, but those novels felt really weird to me.
No! I actually find D&D's system super-frustrating, but then I hate having luck-based elements in magic systems. :P
Hi!
I just want to say I found Stefan Zweig's The World of Yesterday really insightful about that. I used to think that kind of prudishness came from religion. According to Zweig, it was actually almost the opposite: it came from Enlightenment values, as in trying really, really hard to always act rationally (not 100% in our sense, but in the sense of deliberately, thoughtfully, dispassionately) and considering sexual instincts a far too dangerous, uncontrollable, passionate, "irrational" force. Which suggests that Freud was the last Victorian, so to speak.
Hi back!
Actually, interestingly, some Victorian prudishness was encouraged by Victorian feminists, weirdly enough. Old-timey sexism said that women were too lustful and oozed temptation, hence why they should be excluded from the cool-headed realms of men (Arthurian legend is FULL of this shit, especially if Sir Gallahad is involved). Victorian feminists actually encouraged the view of women as quasi-asexual, to show that no, having women in your university was not akin to inviting a gang of succubi to turn the school into an orgy pit (this was also useful, as back then, there were questions on the morality of women). A lot of modern sexism actually has its roots not in anything ancient, but in a weird backlash of Victoriana.
Feminists of that era were practically moral guardians. In the USA, they allied closely with temperance movements and managed the double victory of securing women's right to vote and prohibiting alcohol.
I can't track the reference right now, but I recall reading a transcript of a Parliamentary debate where they decided not to extend anti-homosexuality legislation to women on the grounds that women couldn't help themselves.
LOL. To quote Nobel Laureate Tim Hunt as of a couple of weeks ago:
This quote seems to have been intended as a joke and was taken out of context. A very flawed accuser: Investigation into the academic who hounded a Nobel Prize winning scientist out of his job reveals troubling questions about her testimony
One therefore wonders at man/man, woman/man and woman/woman troubles, which statistically should account for the majority of academic, er, troubles.
He's asserting that most troubles between men and women fall into a particular category. It might be that man/man troubles rarely fall into that category, and because most of that category is missing, are less numerous overall.
Well... Having once been infatuated with my supervisor and more than once reduced by him to tears even when my infatuation wore off, I can say this:
It's not people falling in love with people that really reduces group output. Being in love I worked like I would never do again.
It's people growing disappointed with people/goals, or having an actual life (my colleague quit her PhD when her husband lost his job, + they had a kid), or - God forbid! - competing for money. Now that's what I would call trouble.
Very good point! It's a ubiquitous stereotype, but it's not a priori clear to me that workplace romance leads to a net decrease in productivity, and I haven't seen real evidence for it. Google Scholar yielded nothing, it either ignores the search word "productivity" or just yields papers that report the cliché.
Uggghhhh.... that guy. I may not be a scientist, but I saw red when I read that.
I found that particular piece of stupidity particularly amusing since my field is upwards of 55 percent female (at my level - the old guard of people who have been in it since the 60s or 70s is more male) and I have worked in labs where I was the only man.
Welcome to LW! I suspect you'll find a lot of company here, at least as regards thinking in unwarranted detail about fictional magic systems.
What is this "unwarranted" thing you're talking about?
X-)
Thanks! I actually had a VERY long side discussion in an undergrad history course about whether stabbing a person possessed by a dybbuk creates a second dybbuk...
Hi, I'm new here. I found this site while looking for information about A.I. I read a few articles and couldn't help but smile to myself and think "wasn't this what the Internet was supposed to be?" I had no idea this site existed, and I'm honestly glad to have found stacks of future reading; you know that feeling. I never really post on sites and would usually have lurked myself silly, but I've been prompted into action by a question. I posted this to reddit in the shower thoughts section because it seemed appropriate, but I'd like to ask you (more).
I was reading about the Orthogonality thesis, and about Oracle A.I.s as warnings and attempted precautions against potentially hostile outcomes. I've recently finished Robots and Empire and couldn't help but think that something like the Zeroth Law could further complicate trying to restrain A.I.s with benign laws like "do no harm" or seemingly innocent tasks like acquiring paper clips. It seemed to me that trying to stop A.I.s from harming us while also completing another task would always end up with us in the way. So I thought perhaps we should give the A.I. a goal that would not benefit from violence in any way. Try to make it Buddha-like: to become all-knowing and one with all things? Would a statement like that even mean anything to a computer? The one criticism I received was "what would be the point of that?" I don't know. But I'm curious.
What do you think?
I have bad news for you. People have described ideas for an AI that only seeks knowledge (though I can't find the best link to explain it now). I think this design would calmly kill us all to see what would happen, if we'd somehow prevented it from dropping an anvil on its own head.
To "become one with all things" does not seem sufficiently well-specified to stop either from happening. In general, if we can reasonably interpret the goal as something that's already true, then the AI will do nothing to achieve it (nothing being the most efficient action).
I by no means thought I had stumbled upon something; I was just curious to see what other people thought. "To be one with all things" was admittedly a very ambiguous statement. I think what I was trying to get at was that if the A.I. caused harm in some way, it would by definition be prevented from completing its primary goal. Buddha was the only example I could think of; perhaps Plato's or Nietzsche's version of the übermensch might fit better? Thank you for replying. I look forward to being a part of this community.
Thank you for this article. I'm finding it still difficult to navigate the site in terms of comments and posts. Would it be possible to edit some more explanation in the "site mechanics" portion of this article to include an explanation of what open threads are and how to use them?
Open threads are for things that aren't important enough for either a top-level post or a discussion post. You use them just like you used the welcome thread: leave a comment and let people respond :)
Hello people.
I am brand new to this site and really to the topic of rationality in general. A friend recommended HPMOR to me a few months ago and I loved it. I then read Cialdini's 'Influence' on recommendation from these forums, and I am now reading Rationality: from AI to Zombies.
My background is in science, having studied oceanography at university, graduating about ten years ago. I am currently thinking about training as a science teacher. I look forward to becoming better acquainted with this topic, and being involved in the discussions.
Welcome! What level were you considering teaching at?
Welcome! :D
Welcome!
Hello, all!
I’ve lurked this site on and off for at least five years, probably longer. I believe I first ran into it while exploring effective altruism. Articles that had a definite impact on my thinking included those on anchoring, priming, akrasia, and Newcomb's problem. Alicorn's Luminosity series is also up there, and I keep perpetual bookmarks to "The Least Convenient Possible World" and "Avoiding Your Belief's Real Weak Points."
I earned a B.A. in history, worked for a couple years in a financial planning office, then ended up on the rather weird track of becoming a professional piano accompanist. It turned out to be a far more financially and logistically feasible career move than the other grand idea I attempted at the time (convincing GiveWell I'd be an awesome hire). So piano is what I'm doing now. (GiveWell is admittedly still my longshot/backburner plan B, but I'm focusing all professional development on the music end of things right now).
Some things I've got more than a passing interest in, which I think fit the LW ethos:
Taubman approach. An approach to keyboard technique (and prevention of repetitive-motion injury) that has earned the recognition and interdisciplinary interest of the scientific and medical communities. My personal experience is, "This shit works: it saved my wrists and my music career," and the data indicate my experience isn't just anecdote or placebo effect.
Evaluating the effectiveness of charitable-giving interventions. I went to a highly conservative/libertarian college, where, if I wanted to donate to or support any poverty-alleviation program, I'd better be ready with a 95-point defense of my choice. Or else. It's been a continuing interest of mine ever since, appealing equally well to both my cynicism and idealism.
Finding secular alternatives to the community-building structures, motivational structures, and self-examination/self-change disciplines of religion.
Classical stoicism. Thus far I've found its framework and mindhacks to be a balanced, practical fit for my personality and temperament. I especially appreciate how it hasn't yet sent me into any extreme, detrimental pitfalls as I've tried to apply it. I'd be interested in meeting other people who are trying to methodically apply it to their lives, but I get the feeling we're probably a pretty quiet and weird bunch.
I likely won't comment here much, but I wanted to at least finally make an account, introduce myself, and let you all know I've found the site valuable over the years. I've been making a more concerted effort recently to seek out and connect with individuals who value things I value, and I figured it was high time to drop by the Less Wrong community, as a part of that.
Hello LW World!
I have been reading the writings of Eliezer Yudkowsky for about 2 years now, ever since a friend of mine introduced me to HPMOR. It continues to blow my mind that there is an entire movement and genre dedicated to reason. It's provided a depth of thought that I've always felt different from others for enjoying, and now I can happily say that there's a community for it.
I am currently an unemployed veteran and college dropout seeking to solve the financial problems which prevent me from currently completing my degree. I am halfway finished with an ultrasound tech school and I am also studying programming as a hobby. I'm proud of a lot of my work so far, from making the beginnings of an awesome game on Scratch to completing an advanced challenge on Hackerrank (technically it's incomplete, but it's only the timeout limit on large inputs that I have yet to find a solution for). I'm also learning web design skills on FreeCodeCamp where I have found very supportive mentors and hope to get a basic foot-in-the-door level of skills to gain employment.
What I REALLY wanted to do but failed at due to financial hardship is to work in neuroscience research. I'm more interested in the cybernetic side of turning science fiction into real scientific discoveries, but AI research is not a concept that I would turn away from, as I believe it has mutually beneficial applications to connect with neuroscience. Fingers crossed, I can either accomplish my goals toward neuroscience sooner rather than later or I can be lucky enough to survive to the point where aging is cured and widely distributed, giving me more than a lifetime to complete my goals.
The reason I'm posting today in particular is that I wanted to know whether the Reason-, Cyberpunk-, and Transhumanist-themed poetry I have created would have a place here. I'm thinking that I would like to have feedback from others who enjoy thinking critically about life. That said, the poetry I've made is an art form, and I would only expect to get feedback from rationalists to the extent that Reason is an art form. Perhaps any concern of that nature is really the result of a fallacious view of Reason that still clings to me, the "Hollywood Reason" concept that Eliezer described.
Regardless, what I have created is intended to be thought provoking and entertaining for individuals who often think of the intricate concepts that are on LessWrong. Any feedback that would help me to make them more thought provoking and entertaining would be a great help to improve them. Any advice on if there is an acceptable space for such a thing as well as advice on where to begin is appreciated in advance.
Welcome!
I don't think there's an official rule about poetry. Speaking as a person with over 9000 karma, my intuition is that it'd be well received if it has some novel ideas/perspective and was linked to from an open thread.
Hello everyone.
My name is Kabelo Moiloa, and I graduated from the Anglo-American School of Moscow three weeks ago. My deep interests are math, computer science, and physics; in fact, I might consider doing a series of posts here on Homotopy Type Theory, since I've been going through the HoTT Book. I first came to this website probably four years ago, so I don't remember well how it was then. As I recall, I came here soon after I deconverted from Catholicism, and I have found the discussions and content here fascinating ever since. For example, although I had already rejected theistic morality before reading the articles here, Fake Explanations allowed me to explain why: the idea that morality is "intrinsic to the nature of God" is no more explanatory than "my confusion about this metal plate is explained by the phrase heat conduction." Additionally, the emphasis here on beating akrasia and on achievement led me to pursue commitment devices, productivity systems, etc., which have improved my ability to achieve my goals, although unfortunately I only pursued these late in my senior year of high school. I was also exposed to Cognito Mentoring, which was quite useful.
I remember you, glad to hear it :-).
I love teaching, especially interacting with my students and their thinking, and I love philosophy, especially ethics. Fittingly, I'm a philosophy teacher. I also enjoy politics, history, biology, and the great outdoors.
Hey everyone!
I'm a long-time lurker of this site, but I haven't posted anything before. I've read all the sequences twice over the past few years, along with almost all non-sequence posts. The list of all posts was really not in an obvious location, but I eventually managed to find it!
So I'm new to the idea of actually communicating with people over the internet; I've never actually been a member of any forum before. Though I have a Reddit account, I've only made about ten posts in the year that I've been there. It's really weird; I often find myself thinking I have a response to something I read, then thinking "too bad I can't communicate with them!", completely forgetting that no, wait, I have an account expressly for that purpose.
I've decided that this pseudo-voyeurism of online communities has gone on long enough and decided to join. I don't know if I'll have anything to contribute, as I'm pretty critical of the value of my own ideas, to the point that I once tried to start a blog but decided that everything I could ever want to say has already been said, and I deleted the blog after one post. Maybe I need to impose a comment quota on myself?
In any case, I'm a physics grad student who mostly works in biophysics. I'm also interested in pure mathematics, philosophy, and computer science / artificial intelligence, though I procrastinate too much and don't really know more than the average CS minor. I plan on changing that at some point (he said, ironically).
Hello, everyone!
LW came to my attention not so long ago, and I've been committed to reading it since that moment about a month ago. I am a 20-year-old linguist from Moscow, finishing my bachelor's. Due to my age, I've been pondering the usual questions of life for the past few years, searching for my path, my philosophy: essentially, the best way for me to live.
I studied a lot of religions and philosophies, and they all seemed really flat, essentially for the reasons stated in some articles here. I came close to something resembling a nice way to live after I read Atlas Shrugged, but something about it bothered me, and after a thorough analysis of that philosophy I decided to take some good things from it and move on, as I had done many times before.
I found this gem of a site through Reddit and Roko's basilisk (is it okay if I say it here? I heard discussion was banned). I am deeply into the whole idea of rationality and nearly all the ideas presented on this site, but something really bothers me here, too.
The thing is, it is implied that altruism and rationality go hand in hand; maybe I missed some important articles that could explain to me why?
Let's imagine a hypothetical scenario: there is a guy, Steve, who really does not feel anything when he helps other people or does other generally "good" things; he does them only because his philosophy or religion tells him to. Say this guy is introduced to the ideas of rationality and thus is no longer bound by his philosophy/religion. And what if Steve also does not feel bad about other people suffering (or even takes pleasure in it)?
What I wanted to say is that rationality is a gun that can point both ways, and it is a good thing that LessWrong "sells" this gun with a safety mechanism (if it really is a "safety mechanism"; once again, maybe I missed something really critical that explains why altruism and "being good" is the most rational strategy).
In other words, Steve does not really care about humanity; he cares about his own well-being and will utilize all the knowledge he gets just to meet his ends (people are different, aren't they? and ends are different, too).
Or take another case: an average rationalist, Jack, estimates that his own net gain will be significantly bigger if he hurts or kills someone (factoring in his emotions and feelings about overall humanity's net gain, and all other possible factors). Does that mean he must carry on? Or is it a taboo here? Or maybe it is a problem of this site's demographics and nobody has even considered this scenario (which I really doubt).
I feel that I dive too deep into metaphors, but I am not yet a good writer. I hope you understood my thought and can make me less wrong. :)
edit: fixed formatting
Welcome, Ozyrus.
This is moral philosophy you're getting into, so I don't think that there's a community-wide consensus. LessWrong is big, and I've read more of the stuff about psychology and philosophy of language than anything else, rather than the stuff on moral philosophy, but I'll take a swing at this.
It seems that your implicit question is, "If rationality makes people more effective at doing things that I don't value, then should the ideas of rationality be spread?" That depends on how many people there are with values inconsistent with yours, and on how much it makes people do things that you do value. I would contend that a world full of more rational people would still be a better world than this one, even if it means that a few sadists are more effective for it. There are murderers who kill people with guns, and this is bad; but there are many, many more soldiers who protect their nations with guns, and the existence of those nations allows much higher standards of living than would otherwise be possible, and this is good. There are more good people than evil people in the world. But it's also true that sometimes people can, for the first time, follow their beliefs to their logical conclusions and, as a result, do things that very few people value.
Jack doesn't have to do anything. If 'rationality' doesn't get you what you want, then you're not being rational. Forget about Jack; put yourself in Jack's situation. If you had already made your choice, and you killed all of those people, would you regret it? I don't mean "Would you feel bad that all of those people had died, but still think that you did the right thing?" I mean: if you could go back and do it again, would you do it differently? If you wouldn't change it, then you did the right thing. If you would change it, then you did the wrong thing. Rationality isn't a goal in itself; rationality is the way to get what you want.
Excellent answer! Yes, you deduced the implicit question correctly. I also agree that this is a rather abstract field of moral philosophy, though I did not see that at first. However, I don't think your argument that the world would be a better place with everyone being rational holds up, especially this point:
Even if there are, there is no proof that after becoming "rational" they will not become "bad" (scare quotes because "bad" is not defined sufficiently, but that'll do). I can imagine some interesting prospects for experiments in this field, by the way. I also think the result will vary if the subject is placed in a society of only rationalists vs. a usual society, with "bad" actions carried out more often in the second case, as there is much less room for cooperation.
But of course that is a pointless discussion, as the situation is not really based on reality in any way and we can't really tell what would happen. :)
That is not so. There is a certain overlap between the population of rationalists and the population of altruists, people from this set intersection are unusually well represented on LW. But there is no "ought" here -- it's perfectly possible to be a non-altruist rationalist or to be a non-rational altruist.
Hi all,
I'm a recently graduated aerospace engineer. First came upon LW via HPMOR a couple years ago, been through the Sequences once since then, currently going through Rationality: A to Z mostly as a refresher.
Gravitated toward aerospace as a sort of proto existential-risk-mitigation effort, but after speaking with Nick Beckstead via 80,000 Hours and comparing the potential of various fields to mitigate x-risk within the next ~100 years, I've discounted space development relative to other fields and am currently more open to other avenues.
Very interested in learning more computer science, and applied mathematics more generally, but part of what makes me strongly prefer LW over other communities interested in the same is the strong focus on effective, economical implementation of ideas.
Hi. Curt Doolittle. I follow LW via Feedly, but today someone asked me to comment on a LW article. I write analytic philosophy in epistemology (specifically truth), ethics, law, politics, and science. I'm reasonably well known and easy to find on the web.
Here is my response to the recent post on Signaling by Outliers (Hipster analogy). You can use it as a test of worthiness.
All, thank you for asking me to respond. I'll convert the author's criticism (and somewhat humorous demonstration) of signaling from moral justification to scientific language, and I think it will be clearer:
1) No radical fits into the center of the distribution - the statement is tautological, not insightful.
2) We all signal, and signaling is necessary for evolutionary reproductive selection.
3) The presumption that not fitting into some locus of the median of the distribution matters is a democratic one - that we are equal, rather than (as I argue) constituting a division of cognitive labor: perception, evaluation, knowledge, and advocacy. (Humans divide cognition more than other creatures do because we specialize in cognition.)
4) Our theories do tend to justify our social positions (signaling), but then, we would not have the information necessary to theorize about any other set of interests, now would we?
5) The origin of theories is irrelevant (justificationism is false), and therefore a theory produced by any subset of a polity can be judged only by criticism - it's irrelevant who comes up with a theory.
The vast difference between pseudoscience and science in ethics, law, politics, and economics is captured in those few words.
Now, to state the positive version: the solution to the fallacy of the Enlightenment hypothesis of equality of ability, interest, and value is captured in these additional points:
6) Economic velocity (wealth) is determined by the degree of suppression of parasitism (free riding/imposed costs). This suppression eliminates transaction costs.
7) Central power originates to centralize parasitism and increase material costs, by suppressing local parasitism and transaction costs. Once centralized, these can be incrementally eliminated, if and only if an institutional means of following rules can replace personal judgement.
8) The only means of producing institutional rules to replace personal judgement (the provision of 'decidability') is independent, common, evolutionary law resting upon a prohibition on parasitism/free-riding/imposed costs (negatives), codified as property rights (positives): productive, warrantied, fully informed, voluntary transfer (exchange), free of negative externalities.
9) Language evolved to justify (morality), negotiate (deceive), and rally and shame (gossip), and only tangentially and late to describe (truth). Truth as we understand it is an invention, and an unnatural one - which is why it is unique to the West, and why it has taken philosophers so long to understand it. However, Westerners evolved a military epistemology because they relied upon self-financing warriors participating voluntarily, as well as the jury and truth-telling. (The marginal difference in intellectual ability was apparently not important - they were all smart enough - and such testimony was in itself 'training'.)
10) We cannot expect or demand truth from people unless they know how to produce it, i.e., education in what I would consider the religion of the West: "the true, the moral, and the beautiful." So I consider this education 'sacred', not just utilitarian.
11) We cannot demand truth and law from people if it is against their interests; i.e., the only universal political system is nationalism, because groups can act truthfully internally and truthfully externally, and can use trade negotiations to neutralize competitive differences. And under nationalism, individuals cannot escape paying the cost of transforming their own societies and themselves, nor lay the burden of doing so upon other societies.
12) Commons are a profound competitive advantage. Territorial, institutional, normative, genetic, physical, and economic (industrial) commons are a profound advantage to any group. The West is the most successful producer of commons, so commons are even more important to the West, and we must provide a means of producing them. The difference between the market for private goods and services (where competition in production is a good incentive) and corporate (public) goods (where we must prevent privatization of gains and socialization of losses) requires that we provide monopoly protection of those goods from consumption, but does not require that we provide monopoly contribution to them. Commons require only that the people willing to pay for them do so; otherwise there is no demonstrated preference for that commons. Insurance is a commons, and I will leave that for another time. Returns on investment (dividends) are the product of commons; I will leave that for another time as well. The central point is that we can produce a market for common goods using government just as we do in the market for private goods. But law and commons are two different things, and knowing how to construct the common law, there is no reason whatsoever to think that government should be capable of producing law. It cannot. Law is. It cannot be created. Only identified.
(This is also probably the most profound 1000 words on politics that you will be able to find at this moment in time)
propertarianism
Curt Doolittle The Propertarian Institute
Which one?
Hi All, I live at the LW Boston house, the Citadel. My undergrad and grad was in Biology, and I am switching into programming. I am interested in psychology and cognitive biases. I value self-improvement and continuous learning. I recently started blogging at https://evolvingwithtechnology.wordpress.com.
Hello, my name is Daniel.
I've wanted to join the rationality community for a little while now, and I finally worked up the courage after a brief but informative discussion with Anna Salamon, CFAR's executive director (who was as kind as I was nervous).
I'm working on finishing up a B.S. in Electrical Engineering, and I plan on continuing to a doctorate in some branch of decision or control theory. I also study philosophy, fiction writing, and computer science.
Since becoming aware of rationality in general, and Eliezer Yudkowsky's way of making everything make sense, I've gotten pretty heavily into cognitive psychology and metacognition.
To be frank, I understand that I'm a rank amateur in the field of rationality, but I'm looking forward to trying to get better. So if you're downvoting me, or even upvoting me, explaining why in a comment or message would be extremely helpful, so I can take the time to reinforce my positive cognitive pathways and prune my negative ones.
See you in the threads!
Well met!
My name is Fox, and I am an actor and magician...well...In actuality, I guess those are both the same thing. I know how you all love concision, so I'll try again...* ahem *
Well met!
My name is Fox, and I am a liar. Empathetic to a fault, highly spiritual, and emotionally driven--still an emo boy at heart--I live as far from consciously as it gets. My main passions are girls, music, and service to others. Core values are: love, kindness, beauty, passion, immersion, and evolution.
For the past year I have studied and practiced magick. It is very real to me and has been the lens through which I view the universe as well as my primary method for navigating life. I have enjoyed many experiences and even some progress, living as such. ...for a time.
Lately, I have just been working and preparing to reenter school. I find this "being an adult" business baffling and struggle with finances. Throw intangibles into the mix and it's untenable. Which brings me to why I am here: I need to implement rational thinking as my default state of mind, to help me with goal-oriented decision making and with getting a grasp on the most elusive concept in the world to me: self-discipline.
-Reverend Mother Gaius Helen Mohiam during the testing of Paul Atreides in Dune.
So LW...will you be my Gom Jabbar??
Immersion!!! Do you clean your objectives with xylene?
Prediction: you will have a very confused and maybe fun trip.