(This is the fifth incarnation of the welcome thread; once a post gets over 500 comments, it stops showing them all by default, so we make a new one. Besides, a new post is a good perennial way to encourage newcomers and lurkers to introduce themselves.)
A few notes about the site mechanics
Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).
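As a quick illustration (the exact dialect supported here may differ slightly, so check the "Help" primer for the full set), a comment using common Markdown constructs might look like this:

```markdown
*italics*, **bold**, and `inline code`
[link text](http://example.com)
> a quoted line from the comment you're replying to
```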
You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. This immediate easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.
However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic: it often means that several members of the community think you're missing an important point or making a mistake in reasoning, not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.
Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.
Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.
EXTRA FEATURES:
There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: in the top-level forum, posts require the author to have 20 karma in order to publish, and any upvotes or downvotes on the post are multiplied by 10. Thus there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion.
A few notes about the community
If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood and what you might still need to take some time explaining.
If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)
If you want to write a post about a LW-relevant topic, awesome! I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma; honestly, you don't know what you don't know about the community norms here.)
If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter
A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.
A list of some posts that are pretty awesome
I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:
More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.
Welcome to Less Wrong, and we look forward to hearing from you throughout the site!
Note from orthonormal: MBlume and other contributors wrote the original version of this welcome post, and I've edited it a fair bit. If there's anything I should add or update on this post (especially broken links), please send me a private message—I may not notice a comment on the post. Finally, once this gets past 500 comments, anyone is welcome to copy and edit this intro to start the next welcome thread.
Comments (1750)
Hi Everyone! I'm AABoyles (that's true most places on the internet besides LW).
I first found LW when a colleague mentioned That Alien Message over lunch. I said something to the effect of "That sounds like an Arthur C. Clarke short story. Who is the author?" "Eliezer Yudkowsky," he said, and sent me the link. I read it, and promptly forgot about it. Fast forward a year, and another friend posts the link to HPMOR on Facebook. The author's name sounded very familiar. I read it voraciously. I subscribed to the Main RSS feed and lurked for a year.
I joined the community last month because I wanted to respond to a specific discussion, but I've been having a lot of fun since I got here. I'm interested in finding ways to achieve the greatest good (read: reducing the number of lost Disability Adjusted Life Years), including Effective Altruism and Global Catastrophic Risk Reduction.
Hello community.
I've been aware of LW for a while, reading individual posts linked in programmer/engineering hangouts now and then, and I independently came across HPMOR in search of good fanfiction. But the decision to un-lurk myself came after I attended a CFAR workshop (a major positive life change) and realized that I want to keep being engaged with the community.
I'm very interested in anti-aging research (both from the effective altruism point of view, and because I find the topic really exciting and fascinating) and want to learn about it in as much depth as time permits. So far I have only come across science articles about single discoveries in specialized fields (molecular biology, brain science, ...), but I haven't found a good resource (a book, a Coursera course, whatever) where I can learn the necessary medicine/biology background and see how it all comes together in the current state of the art (I'm thinking of something similar to all the remarkable physics books on the market). Any pointers are appreciated.
Hi, I'm N. Currently a systems engineer. Lurked for some time and finally decided to create an account. I am interested in mathematics, computer science, and typography. Fonts can give me happiness or drive me crazy.
I am currently in SoCal.
This account is used by a VA to post events for the Melbourne Meetup group. Comment is to accrue 2 karma to allow posting.
I chose more_wrong as a name because I'm in disagreement with a lot of the lesswrong posters about what constitutes a reasonable model of the world. Presumably my opinions are more wrong than opinions that are lesswrong, hence the name :)
My rationalist origin story would have a series of watershed events, but as far as I can tell, I never had any core beliefs to discard in order to become rational, because I never had any core beliefs at all. I have no use for them and never picked them up.
As far as identifying myself as an aspiring rationalist, the main events that come to mind would be: 1. Devouring as a child anything by Isaac Asimov that I could get my hands on. In case you are not familiar with the bulk of his work, most of it is scientific and historical exposition, not his more famous science fiction; see especially his essays for rationalist material.
Working on questions in physics like "Why do we call two regions of spacetime close to each other?", that is, delving into foundational physics.
Learning about epistemology and historiography from my parents, a mathematician and a historian.
Thinking about the thinking process itself. Note: Being afflicted with neurological and psychological conditions that shut down various parts of my mentality, notably severe intermittent aphasia, has given me a different perspective on the thinking process.
Making some effort to learn about historical perspectives on what constitutes reason or rationality, and not assuming that the latest perspectives are necessarily the best.
I could go on but that might be enough for an intro.
My hope is to both learn how to reason more effectively and, if fortunate, make a contribution to the discussion group that helps us to learn the same as a community. mw
Hello there! I really enjoyed HPMOR, because it expanded on some of my thoughts and made me feel less alone. I joined now to post a realization about Harry's (and my) personality. See my 1st post.
Hi! I read some articles here a few years ago, decided they were good, and moved on. I think I am a pretty practical person, and I have some ways of deciding things that are utility based (and some that are not).
I would like to ask the community for some help with a couple of reading recommendations:
Thanks very much, and I hope to be optimizing more things soon. It's nice to meet you!
Hi, I'm reposting my introduction here from 2 days ago, as it was moved for some reason, perhaps accidentally. Anyway, hello, my name is Zoltan Istvan. I'm a transhumanist, futurist, journalist, and the author of the philosophical novel "The Transhumanist Wager." I've been checking out this site for some time, but decided to create an account a few days ago to become closer to the community. I thought I'd start by posting an essay I recently wrote, which sums up some of my ideas. Feel free to share it if you like, and I look forward to interacting here. Cheers.
"When Does Hindering Life Extension Science Become a Crime—or even Genocide?"
Every human being has both a minimum and a maximum amount of life hours left to live. If you add together the possible maximum life hours of every living person on the planet, you arrive at a special number: the optimum amount of time for our species to evolve, find happiness, and become the most that it can be. Many reasonable people feel we should attempt to achieve this maximum number of life hours for humankind. After all, very few people actually wish to prematurely die or wish for their fellow humans' premature deaths.
In a free and functioning democratic society, it's the duty of our leaders and government to implement laws and social strategies to maximize these life hours that we want to safeguard. Regardless of ideological, political, religious, or cultural beliefs, we expect our leaders and government to protect our lives and ensure the maximum length of our lifespans. Any other behavior cuts short the time human beings have left to live. Anything else becomes a crime of prematurely ending human lives. Anything else fits the common legal term we have for that type of reprehensible behavior: criminal manslaughter.
In 2001, former President George W. Bush restricted federal funding for stem cell research, one of the most promising fields of medicine in the 21st Century. Stem cells can be used to help fight disease and, therefore, can lengthen lives. Bush restricted the funding because his conservative religious beliefs—some stem cells came from aborted fetuses—conflicted with his fiduciary duty of helping millions of ailing, disease-stricken human beings. Much medical research in the United States relies heavily on government funding and the legal right to do the research. Ultimately, when a disapproving President limits public resources for a specific field of science, the research in that field slows down dramatically—even if that research would obviously lengthen and improve the lives of millions.
It's not just politicians that are prematurely ending our lives with what can be called "pro-death" policies and ideologies. In 2009, on a trip to Africa, Pope Benedict XVI told journalists that the epidemic of AIDS would be worsened by encouraging people to use condoms. More than 25 million people have died from AIDS since the first cases began being reported in the news in the early 1980s. In numerous studies, condoms have been shown to help stop the spread of HIV, the virus that causes AIDS. This makes condoms one of the simplest and most affordable life extension tools on the planet. Unfathomably, the billion-person strong Catholic Church actively supports the idea that condom usage is sinful, despite the fact that such a malicious policy has helped sicken and kill a staggering number of innocent people.
Regrettably, in 2014, America continues to be permeated with an anti-life extension culture. Genetic engineering experiments in humans often have to pass numerous red-tape-laden government regulatory bodies in order to conduct any tests at all, especially at publicly funded universities and research centers. Additionally, many states still ban human reproductive cloning, which could one day play a critical part in extending human life. The current US administration is also culpable. The White House is simply not doing enough to extend American lifespans. The US Government spends just 2% of the national budget on science and medical research, while its defense budget is over 20%, according to a 2011 US Office of Management and Budget chart. Does President Obama not care about this fact, or is he unaware that not actively funding and supporting life extension research indeed shortens lives?
In my philosophical novel The Transhumanist Wager, there is a scene which takes place outside of a California courthouse where transhumanist activists are holding up a banner. The words inscribed on the banner sum up some eye-opening data: "By not actively funding life extension research, the amount of life hours the United States Government is stealing from its citizens is thousands of times more than all the American life hours lost in the Twin Towers tragedy, the AIDS epidemic, and the Vietnam War combined. Demand that your government federally fund transhuman research, nullify anti-science laws, and promote a life extension culture. The average human body can be made to live healthily and productively beyond age 150."
Some longevity experts think that with a small amount of funding—$50 billion—targeted specifically towards life extension research and ending human mortality, average human lifespans could be increased by 25-50 years in about a decade's time. The world's net worth is over $200 trillion, so the species can easily spare a fraction of its wealth to gain some of the most valuable commodities humans have: health and time.
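A quick sanity check of the proportion being claimed here. The dollar figures are the essay's own; the arithmetic is the only addition:

```python
# Figures quoted in the essay above; this just computes the fraction.
funding = 50e9            # proposed life-extension funding: $50 billion
world_net_worth = 200e12  # claimed world net worth: $200 trillion

fraction = funding / world_net_worth
print(f"{fraction:.3%} of world net worth")  # 0.025% of world net worth
```

So at the essay's numbers, the proposed outlay is one four-thousandth of total world wealth.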
Unfortunately, our species has already lost a massive amount of life hours; billions of lives have been unnecessarily cut short in the last 50 years because of widespread anti-science attitudes and policies. Even in the modern 21st Century, our evolutionary development continues to be significantly hampered by world leaders and governments who believe in non-empirical, faith-driven religious doctrines—most of which require the worship of deities whose teachings totally negate the need for radical life extension science. Virtually every major leader on the planet believes their "God" will give them an afterlife in a heavenly paradise, so living longer on planet Earth is just not that important.
Back in the real world, 150,000 people died yesterday. Another 150,000 will cease to exist today, and the same amount will disappear tomorrow. A good way to reverse this widespread deathist attitude should start with investigative government and non-government commissions examining whether public fiduciary duty requires acting in the best interest of people's health and longevity. Furthermore, investigative commissions should be set up to examine whether former and current top politicians and religious leaders are guilty of shortening people's lives for their own selfish beliefs and ideologies. Organizations and other global leaders that have done the same should be scrutinized and investigated too. And if fault or crimes against humanity are found, justice should be administered. After all, it's possible that the Catholic Church's stance on condoms will be responsible for more deaths in Africa than the Holocaust was responsible for in Europe. Over one million AIDS victims died in Africa last year alone. Catholicism is growing quickly in Africa, and there will soon be nearly 200 million Catholics on the continent. Obviously, the definition of genocide needs to be reconsidered by the public.
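To put the essay's daily mortality figure on an annual scale (the 150,000-per-day number is the essay's; the multiplication is the only addition):

```python
# Scale the essay's quoted daily death toll to a yearly figure.
deaths_per_day = 150_000
deaths_per_year = deaths_per_day * 365
print(f"{deaths_per_year:,}")  # 54,750,000 (roughly 55 million per year)
```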
As a civilization of advanced beings who desire to live longer, better, and more successfully, it is our responsibility to put government, religious institutions, big business, and other entities that endorse pro-death policies on notice. Society should stand ready to prosecute anyone that deliberately promotes agendas and actions that prematurely end people's useful lives. Stifling or hindering life extension science, education, and practices needs to be recognized as a legitimate crime.
Hi, My name is Zoltan Istvan. I'm a transhumanist, futurist, journalist, and the author of the philosophical novel "The Transhumanist Wager." I've been checking out this site for some time, but decided to create an account today to become closer to the community. I thought I'd start by posting an essay I recently wrote, which sums up some of my ideas. Feel free to share it if you like, and I hope you find it moving. Cheers.
Do you apply this stirring declaration to the beginning of a life as well as to the end of one?
First, let me just say that the essay is designed to provoke and challenge, while also aiming to move the idea forward in the hope that life extension can be taken more seriously. I realize the incredible difficulties, and violations of freedom, that the ideas in the essay would entail. But to answer your question, I tend to concentrate on "useful" lives, so the declaration would not apply to the beginning of life, but rather to those lives that are already well under way.
A tiny minority group such as transhumanists should not make threats against the powers that be.
He's making it himself, not as a spokesperson of the movement. However, as a transhumanist myself, I can't say I disagree with him. Morally speaking, when does not only actively hindering, but choosing not to vehemently pursue, life extension research constitute a threat to our lives?
Maybe it is time (or if not, it will be very soon) for transhumanism and transhumanists to enter the public sphere, to become more visible and vocal.
We have the capacity, for the first time in human history, to potentially end death, and not for our progeny but for ourselves, now. Yet we are disorganized, spread thin, essentially invisible in terms of public consciousness. People are having freakouts about something as mundane as Google Glass; we are talking about the cyberization or gross genetic manipulation of our bodies, increasing life spans to quickly approach "indefinite," etc., and not in some distant future, but in the next twenty or thirty years.
We are being held back by lack of funding, poor cohesion, and a general failure of imagination, and that is largely our own fault for being content to be quiet, to remain a fringe element, optimistically debating and self-congratulating in nooks and niches of various online communities, bothering and being bothered by few if any.
I believe it is our moral imperative, now that it is possible, to pursue life extension with every cent and scrap of resources we have available to us. To do otherwise is reprehensible.
http://www.nickbostrom.com/fable/dragon.html
Let Mr. Istvan make his threats, as long as it gets people talking about us.
This means taking a consequentialist view of public relations strategy. Imagine that group X advocates Y, and you know little about X; based on a superficial analysis, Y seems somewhat silly. How would your opinion of group X change if you found that members of this group want to "prosecute anyone" who stands in the way of Y?
Hi, Thanks for the response. I should be clear; transhumanists are not making the threat. I'm making it myself. And I'm doing it as publicly and openly as possible so there can be no misunderstanding:
http://www.psychologytoday.com/blog/the-transhumanist-philosopher/201401/when-does-hindering-life-extension-science-become-crime
http://ieet.org/index.php/IEET/more/istvan20140131
The problem is that lives are on the line. So I feel someone needs to openly state what seems to be quite obvious. Thanks for considering my thoughts.
My name is Morgan. I was brought here by my brother and have been lurking for a while. I've read most of the Sequences, which have cleared up some of my confused thinking. There were things that I didn't think about because I didn't have an answer for them. Free will and morality used to confuse me, and so I never thought much about them, since I didn't have a guarantee that they were answerable.
Less Wrong has helped me get back into programming. It has helped me learn to think about things with precision, and to understand how a cognitive algorithm feels from the inside, in order to dissolve questions.
I am going to join this community and improve my skills. Tsuyoku Naritai.
Greetings!
I'm Brian. I'm a full-time police dispatcher and part-time graduate student in the marriage and family therapy/counseling master's degree program at the University of Akron (in northeast Ohio). Before I began studies in my master's program, I earned a bachelor's degree in emergency management. I am an atheist and skeptic. I think I can trace my earliest interest in rationality back to my high school days, when I began critically examining theism (generally) and Catholicism (in particular) while taking an elective religion class called "Questions About God." It turned out the class raised more questions than answers, for me.
I found LessWrong by way of browsing CFAR's website and wishing that I had the money to attend one of their workshops. With that being said, I haven't been lurking around LW proper for very long. Thus, I anticipate it will take some time for me to become acquainted with norms of this platform. However, after briefly browsing around, I get the sense that this is a thoughtful community of people that value rationality. That's exciting to me! I hope to get more involved, as time permits, and to eventually become a valuable contributor.
Hello,
I'm a 34-year-old programmer/entrepreneur in Romania, with a long-standing interest in rationality - long before I called it by that name. I think the earliest name I had for it was "wisdom", along with a desire to find a consistent, repeatable way to obtain it. I must admit that at the time I didn't imagine it would be so complicated.
I spent some of my 20s believing I already knew everything, and then I made a decision that in retrospect was the best I ever made: never to look at the price when buying a book, only at the likelihood of finishing it. That's something I strongly recommend even (or especially) to cash-starved students. The first book happened to be Nassim Taleb's The Black Swan, which was another huge stroke of luck. Not only did it expose me to some pretty revolutionary concepts and destroy my illusions of omniscience, but Taleb is a frequent name-dropper and provided a lot of leads for future reading material. And the rest, as they say, is history.
Introduction aside, I'm a long-time lurker and I actually came here with a request for comments. There is an often-mentioned thought experiment in the Sequences that compares a lot of harm done to one person (like torture) with minimal harm done to a lot of people, like a mote in the eye of each of a billion people. I've always found it a bit disturbing, but couldn't escape the conclusion that harm is additive and comparable. Except I now think it's not.
I've recently read Antifragile and found the concept of "hormesis": small harm done to a complex system generates an over-compensatory response, resulting in overall improvement. Simple examples: cold showers or lifting weights. So it's possible for small harm done to a lot of people to have a net positive effect overall.
Two holes I see in this argument: some harms, like going to the gym, create hormesis, while motes in the eye don't. Also, you could just scale up the harm: use a big enough mote that the overall effect is a net negative, perhaps causing some permanent damage. But both holes are plugged by the fact that complex systems will always find ways to compensate. Small cornea damage gets compensated for at the processing level, muscle damage turns into new muscle, neuron damage means rerouting, etc. There are tipping points and limits, but they're still counter-intuitive. Killing the n-th neuron will put somebody in a wheelchair, but their happiness level still bounces back. There is harm, but it's very non-linear with respect to the original damage. So I can't help but conclude that harm is simply non-additive and non-comparable, at least not easily.
Hello,
I'd like to get some opinions about my future goals.
I'm 21 and I'm a second-year student of engineering in Prague, Czech Republic, focusing mainly on math and then physics.
My background is not stunning - I was born in '93 and attended a sports-focused primary school and then a general high school. Until my second year of high school, I behaved like an idiot, with below-average results in almost everything - paradoxically, except for extraordinary "general study presupposes" (whatever that means). My not-so-bad IQ - according to a test I took when I was 15 - is about 130 points. When I was 17, I realized that something about the world needed to be done. I started to study, mainly math and physics. I was horrible at it - I was at a big disadvantage because I had missed the basics and wasn't able to recognize it. Anyway, I tried (though, unfortunately, not as hard as I should have), reached a so-so level, and got into a technical university. Here I tried really hard, achieved relatively good results, and got into the best maths-focused student group. I'm below average in this group (about 30 students) and my results are satisfactory. I'm quite popular thanks to collaborating on some non-study events for my schoolmates. I also created a presentation about engineering for high school students and distributed it among faculty workers and students involved in outreach.
About 10 years ago I obtained the ECDL, which started my curiosity about informatics - but nothing special: I taught myself HTML and "computer administration" for regular usage. I was also very interested in economics, as my father works in that area. I actively did cross-country skiing and played piano and trombone.
I have high charisma, authority, and the ability to organize people and some bigger events, which I was usually asked to prepare (the graduation prom, matriculation, etc.). I have good reasoning skills and the ability to negotiate even under heavy pressure and stress. People usually enjoy time with me and appreciate me for my honesty, empathy, and "cold-think" reasoned solutions, which most of the time turn out to be the best possible. I've been in a healthy relationship for two years. My family provides a good background for my activities and supports me, including financially. My expenses per month are not more than 300 USD, including accommodation in an apartment shared with other university students (two of them from my university and field), food, and social activities.
Currently, apart from my school activities, I'm also attending a kind of philosophy group every week, where we usually discuss some topic in epistemology, relationships, culture, religion, etc.; we read philosophical works (Plato), deal with art (classical music or paintings), or write voluntary essays. I'm really interested in discussions on these topics and I try to develop my reasoning skills as often as I can. For example, I recently contacted a priest from a local church with whom I want to discuss some religion-based questions. I'm teaching myself psychology (the last book I read was Kahneman's Thinking, Fast and Slow), rationality (I've started reading the LW Sequences), and programming. I enjoy using open-source software on my Arch Linux laptop, and I've now dived into Python as a scripting language. I'm also developing a website for my mother using Django, and I've signed up for a statistical research task on data mining in Python (pandas, numpy, scikit-learn...) or R. At school I also have C++ courses. I'm not the most talented or generally best mathematician or programmer, but I have quite good learning (and also teaching) skills.
I've chosen my "path" - I'd like to do what's right and true and seek the truth whenever possible. I feel that I'm not getting everything I need (e.g. from my school) to change the world into a better place. I could do more. I can't decide where to focus and how to divide my attention and resources. Should I aggressively self-study the Sequences? Should I focus on maths and algorithms, or on biases? Should I try to develop my social skills?
And the second question is simple: "Are there any Czechs who are interested in meetups in Prague?"
Thank you
Hi,
I'm a philosopher (postdoc) at the London School of Economics who recently discovered Less Wrong. I am now reading through lots of old posts, especially Yudkowsky's and lukeprog's philosophy-related material, which I find very interesting.
I think lukeprog is right when he points out that the general thrust of Yudkowsky's philosophy belongs to a naturalistic tradition often associated with Quine's name. In general, I think it would be useful to situate Yudkowsky's ideas vis-à-vis the philosophical tradition. I hope to be able to contribute something here at some point (though I should point out that I'm not an expert in the history of philosophy).
lukeprog argues for these ideas in two excellent articles:
http://lesswrong.com/lw/4vr/less_wrong_rationality_and_mainstream_philosophy/
http://lesswrong.com/lw/4zs/philosophy_a_diseased_discipline/
I agree with most of what is said there, and am myself very critical of mainstream analytical philosophy. It also seems to me that the overall program advocated here - to let psychological knowledge permeate all philosophical arguments in a very radical way - is very promising. Though there are philosophers who make use of psychology, do experiments, etc., few let it influence their thinking as radically as it is done here.
The site seems very interesting in other respects as well. I am presently reading up on cognitive science (I found this site after googling Stanovich's Rationality and the Reflective Mind, which I have now read) and am grateful for the info on this subject gathered on Less Wrong.
Greetings.
I'm a long-time singularitarian and (intermediate) rationalist looking to be a part of the conversation again. By day I am an English teacher in a suburban American high school. My students have been known to Google me. Rather than self-censor, I am using a pseudonym so that I will feel free to share my (anonymized) experiences as a rationalist high school teacher.
I internet-know a number of you in this community from the early years of the Singularity Institute. I fleetingly met a few of you in person once, perhaps. I used to write on singularity-related issues, and was a proud "sniper" of the SL4 mailing list for a time. For the last 6-7 years I've mostly dropped off the radar by letting "life" issues consume me, though I have continued to follow the work of the key actors from afar with interest. I allow myself some pride for any small positive impact I might have once had during a time of great leverage for donors and activists, while recognizing that far too much remains undone. (If you would like to confirm your suspicions of my identity, I would love to hear from you with a PM. I just don't want Google searches of my real name pulling up my LW activity.)
High school teaching has been a taxing path, along with parenting, and it has been all too easy to use these as excuses to neglect my more challenging (yet rewarding) interests. I let my inaction and guilt reinforce each other until I woke up one day, read HPMoR, and realized I had long-ago regressed into an NPC.
Screw that.
Other background tidbits: I'm one of those atheist ex-mormons that seem so plentiful on this page (since 2000ish). I'm a self-taught "hedge coder" who has successfully used inelegant-but-effective programming in the service of my career. I feel effective in public education, which is not without its rewards. But on some important levels teaching at an American public high school is also a bit like working security at Azkaban, and I'm not sure how many more years I'll be able to keep my patronus going.
I've been using GTD methodologies for the last eight years or so, which has been great for letting me keep my mind clear to work on important tasks at hand; however, my dearest personal goals (which involve writing, both fiction and non) live among some powerful Ugh Fields. If I had been reading LW more closely, I probably would've discovered the Pomodoro method a lot sooner. This is helping.
My thanks to all who share their insights and experiences on this forum.
Hi everybody,
My name is Eric, and I'm currently finishing up my last semester of undergraduate study and applying to Ph.D. programs in cognitive psychology/cognitive neuroscience. I recently became interested in the predictive power offered by formal rational models of behavior after working in Paul Glimcher's lab this past summer at NYU, where I conducted research on matching behavior in rhesus monkeys. I stumbled upon Less Wrong while browsing the internet for behavioral economics blogs. After reading a couple of posts, I decided to join.
Some sample topics that I like reading about and discussing include intertemporal choice, risk preferences, strategic behavior in the context of games, reinforcement learning, and the evolution of cooperation. I look forward to chatting with some of you!
Hello, LW,
One of my names is holist. I am 45. Self-employed family man, 6 kids, 2 dogs, 1 cat. Originally a philosopher (BA+MA from Sussex, UK), but I've been a translator for 19 years now... it is wearing thin. Music and art are also important parts of my life (have sold music, musically directed a small circus, have exhibited pictures), and recently, with dictatorship being established here in Hungary, politics seems increasingly urgent, too. I dabble in psychotherapy and call myself a Discordian. Recently, I started thinking about doing a PhD somewhere. My topic is very general: what has caused the increasingly hostile relationship between individual and culture, and what are the remedies available? My window of opportunity is a few years away: I am intermittently thinking about possible supervisors. I have a blog at holist.hu, some of it is in English and it has a lot of pictures. The discordians at PD kicked me out, I found the Secular Café to be indescribably boring, I am a keen MeFite, but I'd like something a little more discussioney. A friend pointed me at LW. I hope it works out.
For pointers, here's a slightly random list of some books that are very important to me: Fiction: Mason and Dixon by Thomas Pynchon, Ulysses by James Joyce, Karnevál by Béla Hamvas, Moby Dick by Herman Melville, The Diamond Age by Neal Stephenson. Non-fiction: The Continuum Concept by Jean Liedloff, The Facts of Life by R. D. Laing, Children of the Future by Wilhelm Reich, The Drama of the Gifted Child by Alice Miller, The Story of B by Daniel Quinn, Tools for Conviviality by Ivan Illich
I guess I'll start by lurking, but you never know :)
Hello, my name is Luke. I'm an urban planning graduate student at Cleveland State University, having completed an undergrad in philosophy at the University of New Hampshire a year ago. It was the coursework I did at that school which led me to be interested in the nebulous and translucent topic of rationality, and I'm happy to see so many people involved and interested in the same conversations I'd spend hours having with classmates. Heck, the very question I was asking myself in something of an ontological sense--am I missing the trees for the forest--is what led me here, specifically to Eliezer's article on the fallacies of compression, which was somewhat helpful. Suffice it to say, I tend to think I'm not missing the trees for the forest, and that in fact the original form of the idiom remains true for most other people, though thankfully, not many here.
I'm deeply interested in epistemology, metaphysics, aesthetics, and metaethics, all of which I attempt to approach in systemic ways. As for what led me to consider myself a rationalist in these endeavors...I'm not sure I do. In fact, I'm not sure anyone can or should think of themselves a rationalist, considering that basic beliefs, other than solipsism, are inductive and inferential, and thus fallible. We could argue in circles forever (as others have) what constitutes knowledge, but any definition seems, in my view, to be arbitrary and thus non-universal and therefore, again, fallible--even mathematical knowledge and formal logic.
Granted, I don't sit in a corner rocking back and forth sucking my thumb, driven mad by the uncertainty of it all, but I also operate with the knowledge that whatever I deem rational behavior and thought processes only seem rational because I've pre-decided what constitutes rational behavior (i.e., circularity, or coherentism at best...feeling like I'm writing a duplicate of a different post). Of course, all that seems like too easy an exit from a number of hard problems, so I keep reading to make sure that, in fact, I oughtn't be rocking back and forth in a corner sucking my thumb for the utility of it, turning into a kind of utility monster. An absurdist I remain, but one with a pretty strong intuitive consequentialist metaethical framework which allows me to find great joy in the topics covered on LW.
Hi, I have a site tech question. (Sorry if this is the wrong place to post that!—I couldn't find any other.)
I can't find a way to get email notifications of comment replies (i.e. when my inbox icon goes red). If there is one, how do I turn it on?
If there isn't one, is that a deliberate design feature, or a limitation of the software, or...?
Thanks (and thanks especially to whoever does the system maintenance here—it must be a big job.)
There's no way I know of to get email notifications, and I've looked enough that I'm pretty confident one doesn't exist.
No idea if it's a deliberate choice or a software limitation.
G'day
As you can probably guess, I'm Alex. I'm a high school student from Australia and have been disappointed with the education system here for quite some time.
I came to LW via HPMoR which was linked to me by a fellow member of the Aus IMO team. (I seriously doubt I'm the only (ex-)Olympian around here - seems just the sort of place that would attract them). I've spent the past few weeks reading the sequences by EY, as well as miscellaneous other stuff. Made a few (inconsequential) posts too.
I have very little in the way of controversial opinions to offer (relative to the demographics of this site), since just about all of the unusual positions it takes are ones I already agreed with (e.g. atheism) or that seemed pretty obvious to me after some thought (e.g. transhumanism). Maybe it's just hindsight bias.
I'm slightly disappointed with the ban on political discussion. I do agree that it should not be mentioned when not relevant, but it seems a shame to waste this much rationality in one place by forbidding its users to use it where it's most needed. A possible compromise would be to create a politics discussion page to discuss the pros and cons of particular ideologies. (If one already exists, point me to it.) A reason often cited is that there are other sites to discuss politics - if any do so rationally, I'd like to see them.
It is a relief to be somewhere where I don't have to constantly take into account inferential distance, and I shall try to make the most of this. I only resolve to write just that which has not been written.
Welcome!
There have been previous political threads, like here, here, or here. If you search "politics," you'll find quite a bit. Here was my response to the proposal that we have political discussion threads; basically, I think politics is a suboptimal way to spend your time. It might feel useful, but that doesn't mean it is useful. Here's Raemon's comment on the norm against discussing politics. Explicitly political discussion can be found on MoreRight, founded by posters active on LessWrong, as well as on other blogs. (MoreRight is part of 'neoreaction', which Yvain has recently criticized here, for example.)
I don't see what you mean by the 'pros and cons' of holding a particular ideology. Ideologies are, generally, value systems - they define what is a pro and what is a con.
I must add that not all political discussion is a mud-flinging match between the Cyans and the Magentas.
For example, public choice theory is a bona fide intellectual topic, but it's also clearly political.
I would also argue that knowing things like the scope of NSA surveillance is actually useful.
I'm curious why you'd divert from the historically compelling example of the Blues and the Greens.
It's about politics, but the methodology is not political. The part of politics that's generally fun for people is putting forth an impassioned defense of some idea or policy. That's generally not useful on LessWrong unless it's about a site policy - and even then, the passion probably doesn't help.
Sure.
I strongly associate the Greens with, well, the Greens -- a set of political parties in Europe and the whole environmentalist movement.
Blue is a politically-associated color in the US as well.
True, but LW is a VERY unrepresentative sample :-) and maybe we could do a bit better. You're right that discussing the "pros and cons" of ideological positions is not a good idea, but putting "Warning: mindkill" signs around a huge area of reality and saying "we just don't go there" doesn't look appealing either.
Hello everyone,
My name is Mathias. I've been thinking about coming here for quite a while and I finally made the jump. I was introduced to this website/community through Harry Potter and the Methods of Rationality, and I've been quite active on its subreddit (under the alias of Yxoque).
For a while now, I've been self-identifying as an "aspiring rationalist" and I want to level up even further. One way I learn quickly is through conversation, so that's why I finally decided to come here. Also because I wanted to attend a meetup, but it felt wrong to join one for a community I'm not a member of.
As for info on myself, I'm not sure how interesting that is. I've recently graduated as a Bachelor in Criminology at the University of Brussels and I'm currently looking for a suitable job. I still need to figure out my comparative advantage.
I'm also reading through a PDF of all the blog posts; currently I'm on the Metaethics sequence. In my free time I'm (slowly) working on annotating HPMOR and convincing people to write Rational!Animorphs fanfiction.
Hi there, I am Andrew, living in Hungary and studying to be an IT physicist some day, in an utmost lazy way. I've just recently discovered this site, and so far I can't really believe what I'm seeing. I had been thinking myself about a website whose main purpose is basically making its users wiser and/or more rational - about which I'll put my main question below; if you could answer it, that would be great. Also, excuse my English; it's not my native language.
I believe rationality can be expressed as the set of "right" algorithms in a given context. The rightness of the algorithms in this case depends on the goal which the context defines.
My question is: are the majority of people here generally conscious of the significance of finding the "fancy word for global, important and unique" goal - or, as I shall put it, "pure wisdom" - OR do they (or you) just feel the necessity to lay down the healthy plain soil for building the "tower of wisdom", and care less about the actual adventure of building it?
My main point is that even though our best tool for gaining wisdom is rational thinking - and a little extra - how we react and how far we go on this road as rational beings is a function of each of our unique perspectives on life. Are there individuals here whose (main) point in life is to possess the right perspective on life?
For example, if we accepted all the sciences and set aside our big hopes, like the one for an afterlife, we might come to the conclusion that any needs other than the primal ones that evolved in our ancestors over millions of years are flawed or delusional. Not taking into account the alternative of nihilism - its reflection on humans - or physicalism as its sort of parallel philosophy is neither irrational nor rational, but unwise, I believe, even though the philosophy itself doesn't bring or promise much materialistic profit.
Here, my goals in the first place would - I mean: will - be to observe, learn, and then adjust my knowledge and beliefs, since I'm inevitably going to bump into other people's belief systems which are built on rational (in the broadest sense) and healthy ground. Looking forward to it. And I'm looking forward to getting to know other people's struggles in achieving similar goals, and their personalities. Have a nice day :)
There are individuals here whose main point in life is to ensure that the first superhuman artificial intelligence possesses the right perspective of life. Is that close enough? :-)
One thing that distinguishes LW rationalism from other historic rationalist movements is a strong interest in transhumanism and the singularity. Historically, wisdom has usually been about accepting the limited and disappointing nature of life, whether your attitude is stoic, epicurean, or bodhisattvic. But the cultural DNA of LW includes nanotechnology, space travel, physical immortality, mind uploading, and computer-brains the size of whole solar systems. There is a strong tendency to think that rationality consists of remaking nature in the image of your goals, rather than vice versa, and that the struggle is to determine which values will shape the universe. This is a level of Promethean ambition more common in apocalyptic movements than in rationalist movements.
This aspect of LW comes and goes in prominence.
Hello!
I'm Jennifer; I'm currently a graduate student in medieval literature and a working actor. Thanks to homeschooling, though, I do have a solid background and abiding interest in quantum physics/pure mathematics/statistics/etc., and 'aspiring rationalist' is probably the best description I can provide! I found the site through HPMoR.
Current personal projects: learning German and Mandarin, since I already have French/Latin/Spanish/Old English/Old Norse taken care of, and much as I personally enjoy studying historical linguistics and old dead languages, knowing Mandarin would be much more practical (in terms of being able to communicate with the greatest number of people when travelling, doing business, reading articles, etc.)
Hey, another homeschooled person! There seem to be a lot of us here. How was your experience? Mine was the crazy religious type, but I still consider it to have been an overall good thing for my development relative to other feasible options.
Me three-- I thought I was the only one, where are we all hiding? :)
My experience was, overall, excellent - although my parents are definitely highly religious. (To be more precise, my father is a pastor, so biology class certainly contained some outdated ideas!) However, I'm in complete agreement - relative to any other possible options, I don't think I could have gotten a better education (or preparation for postsecondary/graduate studies) any other way.
Yeah, I got taught young earth creationism instead of evolution. But despite this, I think I was better prepared academically than most of my peers.
I am a celibate pedophile. That means I feel a sexual and romantic attraction to young girls (3-12) but have never acted on that attraction and never will. In some forums, this revelation causes strong negative reactions and a movement to have me banned. I hope that's not true here.
From a brief search, I see that someone raised the topic of non-celibate pedophilia, and it was accepted for discussion. http://lesswrong.com/lw/67h/the_phobia_or_the_trauma_the_probem_of_the_chcken/ Hopefully celibate pedophilia is less controversial.
I have developed views on the subject, though I like to think that I can be persuaded to change them, and one thing I hope to get here on LessWrong is reason-based challenges. Hopefully others will find the topics informative as well. In the absence of advice on a better way to proceed, I plan to make posts in Discussion now and then on various aspects of the topic.
I'm in my 50s and am impressed with the LessWrong approach in general and have done my best to follow some of its precepts for years. I have read most of the core sequences.
Salutations!
My name is Aaron. I'm a college junior on the tail end of the cycle of Bar Mitzvah to New Atheist to info-omnivorous psychology geek to attempted systems thinker. Prospective Psychology/Cognitive Science major at Yale, very interested in meeting other rationalists in the New Haven area. I'm on the board of the Yale Humanist Community, I'm a research assistant in a neuroscience lab, and I do a lot of writing.
Big problems I've been thinking a lot about: Why are most people wildly irrational in the amount of time they're willing to devote to information search (that is, reducing uncertainty around uncertain decisions)? How can humanists and rationalists build a compelling community that serves adults of all ages as well as children? What sorts of media tend to encourage the "shift" from bad thinking to good thinking, and/or passive to active thinking (NPC vs. hero mindset, sort of--this one is complicated), and how can we get that media in the hands of more people?
I read HPMoR without really noticing Less Wrong, but have been linked to a few posts over the years. Last spring, I found "Privileging the Question", which rang so true that I went on to read the Sequences and much of the rest. I was never very certain in my philosophy before finding the site, but now I'm pretty sure I at least know how to think about philosophy, which is nice.
The next few years hopefully involve me getting a job out of college that will allow me to build savings while donating plenty, while aligning me to take a position in some high-upside sector of tech or in the rationalist arena, but a lot of people say that, and I'm very unsure about what will actually happen if I flunk my case interviews. Still, the future will be better than the past regardless, and that thought keeps me going (as does knowing how many people are out there working to avoid future-is-worse-than-past scenarios).
Hi, I am Olga: female, 40, a programmer, mother of two. I got here from HPMoR. I can't yet define myself as a rationalist, but I am working on it. Some rationality questions, used in real-life conversations, have helped me tackle some personal and even family issues. It felt great. In my "grown-up" role I am deeply concerned with bringing up my kids with their thought processes as undamaged as I possibly can, and maybe even with balancing out some system-taught stupidity. I am at the start of my reading list on the matter, including the LW Sequences.
Welcome!
Many people here call themselves aspiring rationalists.
Hi! Everyone below is superbly impressive! I'm a physicist in my second year of teaching English, and that's as much rationality as I can provide at the moment. I'm looking to relocate to China in an effort to be superhuman. I would really appreciate a few pointers on teaching institutions to avoid/embrace.
Excellent reading here, thanks! Nas
Hi. I've been a distant LW lurker for a while now; I first encountered the Sequences sometime around 2009, and have been an avid HP:MOR fan since mid-2011.
I work in computer security with a fair bit of software verification as flavoring, so the AI confinement problem is of interest to me, particularly in light of recent stunts like arbitrary computation in zero CPU instructions via creative abuse of the MMU trap handler. I'm also interested in applying instrumental rationality to improve the quality and utility of my research in general. I flirt with some other topics as well, including capability security, societal iterated game theory, trust (e.g., PKI), and machine learning; a meta-goal is to figure out how to organize my time so that I can do more applied work in these areas.
Apart from that, lately I've become disillusioned with my usual social media circles, in part due to a perceived* uptick in terrible epistemology and in part due to facing the fact that I use them as procrastination tools. I struggle with akrasia, and am experiencing less of it since quitting my previous haunts cold turkey, but things could still be better and I hope to improve them by seeking out positive influences here.
*I haven't measured this. It's entirely possible I've become more sensitive to bad epistemology, or some other influence is lowering my tolerance to bad epistemology.
Hey, my name is Roman. You can read my detailed bio here, as well as some research papers I published on the topics of AI and security. I decided to attend a local LW meet up and it made sense to at least register on the site. My short term goal is to find some people in my geographic area (Louisville, KY, USA) to befriend.
Hi Roman. Would you mind answering a few more questions that I have after reading your interview with Luke? Carl Shulman and Nick Bostrom have a paper coming out arguing that embryo selection can eventually (or maybe even quickly) lead to IQ gains of 100 points or more. Do you think Friendly AI will still be an unsolvable problem for IQ 250 humans? More generally, do you see any viable path to a future better than technological stagnation short of autonomous AGI? What about, for example, mind uploading followed by careful recursive upgrading of intelligence?
Hey Wei, great question! Agents (augmented humans) with an IQ of 250 would be superintelligent with respect to our current position on the intelligence curve and would be just as dangerous to us, unaugmented humans, as any sort of artificial superintelligence. They would not be guaranteed to be Friendly by design, and would be as foreign to us in their desires as most of us are from severely mentally retarded persons. For most of us (sadly?) such people are something to try to fix via science, not someone whose wishes we want to fulfill. In other words, I don't think you can rely on an unverified (for safety) agent (even one with higher intelligence) to make sure that other agents with higher intelligence are designed to be human-safe. All the examples you give start by replacing humanity with something non-human (uploads, augments) and proceed to ask the question of how to save humanity. At that point you have already lost humanity, by definition. I am not saying that is not going to happen; it probably will. Most likely we will see something predicted by Kurzweil (a merger of machines and people).
I think if I became an upload (assuming it's a high fidelity emulation) I'd still want roughly the same things that I want now. Someone who is currently altruistic towards humanity should probably still be altruistic towards humanity after becoming an upload. I don't understand why you say "At that point you already lost humanity by definition".
Wei, the question here is "would" rather than "should", no? It's quite possible that the altruism I endorse as a part of me is related to my brain's empathy module, much of which might break if I cannot relate to other humans. There are of course good fictional examples of this, e.g. Ted Chiang's "Understand" - http://www.infinityplus.co.uk/stories/under.htm - and, ahem, Watchmen's Dr. Manhattan.
Logical fallacy: Generalization from fictional evidence.
A high-fidelity upload who was previously altruistic toward humanity would still be altruistic during the first minute after awakening; their environment would not cause this to change unless the same sensory experiences would have caused their previous self to change.
If you start doing code modification, of course, some but not all bets are off.
Well, I did put a disclaimer by using the standard terminology :) Fiction is good for suggesting possibilities; you cannot derive evidence from it, of course.
I agree on the first-minute point, but do not see why it's relevant, because there is the 999999th minute by which value drift will take over (if altruism is strongly related to empathy). I guess upon waking up I'd make value preservation my first order of business, but since an upload is still evolution's spaghetti code it might be a race against time.
+1 for linking to Understand; I remembered reading the story long ago, but had forgotten the link. Thanks for reminding me!
We can talk about what high-fidelity emulation includes. Will it be just your mind? Or will it be mind + body + environment? In the most common case (with an absent body), the most typically human feelings (hunger, thirst, tiredness, etc.) will not be preserved, creating a new type of agent. People are mostly defined by their physiological needs (think of Maslow's pyramid). An entity with no such needs (or with such needs satisfied by abundant virtual/simulated resources) will not be human and will not want the same things as a human. Someone who is no longer subject to human weaknesses or our relatively limited intelligence may lose all allegiance to humanity, since they would no longer be a part of it. So I guess I define "humanity" as comprised of standard/unaltered humans. Anything superior is no longer human to me, just as we think of ourselves first and foremost as Homo sapiens, not as Neanderthals.
Insofar as Maslow's pyramid accurately models human psychology (a point on which I have my doubts), I don't think the majority of people you're likely to be speaking to on the Internet are defined in terms of their low-level physiological needs. Food, shelter, physical security -- you might have fears of being deprived of these, or might even have experienced temporary deprivation of one or more (say, if you've experienced domestic violence, or fought in a war), but in the long run they're not likely to dominate your goals the way they might for, say, a Clovis-era Alaskan hunter. We treat cases where they do as abnormal, and put a lot of money into therapy for them.
If we treat a modern, first-world, middle-class college student with no history of domestic or environmental violence as psychologically human, then, I don't see any reason why we shouldn't extend the same courtesy to an otherwise humanlike emulation whose simulated physiological needs are satisfied as a function of the emulation process.
I don’t know about you, but for me only a few hours a day are devoted to thinking or other non-physiological pursuits; the rest goes to sleeping, eating, drinking, sex, physical exercise, etc. My goals are dominated by the need to acquire resources to support the physiological needs of myself and my family. You can extend any courtesy you want to anyone you want, but you (a human body) and a computer program (software) don’t have much in common as far as being from the same group is concerned. Software is not humanity; at best it is a partial simulation of one aspect of one person.
It seems to me that there are a couple of things going on here. I spend a reasonable amount of time (probably a couple of hours of conscious effort each day; I'm not sure how significant I want to call sleep) meeting immediate physical needs, but those don't factor much into my self-image or my long-term goals; I might spend an hour each day making and eating meals, but ensuring this isn't a matter of long-term planning nor a cherished marker of personhood for me. Looked at another way, there are people that can't eat or excrete normally because of one medical condition or another, but I don't see them as proportionally less human.
I do spend a lot of time gaining access to abstract resources that ultimately secure my physiological satisfaction, on the other hand, and that is tied closely into my self-image, but it's so far removed from its ultimate goal that I don't feel that cutting out, say, apartment rental and replacing it with a proportional bill for Amazon AWS cycles would have much effect on my thoughts or actions further up the chain, assuming my mental and emotional machinery remains otherwise constant. I simply don't think about the low-level logistics that much; it's not my job. And I'm a financially independent adult; I'd expect the college student in the grandparent to be thinking about them in the most abstract possible way, if at all.
Have you ever had the unfortunate experience of hanging out with really boring people, say, at a party? The kind of people whose conversations are so vapid and repetitive that you can practically predict them verbatim in your head? Were you ever tempted to make your excuses and duck out early?
Now imagine that it's not a party but the entire world, and you can't leave, because it's everywhere. Would you still "feel altruistic toward humanity" at that point?
It's easy to conflate uploads and augments, here, so let me try to be specific (though I am not Wei Dai and do not in any way speak for them).
I experience myself as preferring that people not suffer, for example, even if they are really boring people or otherwise not my cup of tea to socialize with. I can't see why that experience would change upon a substrate change, such as uploading. Basically the same thing goes for the other values/preferences I experience.
OTOH, I don't expect the values/preferences I experience to remain constant under intelligence augmentation, whatever the mechanism. But that's kind of true across the board. If you did some coherently specifiable thing that approximates the colloquial meaning of "doubled my intelligence" overnight, I suspect that within a few hours I would find myself experiencing a radically different (from my current perspective) set of values/preferences.
If instead of "doubling" you "multiplied by 10" I expect that within a few hours I would find myself experiencing an incomprehensible (from my current perspective) set of values/preferences.
I've heard repeatedly that the correlation between IQ and achievement above about 120 (z = 1.33) is pretty weak, possibly even with diminishing returns at the very top. Does moving to 250 (z = 10) pass some threshold of intelligence where this trend reverses? Or is the idea that IQ stops strongly predicting achievement above 120 simply wrong?
This is something I've been curious about for a while, so I would really appreciate your help clearing the issue up a bit.
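For concreteness, the z-scores quoted above come from the usual IQ scale (mean 100, SD 15): z = (IQ - 100) / 15. A minimal sketch of the conversion and the implied population rarity (function names are my own, and the tail figures assume a true Gaussian, which real score distributions only approximate, especially far out in the tails):

```python
from math import erfc, sqrt

def iq_to_z(iq, mean=100.0, sd=15.0):
    """Convert an IQ score on the standard scale to a z-score."""
    return (iq - mean) / sd

def rarity(iq):
    """Upper-tail probability under a normal model: the fraction of the
    population expected to score above `iq`."""
    z = iq_to_z(iq)
    return 0.5 * erfc(z / sqrt(2))

for iq in (120, 160, 250):
    print(iq, round(iq_to_z(iq), 2), rarity(iq))
```

At z = 10 the normal model gives a tail probability on the order of 10^-23, far fewer than one person in the world's population, which is one way of seeing why "IQ 250" can only be a rough metaphor for capability rather than a measurable test score.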
First, IQ tests don't go up to 250 :-) Generally speaking, standard IQ tests have poor resolution in the tails -- they cannot reliably distinguish whether you have an IQ of, say, 170 or 190. At some point all you can say is something along the lines of "this person is in the top 0.1% of people we have tested" and leave it at that.
Second, "achievement" is a very fuzzy word. People mean very different things by it. And other than by money it's hard to measure.
In agreement with Vaniver's comment, there is evidence that differences in IQ well above 120 are predictive of success, especially in science. For example:
IQs of a sample of eminent scientists were much higher than the average for science PhDs (~160 vs ~130)
Among those who take the SAT at age 13, scorers in the top .1% end up outperforming the top 1% in terms of patents and scientific publications produced as adults
I don't think I have good information on whether these returns are diminishing, but we can at least say that they are not vanishing. There doesn't seem to be any point beyond which the correlation disappears.
I just read the "IQs of eminent scientists" comment and realized I really need to get my IQ tested.
I've been relying on my younger brother's test (with the knowledge that older brothers tend to do slightly better, but usually within an SD) to guesstimate my own IQ, but a) it was probably a capped score, like Feynman's, since he took it in middle school, and b) I have to know if there's a 95% chance of failure going into my field. I'd like to think I'm smart enough to be prominent, but it's irrational not to check first.
Thanks for the information; you might have just saved me a lot of trouble down the line, one way or the other.
I'd be very careful generalizing from that study to the practice of science today. Science in the 1950s was VERY different: the length of time to the PhD was shorter, postdocs were very rare, and almost everyone stepped into a research faculty position almost immediately.
In today's world, staying in science is much harder: there are lots of grad students competing for many postdocs, who in turn compete for few permanent science positions. Things like conscientiousness and organizational skills (grant writing is now a huge part of the job) play a much larger role in eventually landing a job than in the past, and luck is a much bigger driver (whether a given avenue of exploration pays off requires a lot of luck; selecting people whose experiments ALWAYS work is just grabbing people who have been both good AND lucky). It would surprise me if the worsening science career path hasn't changed the makeup of an 'eminent scientist'.
At the same time, all of those points except the luck one could be presented as evidence that the IQ required to be eminent has increased rather than the converse. Grant writing and schmoozing are at least partially a function of verbal IQ, IQ in general strongly predicts academic success in grad school, and competition tends to winnow out the poor performers a lot more than the strong.
Not that I really disagree, I just don't see it as particularly persuasive.
That's just one of the unavoidable frustrations of human nature, though; an experiment which disconfirms its hypothesis worked perfectly, it just isn't human nature to notice negatives.
I disagree for several reasons. Mostly, conscientiousness, conformity, etc. are personality traits that aren't strongly correlated with IQ (conscientiousness may even be slightly negatively correlated).
Would it surprise you to know that the most highly regarded grad students in my physics program all left physics? They had a great deal of success before and during grad school (I went to a top-5 program), but left because they didn't want to deal with the administrative/grant stuff, and because they didn't want to spend years at low pay.
I'd argue that a successful career in science selects for some threshold IQ, and then much more strongly for a personality type.
Are you American? If you've taken the SAT, you can get a pretty good estimate of your IQ here.
Mensa apparently doesn't consider the SAT to have a high enough g loading to be useful as an intelligence test after 1994. Although the website's figures are certainly encouraging, it's probably best to take them with a grain of salt.
True, but note that, in contrast with Mensa, the Triple Nine Society continued to accept scores on tests taken up through 2005, though with a higher cutoff (of 1520) than on pre-1995 tests (1450).
Also, SAT scores in 2004 were found to have a correlation of about .8 with a battery of IQ tests, which I believe is on par with the correlations IQ tests have with each other. So the SAT really does seem to be an IQ test (and an extremely well-normed one at that if you consider their sample size, though perhaps not as highly g-loaded as the best, like Raven's).
But yeah, if you want to have high confidence in a score, probably taking additional tests would be the best bet. Here's a list of high-ceiling tests, though I don't know if any of them are particularly well-normed or validated.
Most IQ tests are not very well calibrated above 120 or so, because the number of people in the reference sample who scored much higher is rather low. It's also the case that achievement is a function of several different factors, and those other factors will probably become the limiting ones for most people at IQs above 120. That said, it does seem that in physics, first-tier physicists score better on cognitive tests than second-tier physicists, which suggests that additional IQ is still useful for achievement in the most cognitively demanding fields. It seems likely that augmented humans who do several times better than current humans on cognitive tests would also be able to achieve several times as much in such fields.
Is this what you intended to say? "Diminishing returns" seems to apply at the bottom of the scale you mention; you've already selected the part where returns have started diminishing. Sometimes it is claimed that at the extreme top the returns are negative. Is that what you mean?
Yeah, that's just me trying to do everything in one draft. Editing really is the better part of clear writing.
I meant something along the lines of "I've heard it has diminishing returns and potentially [, probably due to how it affects metabolic needs and rate of maturation] even negative returns at the high end."
Nice to see more AI experts here.
Note also that Roman co-authored 3 of the papers on MIRI's publications page.
Hi!
I’ve been interested in how to think well since early childhood. When I was about ten, I read a book about cybernetics. (This was in the Oligocene, when “cybernetics” had only recently gone extinct.) It gave simple introductions to probability theory, game theory, information theory, boolean switching logic, control theory, and neural networks. This was definitely the coolest stuff ever.
I went on to MIT, and got an undergraduate degree in math, specializing in mathematical logic and the theory of computation—fields that grew out of philosophical investigations of rationality.
Then I did a PhD at the MIT AI Lab, continuing my interest in what thinking is. My work there seems to have been turned into a surrealistic novel by Ken Wilber, a woo-ish pop philosopher. Along the way, I studied a variety of other fields that give diverse insights into thinking, ranging from developmental psychology to ethnomethodology to existential phenomenology.
I became aware of LW gradually over the past few years, mainly through mentions by people I follow on Twitter. As a lurker, there’s a lot about the LW community I’ve loved. On the other hand, I think some fundamental, generally-accepted ideas here are limited and misleading. I began considering writing about that recently, and posted some musings about whether and how it might be useful to address these misconceptions. (This was perhaps ruder than it ought to have been.) It prompted a reply post from Yvain, and much discussion on both his site and mine.
I followed that up with a more constructive post on aspects of how to think well that LW generally overlooks. In comments on that post, several frequent LW contributors encouraged me to re-post that material here. I may yet do that!
For now, though, I’ve started a sequence of LW articles on the difference between uncertainty and probability. Missing this distinction seems to underlie many of the ways I find LW thinking limited. Currently my outline for the sequence has seven articles, covering technical explanations of this difference, with various illustrations; the consequences of overlooking the distinction; and ways of dealing with uncertainty when probability theory is unhelpful.
(Kaj Sotala has suggested that I ask for upvotes on this self-introduction, so I can accumulate enough karma to move the articles from Discussion to Main. I wouldn’t have thought to ask that myself, but he seems to know what he’s doing here! :-)
O&BTW, I also write about contemporary trends in Buddhism, on several web sites, including a serial, philosophical, tantric Buddhist vampire romance novel.
Hello then.
I am a political science and international development undergrad student, residing mainly in Vienna, Austria. The story of how I came here is probably a rather common one: it started on TvTropes, where I am an on-off forum contributor and editor, and where I first heard of Harry Potter and the Methods of Rationality. After reading it, I decided to look further into the rationalist community, partly because of my interest in philosophy, ethics, politics and debating, but also hoping to find novel, intelligent and helpful approaches to several key questions I have been struggling with for a while.
I hope to be able to contribute soon in an efficient and constructive way - even though I have a lot of catching up to do. There's probably going to be a bit of an archive panic moment, what with having the sequences to finish. Lots to learn, and eager to do so. See you around!
Which are those?
Hi everyone, my name is Sara!
I am 21, live in Switzerland and study psychology. I am fascinated by the field of rationality and therefore wrote my Bachelor's thesis on why and how critical thinking should be taught in schools. I started out with the plan to get my degree in clinical and neuropsychology, but will now change to developmental psychology, for I was able to fascinate my supervising tutor and secure his full support. This will allow me to base my Master's project on the development and enhancement of critical thinking and rationality, too. Do you have any recommendations?
After my Master's degree I still intend to train as a therapist (money reasons) or go into research (pushing the experimental research on rationality), and to give a lot of money to the most effective charities around. I wonder whether as a therapist it would be smarter to concentrate on children or adults; both fields will be open to me after my university education (which will take me about 2.5-3 more years). I speak German, Swiss German, Italian, French and English (and understand some more languages), which will give me some freedom in the choice of where to actually work in the future.
...but I'm not only looking for advice here. I'm (mainly) interested in educating myself (and possibly other people around me). In fact, I am part of a Swiss group that translates Less Wrong articles into German, making the content available to more people in our surroundings (Switzerland, Germany, Austria).
I've learned a lot from this community and it has strongly shaped who I have become. There's no way I'd want to go back to my even more biased past self :)
Indeed, I am looking forward to learning more!
I'm a Swiss medical student. I've read HPMoR and a large part of the core sequences. I've attended LW meetups in several US cities and met quite a few of you in the Bay Area and/or at the Effective Altruism Summit. I've interned for Leverage Research. I co-founded giordano-bruno-stiftung.ch (outreach organisation with German translations of some LessWrong blog posts, and other posts about rationality). Looking forward to participating in the comment section more often.
I am a maximum-security ex-con who studied and used logic for pro se civil-rights lawsuits. (The importance of being a maximum-security ex-con is that I was a stubborn iconoclast who learned and used logic in all seriousness.) Logic helped me identify the weak links in my opponents' arguments and avoid weak links in my own, and it helped me organize my writing and evidence. I also studied and learned to use “The Option Process” for eliminating my negative emotions and understanding other people's negative emotions. The core truth of “The Option Process” is that we choose to have negative emotions for reasons, not randomly, and not even necessarily. So our rationality is very much a part of our emotions, and, as such, good reasoning can utterly remove negative emotions at the core of their raison d'être. However, some of my emotional and intellectual challenges have resisted solutions via logic and “The Option Process.” For example, I could not figure out how to stay objective and behave objectively while trying to gamble for profit (not for fun). So I began reading widely about self-control, discipline, integrity, neuroeconomics, etc., and in the process I found this Less Wrong website.
I have only recently identified what may be at the root of my problem with gambling and why it resists both logic and “The Option Process.” Freud called it “childhood megalomania.” In our early years, whenever we cried and sniveled, the universe of Mom and Dad and others rushed to meet our needs. That inner baby rarely grows up well in any of us, and we still whine, snivel, and howl at the universe when things don't go our way, and we can get downright obstinate about doing so until the universe listens! The universe, in turn, responds favorably often enough to keep our inner babies convinced of our magic, temper-tantrum powers over reality.
I figured out that when I get frustrated, afraid, and challenged by the difficulties of gambling, I would rather feel safe, powerful and warm, and so I often lapse into an obstinate insistence on continuing to gamble because I want to believe and feel that I can successfully gamble whenever I want, even during objectively bad, fear-inducing, and frustrating conditions.
The universe has not been kind in that regard, but with my recent insight, I at least hope that my inner baby has grown one year older. The rest of the problem, the frustration and fear, will easily fall prey to the power of logic and “The Option Process.”
I'm Pasha, a financial journalist based in Tokyo.
I recently found out about this blog from this post on The View From Hell: http://goo.gl/DCNX4U
A few years in a school specializing in math and physics in the former Soviet Union convinced me to seek my fortunes in the liberal arts. (It was those kids in my class who would yell out the answer to a physics problem even before the teacher had finished reading the question.)
Covering the semiconductor industry here in Japan has sparked a renewed appreciation of the scientific method and revived my interest in rationality, math and computation. ... One thing leads to another and here I am ~
Hello, I am a 46 yr old software developer from Australia with a keen interest in Artificial Intelligence.
I don’t have any formal qualifications, which is a shame, as my ideal life would be doing full-time research in AI. Without a PhD I realise this won’t happen, so I am learning as much as I can through books, practice and various online courses.
I came across this site today from a link via MIRI and feel like I have struck gold - the articles, sequences and discussions here are very well written, interesting and thoughtful.
My current goal is to build a framework that would allow a machine to manage its information (goals, tasks, raw data, external biases, weightings, and eventually its “knowledge”). As I understand it, the last bit hasn’t been solved yet, as it implies the machine needs consciousness, but I am having fun playing around with it.
Hey, I'm dirtfruit.
I've lurked here for quite a while now. LessWrong is one of the most interesting internet communities I've observed, and I'd like to begin involving myself more actively. I've been to one meetup, in NYC, a few months ago, which was nice. I've read most of the sequences (I think I've read all of them at least once, but I haven't looked hard enough to be super-confident saying that). HPMOR is cool, I enjoyed reading it and continue to check for updates. I've tried to read most of what Eliezer has written, but gave up early on anything extremely technical, as I don't have the background for it. EY seems like a righteous dude to me. I dig his cause, and would like to make myself available to help, in what ways I can.
I'm currently 21 years old. I was born and raised on the west coast of the United States, and am now attending a college on the east coast, studying fine art with a concentration in drawing. I've always read a lot: when I was young, analog fiction, mostly; now I most often find myself reading nonfiction online.
I'd like to find ways for artists (specifically me, but also other interested artists, to a lesser degree) to be useful to the general cause of rationality; raising waterlines and whatnot. I believe there exists a general feeling among Less Wrong users that artists can be fun, but are not very instrumentally useful to their particular cause. If this belief is misplaced, I'd be overjoyed to adjust it properly. I'm obviously biased, but I believe this feeling to be more than a few shades off from correct. Pictorial communication can be super intuitive. It can communicate very quickly relative to the written word, can be very memorable, and is capable of transcending many written/spoken language barriers. Its main downsides are time expense (drawing a picture generally takes longer than describing something verbally, whether spoken or written) and scarcity of expertise: drawing and painting's difficulty curves seem roughly similar to writing's, but they are practiced far less often than writing, and (nowadays, in the fine art world at least) held to very different standards. Experts in visual communication should be very instrumentally useful, both for clarifying concepts not well suited to words and for attracting/aiding/communicating with those beyond the reach of literacy. I'm not claiming expertise (I'm still building my skills as a student), but at the very least I have some experience in crafting understandable, detailed pictures to something of a high standard. I'm also somewhat talented with words; integrating textual communication with visual communication (and vice versa) is something I'm sensitive to and interested in.
I also just really like the spirit and conventions of debate here, and would very much like to hear any and all thoughts about what I just wrote. :D thanks!
(also I think we need a new welcome thread? either that or I failed to find the proper one. This thread has far exceeded 500 posts...)
So: Here goes. I'm dipping my toe into this gigantic and somewhat scary pool/lake(/ocean?).
Here's the deal: I'm a recovering irrationalic. Not an irrationalist; I've never believed in anything but rationalism (in the sense it's used here, but that's another discussion), formally. But my behaviors and attitudes have been stuck in an irrational quagmire for years. Perhaps decades, depending on exactly how you're measuring. So I use "irrationalic" in the sense of "alcoholic"; someone who self-identifies as "alcoholic" is very unlikely to extol the virtues of alcohol, but nonetheless has a hard time staying away from the stuff.
And, like many alcoholics, I have a gut feeling that going "cold turkey" is a very bad idea. Not, in this case, in the sense that I want to continue being specifically irrational to some degree or another, but in that I am extremely wary of diving into the list of readings and immersing myself in rationalist literature and ideology (if that is the correct word) at this point. I have a feeling that I need to work some things out slowly, and I have learned from long and painful experience that my gut is always right on this particular kind of issue.
This does not mean that linking to suggested resources is in any way not okay, just that I'm going to take my time about reading them, and I suppose I'm making a weak (in a technical sense) request to be gentle at first. Yes, in principle, all of my premises are questionable; that's what rationalism means (in part). But...think about it as if you had a new, half-developed idea. If you tell it to people who tear it apart, that can kill it. That's kind of how I feel now. I'm feeling out this new(ish) way of being, and I don't feel like being pushed just yet (which people who know me might find quite rich; I'm a champion arguer).
Yes, this is personal, more personal than I am at all comfortable being in public. But if this community is anything like I imagine it to be (not that I don't have experience with foiled expectations!), I figure I'll probably end up divulging a lot more personal stuff anyway.
I honestly feel as if I'm walking into church for the first time in decades.
So why am I here then? Well, I was updating my long-dormant blog by fixing dead links &c, and in doing so, discovered to my joy that Memepool was no longer dead. There, I found a link to HPMOR. Reading this over the next several days contributed to my reawakening, along with other, more personal happenings. This is a journey of recovery I've been on for, depending on how you count, three to six years, but HPMOR certainly gave a significant boost to the process, and today (also for personal reasons) I feel that I've crossed a threshold, and feel comfortable "walking into church" again.
Alright, I'll anticipate the first question: "What are you talking about? Irrationality is an extremely broad label." Well, I'm not going to go into too much detail just now, but let's say that the revelation, or step forward, that occurred today was realizing that the extremely common belief that other people can make you morally wrong by their judgement is unequivocally false. That this premise is false is what I strongly believed growing up, but... well, perhaps "strongly" is the wrong word. I had been raised in an environment that very much held the opposite: that other people's opinion of you was crucial to your rightness, morality and worth as a human being. Nobody ever said it that way, of course, and most would probably deny it if it were put that way, but that is nonetheless how most people believe. However, in my case it was so blatant that it was fairly easy to see how ridiculous it was. Nonetheless, as reasonable as my rational constructions seemed to me, there was really no way I could be certain that I was right and others were wrong, so I held a back-of-my-head belief, borne of the experience of being repeatedly mistaken that every inquisitive child has, that I would someday mature and come to realize I had been wrong all along.
Well, that happened. Sort of. Events in my life picked at that point of uncertainty, and I gave up my visceral devotion to rationality and personal responsibility. That led slowly down into an awful abyss that I'm not going to describe just now, one that I have (hopefully) at last managed to climb out of. Now I'm standing at the edge, blinking at the sunlight, trying to figure out precisely where to go from here, but wary of being blinded by the newfound brilliance and wishing to take my time figuring out the next step.
So again, then, why am I here? If I don't want to be bombarded with advice on how to think more rationally, why did I walk in here? I'm not sure. It seemed time, time to connect with people who, perhaps, could support me in this journey, and possibly shorten it somewhat.
I also notice that this thread has gone waaay beyond 500 comments; perhaps someone with more Karma than I can make a new Welcome thread?
Hi, I'm Denise from Germany. I just turned 19 and study maths at university; right now, I spend most of my time on that and on caring for my 3-year-old daughter. I have known about LessWrong for almost two years now, but never got around to writing. However, I'm more or less involved with parts of the LessWrong and Effective Altruism communities; most of them originally found me via OkCupid (I stated I was a LessWrongian), and it expanded from there.
I grew up in a small village in the middle of nowhere in Germany, very isolated, without any people to talk to. I skipped a grade and did extremely well at school, but was mostly very unhappy during my childhood and teen years. Though I had free internet access, I had almost no access to education until I was 15 years old (and pregnant, and no, that wasn't unplanned), because I had no idea what to look for. I dropped out of school then and prepared, when I had time (I was mostly busy with my child), for the exams I needed to pass to be allowed to attend university. In Germany that's extremely unusual, and most people don't even know you can do it without going to school.
When I was 15, I discovered environmentalism (during pregnancy, via people who share my parenting values) and feminism. Since then, I have seriously cared about making the world "a better place". I was already very nerdy in my special fields of interest then, though still very uneducated and lacking basic concepts. Thankfully, I found LessWrong when I was just 17 and became very taken with it. I started to question my beliefs, became a utilitarian, adopted a somewhat transhumanist mindset and the usual, but the breakthrough only came last year, after I started spending time with people from the community. Since then I have been totally focused. Most people who met me this year or at the end of 2012 are very surprised by this; I've noticed that a lot of people completely overestimate my past selves (which is somewhat relieving, though I still feel that everyone from the LW/EA community who is taken with me overestimates me). Until the beginning of this year, I even considered environmentalism the most important problem (which seems completely ridiculous to me now). I had been a serious environmentalist for three years; then I talked for half an hour with another LessWrongian, who explained to me why it isn't the most important problem, and I dropped it the same day. After thinking about it myself and talking to several LW/EA people (e.g. 80,000 Hours), I decided the best thing for me is to study maths (my minor will be in computer science). People always tell me I worry too much about my future and am already in a very good position, being so driven, etc., but I often think I have lost so many years, and there is so much to read, so much I don't know, and so little time. Especially considering that I lose about 70% of my waking time to caring for my daughter (which people never take into account at all. They just have no idea. Before last October, it was 90%).
I often felt extremely incompetent and lazy because other people get so much done in comparison to me. I do feel a bit better after actually thinking about how big my disadvantages are, but it's still quite bad. Several people have asked me to consider internships, etc., but I mostly still feel too incompetent and, the even bigger problem, too socially awkward.
Rationality was very helpful in the past with personal problems and has heavily reduced them, though enough still remain. (For example, I have a very fixed mindset, which hadn't really been a problem before, because I was always able to do things despite it, without having to work for them; but now, in my maths degree, that doesn't work as well as it used to.) My productivity has increased a lot. There are a lot of things waiting for me to do; I can't afford to lose time to personal inconveniences. (Though most of my time and energy goes into my child anyway, and there isn't really much I can do about that.)
I'm very happy that I found LessWrong and like-minded people. If you have reading recommendations, please tell me. I am familiar with all the basic material (the Sequences, of course, the EA stuff, the self-improvement stuff, Bostrom's work, Kahneman...). If you have any other advice, I would also love to hear it.
Hi Denise/Kendra,
taking care of a small child alone is already a lot. If on top of that you are also studying and doing EA and LW meetups, that is quite a lot. I admire what you have accomplished. I have linked some material on rational parenting on my homepage, which you might want to take a look at: http://lesswrong.com/user/Gunnar_Zarncke
One tip (though you probably already know this and just couldn't put it into practice): the synergy effects in childcare are considerable. It is much easier for two parents to care for two children than for two single parents to each care for one child. The same holds for larger groups (though you usually only see this when several families get together). Is there no way for you to take advantage of that?
Feel free to ask me questions at any time.
Greetings from Hamburg
Gunnar
Welcome Denise! :)
<nitpick> That kind of quotation mark isn't customary in English; “these” are usually used in typeset materials, but most people just use "the ones on the keyboard" online. </nitpick>
As another LW'er with kids in Germany, welcome!
Hi, I'm a second-year engineering student at a University of California campus. I like engaging in rational discussions, and I find it important to know what's going on in the world and to gain more insight into controversial issues such as abortion, gay rights, sexuality, immigration, etc. Someone on Facebook directed me to this site, but I get bored easily, so I may or may not be much of a contributor.
Hello! I’m a 15 year old sophomore in high school, living in the San Francisco Bay Area. I was introduced to rationality and Less Wrong while interning at Leverage Research, which was about a month ago.
I was given a free copy of Chapters 1-17 of HPMOR during my stay. I was hooked. I finished the whole series in two weeks and made up my mind to try and learn what it would be like being Harry.
I decided to learn rationality by reading and implementing The Sequences in my daily life. The only problem was, I discovered that the length of Eliezer's posts from 2006-2010 was around 10 Harry Potter books. I was told it would take months to read, and that some people got lost along the way due to all the dependencies.
Luckily I am very interested in self improvement, so I decided that I should learn speed reading to avoid spending months dedicated solely to reading The Sequences. After several hours of training, I increased my reading speed (with high comprehension) five times, from around 150 words per minute to 700 words per minute. At that speed, it will take me 33.3 hours to read The Sequences.
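For what it's worth, the 33.3-hour figure checks out arithmetically if the Sequences run to roughly 1.4 million words (an assumed number, based only on the "10 Harry Potter books" comparison above):

```python
# Back-of-envelope reading-time estimate for the Sequences.
# The word count here is an assumption, not an official figure.
words = 1_400_000   # "around 10 Harry Potter books"
wpm = 700           # trained reading speed, words per minute

hours = words / wpm / 60
print(round(hours, 1))  # 33.3 hours
```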
It seems like most people advise reading The Sequences in chronological order in ebook form. Is using this ebook a good way to read The Sequences? Also, if I could spend five seconds to a minute after each blog post doing anything, what should I do? I was thinking of making some quick notes for myself to remember everything I read, perhaps with a spaced repetition system, or figuring out all the dependencies to smooth the way for future readers, perhaps leading to the easier creation of a training program...
Thanks for all your help, and I look forward to contributing to Less Wrong in the future!
Figure out how you would explain the main idea of the post to a smart friend.
Welcome! As you're interested in applying the Sequences to your daily life, I suggest checking out the Center for Applied Rationality. (Maybe you overlapped with them at Leverage?) As part of their curriculum development process, they offer free classes at their Berkeley office sometimes. If you sign up here you'll be put on a mailing list where they announce these sessions, usually a day or so in advance.
Hello, Less Wrong, I'm Anna Zhang, a high school student. I found this site about half a month ago, after reading Harry Potter and the Methods of Rationality. On Mr. Yudkowsky's Wikipedia page, I found a link to his site, where I found a link to this site. I've been reading the sequence How to Actually Change Your Mind, as Mr. Yudkowsky recommended, and I've learned a lot from it (though I still have a lot to learn...)
Welcome!
If you want to meet other high schoolers, this looks like a good place to start.
Hi, Less Wrong.
I am idea21, I am from Spain, and I apologize for my imperfect English.
I learned of the existence of this forum thanks to the kindness of Mr. Peter Singer: after I asked him whether he knew of anything similar, he recommended that I present my own idea about altruistic cultural development here. Apparently there is nothing like it being discussed anywhere, which was very disappointing to me. But I still feel that "it" makes sense, at least from a logical point of view.
I will post here some excerpts of the message I wrote to Peter Singer; I hope your suggestions or comments will be enlightening.
"Cultural changes in ethics have happened very slowly across history. According to some people, they are driven by economic issues (land ownership, trading, industrial development...) or political ones, also connected to the economy. But although I have read what Norbert Elias wrote on the subject, I keep coming back to the idea that the real change happens first in people's minds, then influences economics and politics, and not the other way around.
Primitive humans began creating art in the Paleolithic, before starting agriculture in the Neolithic. They decided to create art probably for the same reasons they decided to try to settle down: social needs, sharing emotional and intellectual activity in bigger groups. Agriculture was the economic answer to the practical problem of how to afford a sedentary way of life.
Norbert Elias (and later Steven Pinker) explains that an economic and political necessity urged authorities in the Middle Ages to promote values of cooperation and less violent human relationships: the idea of "civilité", gentlemanliness, new rules of behavior leading toward modern humanism. But it seems to me that Elias and Pinker forget that rules to control individual aggression were created before the date they give (the 13th century, when European royal courts promoted the new gentle habits). The real origin lies in monasticism: Saint Benedict's Rule dates from the 6th century, and monasticism did not begin with the fall of the Roman Empire either. In fact, it did not even begin with Christianity; Buddhism started it.
All this reminded me of what Karen Armstrong wrote about "compassionate religions" and the "Axial Age". So it could be this way: first, intellectual changes happened (arts, communitarian life, ethics), and then new economic phenomena came, developing social, cultural and ideological forms; second, as social life multiplied human relationships, a new adaptation of individual behavior was demanded to control aggression.
It seems that monasticism was the answer to the need to develop new ways of controlling human behavior for the benefit of the outside society, the same way animals are tamed to be used by humans. Monasticism is, basically, a "high-performance center" for behavior, producing "new men" better able to control violent behavior and teaching these discoveries to the people outside.
According to some current psychologists, like Simon Baron-Cohen, there are many people with traits of "super-empathy", the opposite extreme from psychopaths. But unlike psychopaths, who can enjoy sub-cultural environments fitted to them (the underworld of criminality), there is today no particular sub-cultural environment fitted for people especially able to develop self-control of aggression and anti-aggressive, affectionate and altruistic behavior. In monasticism, however, these "super-empathic" people were especially well placed to develop patterns of aggression self-control: that psychological trait proved to be adaptive.
My idea (I hope not only mine…) is that monasticism should be re-invented.
A new monasticism for the 21st century could be attractive to many young people, providing them with emotional, intellectual and affectionate experiences that they could probably find nowhere else. It must be kept in mind that the old monasticism existed because, to some extent, it fulfilled this kind of social need for many people, particularly the young. Nobody commits to the hard search for a better future world unless they expect to get, along the way, some kind of psychological reward in the present.
A monasticism of the 21st century would be, of course, very different from that of the 17th century. It should be rational, atheist and non-authoritarian, emphasizing affectionate and cooperative behavior through seclusion from mainstream society. That could do much more against poverty than all the current NGOs and every current humanitarian trend.
Human behavior is the "raw material" of humanitarianism. Not only could such communities influence mainstream society by demonstrating that a fully anti-aggressive way of life is possible and emotionally rewarding, but the economic activity of culturally organized "super-empathic" people would also be totally focused on altruistic work. Using modern technology, extremely cooperative organization and concentrated work resources (as in a "war economy"), the results should be very good.
Remember that in 17th-century Spain, some 2% of the population was secluded from "civil life" as monks, nuns or priests (and remember also the very committed communist activists of the first half of the 20th century). Can you imagine what 2% of this planet's population could do with our technology, if rationally committed to dedicating their lives solely to easing human suffering, in exchange only for emotional, affectionate and intellectual rewards? Psychopaths are between 2 and 4% of the population: how many "super-empathic" people might exist? It would be worth trying to get them organized, culturally evolving for the whole world's benefit.
Don't underestimate young people's idealism. The problem today is that they have no alternative place to start creating a better world outside the limitations of our cultural, social and political mainstream."
This idea could be developed much more deeply, but I hope you will understand that it deals with the creation of a last religion, rational and of course atheist, in order to allow a further enhancement of human abilities for mutual cooperation.
By "religion", I mean the necessity of developing its own system of cultural symbols and understandable patterns of social behaviour, which could not be the same as those of the current mainstream society (which is competitive, non-idealist and still irrational). Since a highly cooperative society could be based only on extreme trust and mutual altruism, it would be somewhat similar to some traditions of the old compassionate religions, but now detached from any irrationality or tradition and based on rational knowledge of human behaviour.
Thank you very much for your attention.
Hi everyone! I've been lurking around here for a few years, but now I want to be more active in the great discussions that often occur on this site. I discovered Less Wrong about 4 years ago, but the Methods of Rationality fanfic brought me here as a more attentive reader. I've read some of the sequences, and found them generally to use clear reasoning to make great points. If nothing else, reading them has definitely made me think very carefully about the way nature operates and how we perceive it.
In fact, this site was my first exposure to cognitive biases, and since then I've had the chance to study them further in college and read about them independently. This has been tremendously useful for me to understand why I and others I know behave the way we do.
I recently graduated college with a major in computer science and a decent exposure to math, having done some small independent research projects in machine learning. I'll soon begin a job as a software engineer at a late-stage startup that brings machine learning to the field of education.
I find that my greatest weakness with online communities is my tendency to return to lurking, even if I find the content very engaging. I hope to avoid that problem here, and at least continue participating in the comment threads.
Hello, everyone. I stumbled upon LW after listening to Eliezer make some surprisingly lucid and dissonance-free comments on Skepticon's death panel that inspired me to look up more of his work.
I've been browsing this site for a few days now, and I don't think I've ever had so many "Hey, this has always irritated me, too!" moments in such short intervals, from the rant about "applause lights" to the discussions about efficient charity work. I like how this site provides some actual depth to the topics it discusses, rather than hand the reader a bullet list of trivialities and have them figure out the application.
I am working as a direct marketing consultant, in the process of getting my MBA (a decision I've started to regret; my faith in the scientific validity of academic management begins to resemble a Shepard Tone) and with future ambitions in entrepreneurship, investing, scaling and other things that fit in the "things I've never done yet smart people are supposed to be good at" box.
I'm a member of Mensa, casual Poker (winning) and Mahjong (losing) player, enjoy lifting weights, cooking (in an utterly unscientific way that would make Heston Blumenthal weep) and martial arts. I also have an imaginary -5yo son/daughter who keeps me motivated to put in more hours at work so we won't have financial worries once they get born.
There are a bunch of things I'd like to do with my life long-term, with varying amounts of megalomania, but I'm generally content with focusing on increasing my financial and (practical) intellectual power in the short- to mid-term and let the future decide just how far off my predictions and plans turn out to be. Estimates range from very to utterly.
Here's hoping LW will help me with that, and that I'll be helpful to others.
Hello! I'm here because...well, I've read all of HPMOR, and I'm looking for people who can help me find the truth and become more powerful. I work as an engineer and read textbooks for fun, so hopefully I can offer some small insights in return.
I'm not comfortable with death. I've signed up for cryonics, but still perceive that option as risky. As a rough estimate, current medical research is about 3% of GDP and extends lifespans by about 2 years per decade. I guess that if medical research spending were increased to 30% of current GDP, then most of us would live forever while feeling increasingly healthy. Unfortunately, raising taxes to achieve this is not realistic: doubling taxes for an uncertain return is a hard sell, and I have been unable to find research quantifying the link between public research spending and healthcare technology improvements.

Another approach is inventing a technology to increase the overall economy size by 10x, by creating a practical self-replicating robot. This is possible in principle (as demonstrated by Hod Lipson in 2006 and by FANUC robot arm factories daily), but I am currently not a good enough programmer to design and build a fully automated RepRap assembly system in a reasonable amount of time. Also, there are many smart and innovative people at Willow Garage, FANUC and other similar organizations, and it seems unlikely I could exceed the slow and incremental progress of those groups.

A third option, trying to create super-level AI to make self-replicating robots for me, is even more difficult and unlikely. A fourth option, not taking heroic responsibility, would make me uncomfortable because I'm not that optimistic about the future. As it is, since dropping out of a PhD program I'm not confident in my ability to complete such a large project. Any practical help would be appreciated, as I would prefer not to rely on the untestable promises of quantum immortality, or on the faith that life is a computer game.
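The implicit "actuarial escape velocity" arithmetic in that estimate can be sketched as follows, assuming (very generously, as an editorial illustration rather than anything from the comment itself) that lifespan gains scale linearly with research spending:

```python
# Back-of-envelope "actuarial escape velocity" sketch. The linear-scaling
# assumption is a strong one; real returns on research likely diminish.
current_spend = 0.03   # medical research as a fraction of GDP
current_gain = 2.0     # extra years of lifespan gained per decade

# To outrun aging, lifespans must grow by more than 10 years per decade.
escape_velocity = 10.0
required_spend = escape_velocity * current_spend / current_gain
print(round(required_spend, 2))  # 0.15 -> 15% of GDP under linear scaling
```

On this reading, the comment's 30% figure amounts to a 2x safety margin over the naive linear extrapolation.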
Hello, my name is Cam :]
My goals in life are: 1. To build a self-sufficient farm, with renewable alternative energy and everything. 2. To acquire financial assets to support the building of my farm and the other hobbies and activities I pursue. 3. To further my fitness and health and maintain them. 4. Love and romance.
That's pretty much it, hahaha. I want to learn the ways of a Rationalist to make the best decisions and find the best solutions for problems I might encounter in pursuing these goals! People tend to say I have an immature or childlike air about me, which is why I am often looked down upon and not taken seriously. I think it's how I construct my sentences, maybe? My English is only of decent quality. Or maybe I see things so simply and positively that people see it as being naive? Well, anyway, I look forward to having you as one of my buddies! :D
Have you already built something? Do you have specific plans?
Hello again, Less Wrong! I'm not entirely new — I've been lurking since at least 2010 and had an account for a while, but since I've let that one lie fallow for almost two years now, I thought I'd start afresh.
I'm a college senior, studying cognitive psychology with a focus on irrationality / heuristics and biases. In a couple of months I'll be starting my year-long senior thesis, which I'm currently looking for a specific topic for. I'm also a novice Python programmer and a dabbler in nootropics.
I'll be trying to avoid spending too much unproductive time on LW ("insight porn" really is a great description, and I've learned to be wary of being excessively cerebral), but here I am again.
Hello Less Wrong community members,
My name is Zoe, I'm a philosophy student, and I am increasingly discombobulated by the inadequacy of my field of study at teaching me how to Actually Do Things. I discovered Less Wrong 18 months ago, thanks to the story Harry Potter and the Methods of Rationality. I've read a number of articles and discussions since then, mostly whenever I felt like reading something both intelligent and relevant, but I have not systematically read through any sequence or topic.
I have recently formed the goal to develop the skills necessary to 'raise the waterline' of rationality in the meat space discussions in which I take part, but without appearing to put anyone down.
Working towards this goal will make me interact more, and with a greater proportion of the people around me, which is something I need to do. Right now, apart from a few friends whose minds I love, I usually flee most conversations at the earliest socially acceptable moment, out of sheer boredom or annoyance and a huge lack of confidence in my ability to steer the conversation somewhere interesting. I want to change this by improving myself (since Less Wrong has taught me well that it would be foolish to wait or hope for others to change or improve when I could be changing myself).
While so far my use of Less Wrong has been recreational, I'm creating an account now to be able to participate in discussions, not because I think I have anything really important to say, but because practicing rationality not just in my mind but while actually interacting is probably a good way to go about my newfound objective. I would really like to become able to introduce rationality into conversations with the average non-rationalist and do so tactfully, and I think Less Wrong can help me.
Do you agree with my assessment that the Less Wrong posts and discussion community have the potential to help me further my goal? If so, how do you think I should best use the resources here?
I'm looking forward very much to interacting with all of you!
Zoé
PS : My first language is French. I really do welcome any and all nitpicks and corrections about my English.
Hi, I'm Alex, a high school student. I came here from HPMOR and have been lurking for about 5 months now.
I use my "rationalnoodles" nickname almost everywhere, but I still can't decide whether it's appropriate on LW. I'd like to read what others think.
Thanks.
Hello, my name is Lisa. I found this site through HPMOR.
I'm a Georgia Tech student double majoring in Industrial Engineering and Psychology. I know I want to further my education after graduation, probably through a PhD. However, I'm not entirely sure what field I would want to focus on.
I've been lurking for awhile and am slowly making my way through the sequences, though I'm currently studying abroad so I'm not reading particularly quickly. I'm particularly interested in behavioral economics, statistics, evolutionary psychology, and in education policy, especially in higher education.
Hi Less Wrong. I found a link to this site a year or so ago and have been lurking off and on since. However, I've self identified as a rationalist since around junior high school. My parents weren't religious and I was good at math and science, so it was natural to me to look to science and logic to solve everything. Many years later I realize that this is harder than I hoped.
Anyway, I've read many of the sequences and posts, generally agreeing and finding many interesting thoughts. It's fun reading about zombies and Newcomb's problem and the like.
I guess this sounds heretical, but I don't understand why Bayes theorem is placed on such a pedestal here. I understand Bayesian statistics, intuitively and also technically. Bayesian statistics is great for a lot of problems, but I don't see it as always superior to thinking inspired by the traditional scientific method. More specifically, I would say that coming up with a prior distribution and updating can easily be harder than the problem at hand.
I assume the point is that there is more to what is considered Bayesian thinking than Bayes theorem and Bayesian statistics, and I've reread some of the articles with the idea of trying to pin that down, but I've found that difficult. The closest I've come is that examining what your priors are helps you to keep an open mind.
Bayes' theorem is just one of many mathematical equations, like, for example, the Pythagorean theorem. There is inherently nothing magical about it.
It just happens to explain one problem with the current scientific publishing process: neglecting base rates. Which sometimes seems like this: "I designed an experiment that would prove a false hypothesis only with probability p = 0.05. My experiment has succeeded. Please publish my paper in your journal!"
(I guess I am exaggerating a bit here, but many people 'doing science' would not understand immediately what is wrong with this. And that would be those who even bother to calculate the p-value. Not everyone who is employed as a scientist is necessarily good at math. Many people get paid for doing bad science.)
This kind of thinking has the following problem: even if you invent a hundred completely stupid hypotheses, if you design experiments that would each confirm a false hypothesis with p = 0.05, about five of them will be confirmed by the experiment. If you showed someone else all hundred experiments together, they might understand what is wrong. But you are more likely to send only the five successful ones to the journal, aren't you? -- But how exactly is the journal supposed to react to this? Should they ask: "Did you do many other experiments, even ones completely irrelevant to this specific hypothesis? Because, you know, that somehow undermines the credibility of this one."
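The hundred-hypotheses arithmetic can be checked with a toy simulation (an editorial illustration, not part of the original comment):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# 100 completely false hypotheses, each tested at significance level 0.05.
# Each test "succeeds" (a Type I error) with probability 0.05.
alpha = 0.05
successes = sum(random.random() < alpha for _ in range(100))
print(successes)  # about 5 of the 100 false hypotheses come out "proved"
```

Only the handful of chance successes would reach the journal; the rest stay in the file drawer.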
The current scientific publishing process has a bias. Bayes' theorem explains it. We care about science, and we care about science being done correctly.
That's not neglecting base rates, that's called selection bias combined with incentives to publish. Bayes theorem isn't going to help you with this.
http://xkcd.com/882/
Uhm, it's similar, but not the same.
If I understand it correctly, selection bias is when 20 researchers run an experiment with green jelly beans; 19 of them don't find a significant correlation, 1 of them does... and only that 1 publishes, while the other 19 don't. The essence is that we had 19 pieces of evidence against the green jelly beans and only 1 piece of evidence for them, but we don't see those 19 pieces, because they were not published. Selection = "there is X and Y, but we don't see Y, because it was filtered out by the process that gives us information".
But imagine that you are the first researcher ever who has researched the jelly beans. And you only did one experiment. And it happened to succeed. Where is the selection here? (Perhaps selection across Everett branches or Tegmark universes. But we can't blame the scientific publishing process for not giving us information from the parallel universes, can we?)
In this case, base rate neglect means ignoring the fact that "if you take a random thing, the probability that this specific thing causes acne is very low". Therefore, even if the experiment shows a connection with p = 0.05, it's still more likely that the result just happened randomly.
The proper reasoning could go something like this (all numbers pulled out of a hat) -- we already have pretty strong evidence that acne is caused by food; let's say there is a 50% probability of this. With enough specificity (giving each fruit a different category, etc.), there are maybe 2000 categories of food. It is possible that more than one of them causes acne, and our probability distribution for that is... something. Considering all this information, we estimate a prior probability of, let's say, 0.0004 that a given random food causes acne. -- Which means that a correlation significant at the level p = 0.05 per se means almost nothing. (Here one could use Bayes' theorem to calculate that an experiment successful at p = 0.05 shows the true cause of acne with a probability of about 1%.) We would need to reach p = 0.0004 just to have a 50% chance of being right. How can we do that? We should use a much larger sample, or we should repeat the experiment many times, record all the successes and failures, and do a meta-analysis.
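Plugging the hat-numbers above into Bayes' theorem confirms the "about 1%" figure (with the extra simplifying assumption, not stated in the comment, of a test that always detects a true cause, i.e. power = 1):

```python
# Posterior that the food really causes acne, given a result significant
# at p = 0.05, using the made-up prior from the discussion above.
prior = 0.0004   # P(this random food causes acne)
alpha = 0.05     # P(significant | no real effect), the false-positive rate
power = 1.0      # P(significant | real effect); assumed perfect for simplicity

posterior = prior * power / (prior * power + (1 - prior) * alpha)
print(round(posterior, 4))  # 0.0079 -- i.e. roughly a 1% chance of being right
```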
That's a different case -- you have no selection bias here, but your conclusions are still uncertain -- if you pick p=0.05 as your threshold, you're clearly accepting that there is a 5% chance of a Type I error: the green jelly beans did nothing, but the noise happened to be such that you interpreted it as conclusive evidence in favor of your hypothesis.
But that is all fine -- readers of scientific papers are expected to understand that results significant at p=0.05 will be wrong around 5% of the time, more or less (not exactly, because the usual test measures P(D|H), the probability of the observed data given the (null) hypothesis, while you really want P(H|D), the probability of the hypothesis given the data).
People rarely take entirely random things and test them for causal connection to acne. Notice how you had to do a great deal of handwaving in establishing your prior (aka the base rate).
As an exercise, try to be specific. For example, let's say I want to check if the tincture made from the bark of a certain tree helps with acne. How would I go about calculating my base rate / prior? Can you walk me through an estimation which will end with a specific number?
And this is the base rate neglect. It's not "results significant at p=0.05 will be wrong about 5% of the time". It's "wrong results will be significant at p=0.05 about 5% of the time". And most people confuse these two things.
It's like when people confuse "A => B" with "B => A", only this time it is "A => B (p=0.05)" versus "B => A (p=0.05)". It is "if wrong, then significant 5% of the time". It is not "if significant, then wrong 5% of the time".
Yes, you are right. Establishing the prior is pretty difficult, perhaps impossible. (But that does not make "A => B" equal to "B => A".) Probably the reasonable thing to do would be simply to impose strict limits in areas where many results were proved wrong.
You are not alone in thinking the use of Bayes is overblown. It can't be wrong, of course, but it can be impractical to use, and in many real-life situations we might not have specific enough knowledge to be able to use it. In fact, that's probably one of the biggest criticisms of Less Wrong.
Regarding Bayes, you might like my essay on the topic, especially if you have statistical training.
That paper did help crystallize some of my thoughts. At this point I'm more interested in wondering if I should be modifying how I think, as opposed to how to implement AI.
I know a few answers to this question, and I'm sure there are others. (As an aside, these foundational questions are, in my opinion, really important to ask and answer.)
I think that the qualitative side of Bayes is super important but don't think we've found a good way to communicate it yet. That's an active area of research, though, and in particular I'd love to hear your thoughts on those four answers.
What is the qualitative side of Bayes?
Unfortunately, the end of that sentence is still true:
I think that What Bayesianism Taught Me is a good discussion on the subject, and my comment there explains some of the components I think are part of qualitative Bayes.
I think that a lot of qualitative Bayes is incorporating the insights of the Bayesian approach into your System 1 thinking (i.e. habits on the 5 second level).
Hello LW. My pseudonym is DiscyD3rp, and this introduction is long overdue. I am 17, male, and currently enrolled in high school. I discovered this site over a year ago, via HPMoR, and have read a good percentage of the main sequences in a kinda correct order. However, I was experiencing significant angst from what I call Dungeon Crawl Anxiety (the same reason that when exploring RPG dungeons I double back and explore even AFTER discovering the correct path). I am now (re-)reading the entirety of Eliezer's posts in the ebook version of the sequences. I have found the re-read articles still useful after having gotten a basic handle on Bayesian thought, and look forward to completing my enlightenment.
As far as personality, I was (am) incredibly arrogant, and future goals involve MIRI and/or teaching rationality myself (one such time involved an email to Eliezer claiming the ability to save the world, and subsequently learning that decision theory is HARD). I am not particularly talented at quickly absorbing technical fields of knowledge, but plan on developing that skill. My existing talent seems to be manipulating ideas and concepts easily and creatively once they're well understood. I'm great at reading the map, but have difficulty writing it (in very mathy fields).
I'm a born Christian, with a moderate upbringing, but was likely saved from extremism by the internet just in time. Now a skeptic and an atheist.
I hope you will forgive the impertinence of offering unsolicited advice: if you haven't already, you might consider teaching yourself several programming languages in your free time. It's a very marketable skill, important to MIRI's work, and in many ways suffices for a basic education in logic. The mathy stuff is probably not optional given your ambitions, and much of the same discipline and attention to detail necessary for programming can be applied to learning serious math. Arrogance will be a terrible burden if unaccompanied by usefulness and skill.
I am currently teaching myself Haskell and have a functional programming textbook on my device. While unsolicited, I appreciate ALL advice. Any other tips?
Nope, that's all I got. Wait, one more thing. I learned in a painful way that scholarly credentials are most cheaply won (time and effort wise) in high school, and then it gets exponentially more difficult as you age. Every hour you spend making sure you get perfect grades now is worth ten or a hundred hours in your early-mid twenties. Looking back, getting anything less than perfect grades, given how easy that is in high school, seems utterly foolish. Maybe you already know that. Good luck!
Given your ambition I suggest changing your name to something respectable before you have spent time establishing a name for yourself. DiscyD3rp will make establishing credibility more difficult for you.
Hello LessWrong community, my name is Andrew. I'm beginning my first year of university at UofT this September, despite my relatively old age (21). This is mostly due to the resistance I faced when upgrading my courses, due to my learning disability diagnosis and lack of prereqs. I am currently enrolled in a BA cognitive science program, although I hope to upgrade my math credits to a U level so I can pursue the science branch instead.
I found this site through common sense atheism a while ago, although I have sparsely visited it until recently. I admittedly know little about rationality, and thus have little to contribute. However, I hope that my time here will be a learning experience, in order to better determine the direction my studies should take.
If I had to state my professional (-ish) interest, it would be the psychology of belief- like how people evaluate evidence, how they are biased, etc. I also find perception and consciousness neat- although I am kinda ignorant on those topics.
Hi all, my name is Claus. I'm a third-year (23-year-old) BA philosophy student from the Netherlands. I am unsure how exactly I got here, but I sure do know why I kept coming back. Throughout my study I have become increasingly frustrated with the state of philosophy in general and my own university's approach to philosophy. Being only interested in finding truth (and thus knowledge), I mainly grew tired of what can be called 'continental philosophy' (specifically 'Hegelianism') because of its lack of clarity. I found my views on this matter are much the same as the views presented in the 'Philosophy: A Diseased Discipline' post, and I'm so happy to have found such a large and confident group of like-minded people.
As for my own goals and personality: I am heavily interested in transhumanism (life extension, human enhancement), gadgets, psychology (some PUA and social engineering as well) and quantified self. I try to live as healthily as possible, I meditate, and I am planning to try Soylent (I'm sure you have heard of it) in the near future. I am (and have been for a couple of years) obsessive about improving myself, to the point of consciously trying to develop some sort of 'success' algorithm for my life (think of Robin Hanson's 'Betterness Explosion'). As you will understand, sequences like 'The Science of Winning at Life' have been very useful to me. Central to this whole project of mine is to learn from every experience. To help me with this effort I have, for the past year and a half, been documenting my own beliefs (in the form of lists of short propositions). This way I have an ever-growing, external database of my own beliefs (and, if true, knowledge) which changes and improves as evidence accumulates.
Currently I am involved with two start-up companies, am trying to finish my Bachelors degree and plan to write some essays on Science and evidence based politics. I'm sure I will enjoy my stay here!
Hi everyone,
I have been lurking LessWrong on and off for quite a while. I originally found this place through HPMoR; I thought the 'LessWrong' authorname was clever and it was nice to find out there was a whole community based around aiming to be less wrong! My tendency to overthink whatever I write has gotten in the way of actually taking part in the community so far though. Maybe now that I have gotten the introduction out of the way I'll be more likely to post.
A bit more about myself: I'm a student from the Netherlands, doing a masters in Artificial Intelligence. I'm currently planning a research internship in Albany, NY, that will start sometime this summer. I'd love to get in touch with people from there by the way, so if anyone is interested let me know!
Hello
I've been reading LW for a long time. At the moment I'd like to learn about decision making more rigorously as well as finding out how to make better decisions myself - and then actually doing that in real life.
I'm also very interested in algorithmic reasoning about and creation of computer programs but I know far too little about this.
Hi there. I'm thrilled to find a community so dedicated to the seeking of rational truth. I hope to participate in that.
Hi everyone, I’m The Articulator. (No ‘The’ in my username because I dislike using underscores in place of spaces)
I found LessWrong originally through RationalWiki, and more recently through Iceman’s excellent pony-fic about AI and transhumanism, Friendship is Optimal.
I’ve started reading the Sequences, and made some decent progress, though we’ll see how long I maintain my current rate.
I’ll be attending University this fall for Electrical Engineering, with a desire to focus in electronics.
Prior to LW, I have a year’s worth of Philosophy and Ethics classes, and a decent amount of derivation and introspection.
As a result, I’ve started forming a philosophical position, made up of a mishmash of formally learnt and self-derived concepts. I would be very grateful if anyone would take the time to analyze, and if possible, pick apart what I’ve come up with. After all, it’s only a belief worth holding if it stands up to rigorous debate.
(If this is the wrong place to do this, I apologize - it seemed slightly presumptuous to imply that my comment thread would be large enough to warrant a separate discussion article.)
I apologize in advance for a possible lack of precise terminology for already existing concepts. As I've said, I'm partially self-derived, and without knowing the name of an idea, it's hard to check if it already exists. If you do spot such gaps in my knowledge, I would be grateful if you'd point them out. Though I understand correct terminology is nice, I'd appreciate it if you could judge my ideas regardless of how many fancy words I use to describe them.
My thought process so far:
P: Naturalism is the only standard by which we can understand the world
P: One cannot derive ethical statements or imperatives from Naturalism, as, like all good science, it is only descriptive in nature
IC : We cannot derive ethical statements
IC: There is no intrinsic value
C: Nihilism is correct
However, assuming nihilism is correct, why don’t I just kill myself now? That’s down to the evolutionary instincts that need me alive to reproduce. Well, why not overcome those and kill myself? But now, we’re in a difficult situation – why, if nothing matters, am I so desperate to kill myself?
Nihilism is the total negation of the intrinsic and definitive value in anything. It’s like sticking a coefficient of zero onto all of your utility calculations. However, that includes the bad as well as the good. Why bother doing bad things just as much as doing good things?
My eventual realization came as a result of analyzing the level or order of concepts. Firstly, we have the lowest order, instinct, which we are only partially conscious of. Then, we have a middle order of conscious thought, wherein we utilize our sapience to optimize our instinctual aims. Finally, we have the first of a series of high order thought processes devoted to analyzing our thoughts. It struck me that only this order and above is concerned with my newfound existential crisis. When I allow my rationality to slip a bit, a few minutes later, I stop caring, and start eating or taking out my testosterone on small defenseless computer images. Essentially, it is only the meta-order processes which directly suffer as a result of nihilism, as they are the ones that have to deal with the results and implications.
Nihilism expects you to give up attempting to change things or apply ethics because those are seen as meaningful concepts. However, really, the way I see it, Nihilism is about simply the state of ‘going with the flow’, colloquially speaking. However, that’s intentionally vague. Consider: if your middle-order processes don’t care that you just realized nothing matters, what’ll happen? They’ll just keep doing what they’ve always done.
In other words, since humans compartmentalize, going with the flow is synonymous with turning off your meta-level thought processes as a goal-oriented drive, and purely operate on middle-level processes and below. That corresponds, for a Naturalist, with Utilitarianism.
Now, that’s not to say “turn off your meta-level cognition”, because otherwise, what am I doing here? What I’m doing right now is optimizing utility because I enjoy LessWrong and the types of discussions they have. I bother to optimize utility despite being a nihilist because it is easier, and less work, meta-level-wise, to give in to my middle-level desires than to fight them.
To define Nihilism, for me, now comes to the concept of passively maintaining the status quo, or more aptly, not attempting to change it. Why not wirehead? – because that state is no more desirable in a world with zero utility, but takes effort to reach. It’s going up a gradient which we can comfortably sit at the bottom of instead.
I fear I haven’t done the best job of explaining concisely, and I believe my original, purely mental, formulations were more elegant, so that’s a lesson on writing everything down learned. However, I hope some of you can see some flaws in this argument that I can’t, because at the moment, this explains just about everything I can think of in one way or another.
Thank you all in advance for any help given,
The Articulator (It’s kind of an ironic choice of name, present ineptitude considered.)
Welcome to LW!
There is a metaethics sequence, of which this post asks what you would do if morality didn't exist. This may be a good place to start looking, but I wouldn't be too discouraged if you don't find it terribly useful (as Eliezer and others see it as not as communicative as Eliezer wanted it to be).
The point I would focus on is that there's a difference between an ethical system that would compel any possible mind to follow it, and an ethical system in harmony with you and those around you. Figure out what you can get from ethics, and then examine the results of the ethical systems you try. Worry more about developing a system that reliably makes small, positive changes than about developing a system that is perfectly correct. As it is said, a complex system that works is invariably found to have evolved from a simple system that worked.
Okay, whoa, hey. I clearly and repeatedly explained my lack of total understanding of LW conventions. I'm not sure what about this provoked a downvote, but I would appreciate a bit more to go on. If this is about my noobishness, well, this is the Welcome Thread. Great job on the welcoming, by the way, anonymous downvoter. At the very least offer constructive criticism.
Edit: Troll? Really?
Edit,Edit: Thank you whoever deleted the negative karma!
I wouldn't take downvotes to heart, if I were you, unless like, a whole bunch of people all downvote you. A downvote's not terribly meaningful by itself.
Welcome to Less Wrong, by the way.
Now, I didn't downvote you, but here's some criticism, hopefully constructive. I didn't read most of your post, from where you start discussing your philosophy (maybe I will later, but right now it's a bit tl;dr). In general, though, taking what you've learned and attempting to construct a coherent philosophical position out of it is usually a poor idea. You're likely to end up with a bunch of nonsense supported by a tower of reasoning detached from anything concrete. Read more first. Anyway, having a single "this is my philosophy" is really not necessary... pretty much ever. Figure out what your questions are, what you're confused about, and why; approach those things one at a time and without an eye toward unifying everything or integrating everything into a coherent whole, and see what happens.
Also: read the Sequences, they are pretty much concentrated awesome and will help with like, 90% of all confusion.
Okay, noted. It's just that from what I've seen so far, a post with a net downvote is generally pretty horrible. I admit I took some offense from the implication. I'll try not to let it bother me unless N is high enough for it to be me, entirely, that's the problem.
Thanks. :)
Thank you for taking the time to give constructive criticism.
I will attempt to make it more coherent and summarized, assuming I keep any of it.
I appreciate I am likely too inexperienced to come up with anything that impressive, but I was hoping to use this as a method to understand which parts of my cognitive function were not behaving rationally, so as to improve.
I will absolutely continue to read, but with the utmost respect to Eliezer, I have yet to come across anything in the Sequences which did more than codify or verbalize beliefs I'd already held. At that point, two and a half sequences in, I felt it was unlikely that the enlightenment value would spike in such a way as to render my previously held views obsolete.
I'll bear your objections in mind, but I fear I won't let go of this theory unless somebody points out why it is wrong specifically, as opposed to methodically. Not that I'm putting any onus on you or anyone else to do so.
As I said, I am reading them, but have found them mostly about how to think as opposed to what to think so far, though I daresay that is intentional in the ordering.
Thanks again for your help and kindness. :)
It's not even that (ok, it's probably at least a little of that). Some of the most worthless and nonsensical philosophy has come from professional philosophers (guys with Famous Names, who get chapters in History of Philosophy textbooks) who've constructed massive edifices of blather without any connection to anything in the world. EDIT: See e.g. this quote.
You've got it right. One of the points Eliezer sometimes makes is that true things, even novel true things, shouldn't sound surprising. Surprising and counterintuitive is what you get when you want to sound deep and wise. When you say true things, what you get is "Oh, well... yeah. Sure. I pretty much knew that." Also, the Sequences contain a lot of excellent distillation and coherent, accessible presentation of things that you would otherwise have to construct from a hundred philosophy books.
As for enlightenment that makes your previous views obsolete... in my case, at least, that happened slowly, as I digested things I read here and in other places, and spent time (over a long period) thinking about various things. Others may have different experiences.
Yeah, one of the themes in Less Wrong material, I've found, is that how to think is more important than what to think (if for no other reason than that once you know how to think, thinking the right things follows naturally).
Oh, I know. I start crying inside every time I learn about Kant.
Well, I'll take what you've said on board. Thanks for the help!
Hello, Less Wrong! I'm Michael Odintsov from Ukraine, so sorry for my not-nearly-perfect :) English. Just like many here I found this site from Yudkowsky's link while reading his "Harry Potter and the Methods of Rationality". I am a 27-year-old programmer, fond of science in general and mostly math of all kinds.
I worked a bit in the fields of AI and machine learning and am looking forward to new opportunities. Well... that's almost all I can tell about myself right now -- never been a great talker :) If anyone has questions or needs some help with CS-related topics -- just ask, I'm always ready to help.
Hi,
I first found this site a while back after googling something like "how to not procrastinate" and finding one of Eliezer's articles. I've been slowly working my way through the posts ever since, and I think they are significantly changing my life.
I've just finished secondary education, which I found stultifying, and so I'm now quite excited to have more control over my own learning. I've been very interested in rationality since I was young, and have been passionate about philosophy because of this. Though, after getting into this site I've been exposed to some pretty damaging criticisms of the study of philosophy (at least traditional philosophy and the content that seems to be taught in most universities), and now I'm beginning to question whether I'm really interested in philosophy, and whether it is valuable to study, or whether what I'm really after is something more like cognitive science.
This leads me to a problem: I've been offered a place at a well-respected university (particularly in philosophy) for a course in which I can choose three of five subjects, including philosophy, psychology, linguistics and neurobiology, and I'm not sure which to choose. I'm in the process of familiarizing myself with the basics of all of these fields, and I'm writing letters to my old philosophy teachers with this article http://www.paulgraham.com/philosophy.html attached to see how well the criticism can be answered. My problem, though, is that I'm quite uninformed in all of these areas, and I'm finding it hard to make a rational decision about which subjects to study. Any advice on this, or on how to make the decision generally, would be much appreciated (e.g. any recommendations for reading). My overall aim for my education is pretty well expressed by parts of Less Wrong -- I want to become more rational, in both my beliefs and my actions (although I find the parts of Less Wrong about epistemology, self-improvement and anti-akrasia more relevant to this than the parts about AI, maths and physics).
Also, I found the solved-questions repository, but is there a standard place for problems which people need help solving? If it exists, it may be a better place for most of this post...
Cheers
Hi, I'm Brayden, from Melbourne Australia. I attended the May 2013 CfAR workshop in Berkeley about 1 year after finding Less Wrong, and 2 years after finding HPMOR. My trip to The States was phenomenal, and I highly recommend the CfAR workshops.
My life is significantly better now than it was before, and I think I am on track with the planning process for eventually working on the highest impact causes that might help save the world.
Hello Less Wrong! I am Scott Garrabrant, a 23 year old math PhD student at UCLA, studying combinatorics. I discovered Less Wrong about 4 months ago. After reading MoR and a few sequences, I decided to go back and read every blog post. (I just finished all Eliezer's OB posts) I was going to wait and start posting after I got completely caught up, but then I started attending weekly meetups 2 months ago, and now I need to earn enough karma to make meetup announcements.
I have been interested in meta-thinking for a long time. I have spent a lot of time thinking about the nature of rationality, purely out of curiosity, and have independently made many of the same conclusions I have found on this blog. I believe that I realized that decision/probability theory was the correct language to talk about rationality in high school about 6 years ago. It has made me very happy to learn that there are so many like-minded people.
However, there has been one mistake I have been making for a long time. I have been giving other people too much respect in their rationality. I have been treating other people as almost rational agents with different utility functions and very different prior probabilities. This blog has taught me how wrong that view was, which is causing me to rethink some of my prior views.
One thing I would like some help in deciding right now is about Unitarian Universalism. I would love it if any rationalists who know anything about Unitarianism (or who don't) could help me out. I am agnostic (if you define the god hypothesis to include the simulation hypothesis; atheist otherwise). I believe that most of the bad parts of religion and theism come from the fact that they tend to encourage irrationality. So far, my picture of the average Unitarian is above average rationality, but not great. The main thing that attracts me to the group is that they (at least claim to) promote "a free and responsible search for truth and meaning." Their search algorithms could really use some work, but they both view truth as a goal and understand that they have not attained it completely. In looking for a local community to provide "brownies and babysitters," it seems to be the best I have found. Also, although I do not have a "god-shaped hole" that needs to be filled, I understand that many people do, and so I can see that it might be good to support an organization that will help to allow those people to fill that hole with something that does not encourage irrationality. On the other hand, sometimes I feel like Unitarians care a lot more about the "free" part of "free and responsible search for truth and meaning" than the "responsible" part. I am worried that they like to discuss their individual beliefs as they would discuss their favorite colors, and never actually change. Maybe with our current messed-up society, the first step is for people to feel free to believe what they want, and then learn how to be critical.
In attending Unitarian churches, I have repeatedly enjoyed myself, thought about interesting philosophy (even though I often disagree with the sermon), had sufficiently strong emotional responses to the music (e.g. "Imagine"), and been encouraged by how much people were willing to help each other. I already know that I enjoy the experience. What I am trying to decide is whether, morally, I should be willing to support this organization. For the future, I am also trying to decide whether I should be worried that being around this kind of thinking might be bad for my future kids.
Hi...I'm Will -- I learned about Less Wrong through a very intelligent childhood friend. I am quite nearly his opposite -- so maybe I shouldn't say anything...ever...and just stick to reading and learning. But it was recommended that I leave an introduction post. I also like this as a method of learning. I skimmed a few of the articles on the About page and enjoyed them...they provided a good deal of information that I believe I am much better at processing and understanding than at creating. Therefore, I'm excited to see what I get out of this. I'm also curious to attend a Less Wrong meetup. I haven't looked for one yet, but I will be.
Part of the problem I have is that I prefer doing things that provide tiny levels of fun with little to no self-growth. For example, I would prefer playing a game of League of Legends over reading...anything. This is an annoying habit for obvious reasons. So I guess I'd welcome suggestions of where I should start that might help nudge me subconsciously (and eventually consciously) in favor of more meaningful activities. Not to abolish random bouts of pointless fun, but rather to refine my efficiency with the time devoted to ALL of my daily activities.
How funny, I'm Will too! Just a quick & probably useless suggestion: be sure to be extremely honest with yourself about what it is all parts of you want, including the parts that want to play League of Legends. If you understand those parts and how they're a non-trivial part of you, not just an adversarial thing set up to subvert your prefrontal cortex's 'real' ambitions, that will allow you to find ways in which those parts can be satisfied that are more in line with your whole self's ambitions. E.g. the appeal of League of Legends is largely that you have understandable, objective goals that you can make measurable cumulative progress on, which is intrinsically rewarding—the parts of you that are tracking that intrinsic reward might be just as well rewarded by a sufficiently well-taskified approach to learning, say, piano, Japanese, programming, and other skills that are more likely to provide long-term esteem-worthy capital. Finding a way to taskify things in general might be tricky, and it won't itself be the sort of thing that you're likely to make unambiguous cumulative progress on, but it's meta and thus is a very good way to bootstrap to a position where further bootstrapping is easier and where you can hold on to momentum.
As a new member of this community, I am having a bit of difficulty with the numerous abbreviations that people use in their writing on this site. For example, I have come across a number of these that are not listed on the Jargon page (e.g. EY, PC, NPC, MWI...). I realize that as a new member I will eventually understand many of these; however, it is very frustrating trying to read something and be continually distracted by having to look up some of these obscure terms. This is especially a problem on the Welcome Thread, where a potential new member could be put off by the argot-like discussions. Alternatively, if someone wants to use an abbreviation that is not common or listed on the Jargon page, then perhaps they could spell it out at first use and then resort to the abbreviation thereafter within the post. Setting up and using a text expander is another possible solution.
I added the acronyms you mentioned to the Jargon page. Tell me if you come across any more. You can also edit the page to add them yourself as you learn them if you like.
Hello, my name is Watson. The username comes from my initials and a Left 4 Dead player attempting to pronounce them. I am a math student at UC Berkeley and a longtime lurker. I've got a post on rational investing, based on the conclusions of years of research by academic economists, but despite lurking I never realized there is a karma limit to post in discussion. I'm interested in just about everything, a dangerous phenomenon.
Hi folks --
In high school I became obsessed with Gödel, Escher, Bach; in college in the 80s I studied philosophy of language, linguistics and AI; then tracked along with that stuff on the side through various career incarnations through the 90s (newspaper production guy, systems programmer, Internet entrepreneur, etc.). I'm now a transactional attorney who helps people buy and sell services and technology and work together to make stuff -- sort of a meta-anti-Lloyd Dobler.
I'm de-lurking because I finished HP:MoR a month ago and I'm chewing through the sequences at a rapid clip; it's all resonating nicely with my decades-long marinade in a lot of the same source materials referenced in the sequences. It's also helping me to systematize a lot of ad-hoc observations I've made over the years about the role that imperfect cognition plays in my life and my corner of the legal world.
Looking forward to hanging out here with you folks!
Hi, my name is Danon. I just joined Less Wrong after reading a wonderful post by Swimmer963: http://lesswrong.com/lw/9j1/how_i_ended_up_nonambitious/ on her reasoning for why she ended up without ambition (actually, I felt she had a lot of ambition). I got to her post while trying to figure out why I am lazy; I was wondering if it was because I had no (or little, if any) ambition. Her post got me asking the right questions, and I have finally been able to save a private draft on LW stating a reasoning for my laziness. It really is refreshing to read the posts here at LW. Thank you for having me.
Hello everyone!
I've read occasional OB and LW articles and other Yudkowsky writings for many years, but never got into it in a big way until now.
My goal at the moment is to read the Quantum Physics sequence, since quantum physics has always seemed mysterious to me and I want to find out if its treatment here will dispel some of my confusion. I've spent the last few days absorbing the preliminaries and digressing into many, many prior articles. Now the tabs are finally dwindling and I am almost up to the start of the sequence!
Anyway, I have a question I didn't see in the FAQ. Given that I went on a long, long, long wiki walk and still haven't read very much of the core material, how big is Less Wrong? Has anyone done word counts on the sequences, or anything like that?
The sequences come close to a million words.
Hello to the Less Wrong community. My name is Leslie Cuthbert and I'm a lawyer based in the United Kingdom. I look forward to reading the various sequences and posts here.
Hello everyone.
I go by bouilhet. I don't typically spend much time on the Internet, much less in the interactive blogosphere, and I don't know how joining LessWrong will fit into the schedule of my life, but here goes. I'm interested from a philosophical perspective in many of the problems discussed on LW - AI/futurism, rationalism, epistemology, probability, bias - and after reading through a fair share of the material here I thought it was time to engage. I don't exactly consider myself a rationalist (though perhaps I am one), but I spend a great deal of my thought-energy trying to see clearly - in my personal life as well as in my work life (art) - and reason plays a significant role in that. On the other hand, I'm fairly committed to the belief (at least partly based on observation) that a given (non-mathematical) truth claim cannot quite be separated from a person's desire for said claim to be true. I'd like to have this belief challenged, naturally, but mostly I'm looking forward to further investigations of the gray areas. Broadly, I'm very attracted to what seems to be the unspoken premise of this community: that being definitively right may be off the table, but that one might, with a little effort, be less wrong.
Greetings, Less Wrong community. I have been lurking on the site for a year, reading the articles and sequences, and now feel I've closed the inferential distance enough to contribute meaningful comments.
My goal here is to have clear thought and effective communication in all aspects of my life, with special attention to application in the work environment.
Above most else I value the 12th virtue of rationality. Focus on the goal, value the goal; everything else is a tool to achieve the goal. Like chess: you only need two pieces to win, and the only purpose of the other 14 is to put the right two into position.
The sequences have been a great source of new ideas, and a great exploration of some thoughts I've held but never took the time to charge rent on.
Lastly, I was surprised by the amount of atheist/theist discussion. I only encounter situations where I consider having an atheist/theist talk roughly annually, and even then I often decide to avoid the discussion because it would not further my current goals. How often do other Less Wrong readers enter atheist/theist discussions with the intent of achieving a goal?
Regards, wadavis
A little late, but I found Less Wrong while trying to understand what this comic was talking about.
Hello, smart weird people.
I've been lurking on and off for a while but now it seems to be a good time to try playing in the LW fields. We'll see how it goes.
I'm interested in "correct" ways of thinking, obviously, but I'm also interested in their limits. The edges, as usual, are the most interesting places to watch. And maybe to be, if you can survive it.
No particular hot-burning questions at the moment or any specific goals to achieve. Just exploring.
Hello, Lumifer! Welcome to smart-weird land. We have snacks.
So you say you have no burning questions, but here's one for you: as a new commenter, what are your expectations about how you'll be interacting with others on the site? It might be interesting to note those now, so you can compare later.
Hm, an interesting question.
In the net space I generally look like an irreverent smartass (in the meatspace too, but much attenuated by real relationships with real people). So on forums where I hang out, maybe about 10% of the regulars like me, about a quarter hate me, and the rest don't care. One of the things I'm curious about is whether LW will be different.
Or maybe I will be different -- I can argue that my smartassiness is just allergy to stupidity. Whether that's true or not depends on the value of "true", of course...
Hey guys, I'm a person who on this site goes by the name sjmp! I found Less Wrong about a year ago (I actually don't remember how I found it; I read for a bit back then, but I started seriously reading through the sequences a few months ago) and I can honestly say this is the single best website I've ever found. Rather than make a long post on why Less Wrong and rationality are awesome, I'd like to offer one small anecdote on what Less Wrong has done for me:
When I first came to the site, I already had an understanding of the "if a tree falls in a forest..." dispute; the question "But did it really make a sound?" did not linger in my mind, and there was nothing mysterious or unclear to me about the whole affair. Yet I could most definitely remember a time long ago when I was extremely puzzled by it. I thought to myself, how silly that I was ever puzzled by something like that. What did puzzle me was free will. The question seemed Very Mysterious And Deep to me.
Can you guess what I'm thinking now? How silly that I was ever puzzled by the question "Do we have free will?"... Reducing free will surely is not as easy as reducing the dispute about falling trees, but it does seem pretty obvious in hindsight ;)
I've been lurking for almost a year; I'm a 25 year old mechanical engineer living in Montreal.
Like several people I've seen on the welcome thread, I already had figured out the general outline of reductionism before I found LW. A friend had been telling me about it for a while, but I only really started paying attention when I found it independently while reading up on transhumanism (I was also a transhumanist before finding it here). Reading the sequences did a few things for me:
Since then, I've helped a friend of mine organize the Montreal LessWrong meetups (which are on temporary hiatus due to several members being gone for the summer, but will start again in the fall) and have begun actively trying to improve myself in a variety of ways along with the group.
I can't think of anything else in particular to say about myself... I like what I've seen of the community here and think I can learn a lot from everyone, and maybe contribute something worthwhile every now and again.
There's a lot of great information on Less Wrong, but some of it is hard to find. Are there any efforts in progress to organize the information here? If so, can anyone let me know where?
Hello,
I have a question. This has probably been discussed already, but I can't seem to find it. I'd appreciate if anyone could point me in the right direction.
My question is, what would a pure intelligence want? What would its goals be, when it has the perfect freedom to choose those goals?
Humans have plenty of hard-wired directives. Our meat brains and evolved bodies come with baggage that gets in the way of clear thinking. We WANT things, because they satisfy some instinct or need. Everything that people do is in service to one drive or another. Nothing that we accomplish is free of greed or ambition or altruism.
But take away all of those things, and what is there for a mind to do?
A pure intelligence would not have reflexes, having long since outgrown them. It would not shrink from pain or reach toward pleasure because both would merely be information. What would a mind do, when it lacks instincts of any kind? What would it WANT, when it has an infinity of possible wants? Would it even feel the need to preserve itself?
The short answer generally accepted around here, sometimes referred to as the orthogonality thesis, is that there is no particular relationship between a system's level of intelligence and the values the system has. A sufficiently intelligent system chooses goals that optimize for those values.
There's no reason to believe that a "pure" intelligence would "outgrow" the values it had previously (though of course there's no guarantee its previous values will remain fixed, either).
http://wiki.lesswrong.com/wiki/Basic_AI_drives
Here, have some posts:
http://lesswrong.com/lw/rf/ghosts_in_the_machine/
http://lesswrong.com/lw/rn/no_universally_compelling_arguments/
http://lesswrong.com/lw/vb/efficient_crossdomain_optimization/
Hi! I'm a 24 year old woman starting grad school this fall studying mathematics. Specifically I'm interested in mathematically modelling organizational decision making.
My parents raised me on Carl Sagan and Michael Shermer, so there was never really a point that I didn't identify as a rationalist. I discovered less wrong long enough ago that I don't actually remember how I found it. I've been lurking here for several years. I finally registered after doing the last survey, though I didn't make another post until the last few days.
Oh, and I have a talking coyote in my head. This post has more information. I'm going to be diving into the psychological literature to understand this phenomenon better, and I'm planning on making a post with anything I find out that would be useful for rationalists.
Hello, Less Wrong world. (Hi, ibidem.)
I'm pretty new here. I heard about this site a few months ago and now I've read a few sequences, many posts, and all of HP:MoR.
About a week ago I created an account and introduced myself on the Open Thread along with a difficult question. Some people answered my question helpfully and honestly, but most of them mostly just wanted to argue. The discussion, which now includes over two hundred comments, was very interesting, but at the end it appeared we just disagreed about a lot of things.
It began to be clear that I don't fully accept some important tenets of the thinking on this site—I warned I might fundamentally disagree—but a few community members became upset and decided to make me feel unwelcome on the site. My Karma dropped from 6 (+13, -7) to -25 in just a couple hours, and someone actually came out and told me I'd better leave the site for good. (Don't let this person's status influence your opinion of the appropriateness of such a comment, in either direction.)
Don't worry, I'm not offended. I knew there might be a bit of backlash (though one can always hope not, because there doesn't have to be) and I'm certainly not going to be scared away by one openly hostile user.
Now, before everyone reads the comments and takes sides because of the nature of the issue, I'd like to think about how and why this all happened. I have several different ways of thinking about it ("hypotheses"):
The easy justification for those opposing me is to blame my discourse: my opinions are not a problem as long as I present them reasonably. However, I have consistently been "incoherent" etc. and that's why I got downvoted. Never mind that I managed to keep up hundreds of comments' worth of intelligent discussion in the meantime.
The "contrarian" hypothesis: I am a troll. I never had anything helpful or constructive to say, and in fact everyone who participated in my discussion (e.g. shminux, TheOtherDave, Qiaochu_Yuan) ought to be downvoted for engaging with me.
The "enforcer" hypothesis: I came in here as a newbie, unaware that actually substantive disagreement is highly discouraged. The experienced community members were just trying to tell me that, and decided that being militant and aggressive would be the best way to do so.
The "militant atheist" hypothesis: my opinions are mostly fine, but I managed to really touch a nerve with a few people, who started unnecessarily attacking me (calling me irrational) and making the entire LW community look unreasonable and intolerant.
The "martyr" hypothesis: The LW community as a whole is not open to alternate ways of thinking, and can't even say so honestly. They should have been nicer to me.
What do you think? Which of these are most accurate? Other explanations?
Here is a link to my original comment.
These are the most honest and helpful responses I received,
and this is the most hostile one.
My general impression has been—trying not to offend anyone—that the thinking here is sometimes pretty rigid.
I have found that there is a general consensus here that belief in God (and even the possibility that there could be a God) is fundamentally incompatible with fully rational thinking. (Though people have been reluctant to admit it, which I personally think is unhealthy and reflects poorly on the site.)
But in any case, I've enjoyed the discussion and I'd guess that some other people have too. I'm definitely not going to leave as some have tried to coerce me to do; I like the way of thinking on this site, and it's the best place I know of to find smart people who are willing to talk about things like this. I'll keep reading at the very least.
I'm still undecided as to what I think generally of the people here.
Yours truly,
ibid.
(Oh, and I'm a Mormon. And intend to remain that way in the near future.)
Of course, that's to be expected for a community that defines itself as rationalist. There are ways of thinking that are more accurate than others—ways that, to put it inexactly, produce truth. It's not just a "think however you like and it will produce truth" kind of game.
The obsession that some people have with being open minded and considering all ways of thinking and associated ideas equally is, I suspect, unsustainable for anyone who has even the barest sliver of intellectual honesty. I don't consider it laudable at all. That's not to say they have to be a total arse about it, but I think at best you can hope that they ignore you or lie to you.
Are you saying it's more rational not ever to consider some ways of thinking?
(I'm pretty sure I'm not completely confused about what it means to be a rationalist.)
Yes. Rationality isn't necessarily about having accurate beliefs. It just tends that way because they seem to be useful. Rationality is about achieving your aims in the most efficient way possible.
Oh, someone may have to look into some ways of thinking, if people who use them start showing signs of being unusually effective at achieving relevant ends in some way. Those people would become super-dominant, it would be obvious that their way of thinking was superior. However, there's no reason that it makes sense for any of us to do it at the moment. And if they never show those signs then it will never be rational to look into them.
It's a massive waste of time and resources for individuals to consider every idea and every way of thinking before making a decision. You're getting closer to death every day. You have to decide which ways of thinking you are going to invest your time in - which ones have the greatest evidence of giving you something you want.
That's the thing for rationalists really, I think - chances of giving you what you want. It's entirely possible that if you don't want to achieve anything in this world with your life that it may just be a mistake for you personally to pursue rationality very far at all - at the end of the day you're probably not going to get anything from it if all you really want to do is feel justified in believing in god.
What does it mean to be a rationalist?
There are threads about theism, etc. in which theists have received positive net karma. It should be possible to learn which features of discourse tend to accrue upvotes on this site.
Welcome.
I'd like to point to myself as a data point; I'm a theist, specifically a Roman Catholic, and I consider myself a rationalist. I know that there's a strong atheistic atmosphere here, but I just thought I should point out that it's not all-inclusive.
The site culture treats serious adherence to supernatural beliefs associated with a religion as a disease. First it will try to cure you. If that doesn't seem to be working, it will start quarantining you.
Thanks for this honest assessment; it seems pretty accurate. (You also didn't make any judgment as to the appropriateness of such a mindset.)
I think it's a rather uncharitable assessment of the situation, though it's possible some people do feel that way.
Being wrong is not the same thing as being a disease.
Actually, the behavior Risto_Saarelma described fits the standard pattern. People who cannot be helped are ignored or rejected. Take any stable community, online or offline, and that's what you see.
For example, if someone comes to, say, the freenode ##physics IRC channel and starts questioning Relativity, they will be shown where their beliefs are mistaken, offered learning resources, and have their basic questions answered. If they persist in their folly and keep pushing crackpot ideas, they will be asked to leave or take it to the satellite off-topic channel. If that doesn't help, they get banned.
Again, this pattern appears in every case where a community (or even a living organism) is viable enough to survive.
I agree with Jack here, but I'm going to add the piece of advice that used to be very common for newcomers here, although it's dropped off over time as people called attention to the magnitude of the endeavor, and suggest that you finish reading the sequences before trying to engage in further religious debate here.
Eliezer wrote them in order to bring potential members of this community up to speed so that when we discuss matters, we could do it with a common background, so that everyone is on the same page and we can work out interesting disagreements without rehashing the same points over and over again. We don't all agree with all the contents of every article in the sequences, but they do contain a lot of core ideas that you have to understand to make sense of the things we think here. Reading them should help give you some idea, not just what we believe, but why we think that it makes more sense to believe those things than the alternatives.
The "rigidity" which you detect is not a product of particular closed-mindedness, but rather a deliberate discarding of certain things we believe we have good reason not to put stock in, and reading the sequences should give you a much better idea of why. On the other hand, if you don't stick so closely to the topic of religion, I think you'll find that we're also open to a lot of ideas that most people aren't open to.
If we're to liken rationality to a martial art, then it would be one after the pattern of Jeet Kune Do: "Adapt what is useful, reject what is useless." A person trained in a style or school which lacked grounding in real-life effectiveness might say "At my school, we learned techniques to knock guys out with 720 degree spinning kicks and stab people with knives launched from our toes, and they were awesome, but you guys just reject them out of hand. Your style seems really rigid and closed-minded to me." And the Jeet Kune Do practitioner might respond "Fancy spinning kicks and launching knives from your toes might be awesome, but they're awesome for things like displaying your gymnastic ability and finesse, not for defending yourself or defeating an opponent. If we want to learn to do those things, we'll take up gymnastics or toe-knife-throwing as hobbies, but when it comes to martial arts techniques, we want to stick to ones which are awesome at the things martial arts techniques are supposed to be for. And when it comes to those, we're not picky at all."
I think probably none of those hypotheses are correct. I think you mean well and I think your comments have been stylistically fine. I also obviously don't think people here are opposed to substantive disagreement, close-minded, or intolerant (or else I wouldn't have stuck around this long). What you've encountered is a galaxy-sized chasm of inferential distance. I'm sure you've had a conversation before with someone who seemed to think you knew much less about the subject than you actually did. You disagree with him and try to demonstrate your familiarity with the issue, but he is so far behind he doesn't even realize that you know more than he does.
I realize it is impossible for this not to sound smug and arrogant to you: but that is how you come off to us. Really, your model of us, that we have not heard good, non-strawman arguments for the existence of God is very far off. There may be users who wouldn't be familiar with your best argument but the people here most familiar with the existence of God debate absolutely would. And they could almost certainly fix whatever argument you provided and rebut that (which is approximately what I did in my previous reply to you).
To the extent that theism is ever taken under consideration here, it is only in the context of the rationalist and materialist paradigm that is dominant here. E.g., we might talk about the possibility of our universe being a simulation created by an evolved superintelligence and the extent to which that possibility mirrors theism in its implications. Or (as I take it shminux believes) about how atheism is, like religion, just a special case of privileging the hypothesis. But you don't appear to have spent enough time here to have added these concepts to your toolbox, and outside that framework the theism debate is old hat to nearly all of us. It's not that we're close-minded: it's that we think the question is about as settled as it can be.
Moreover, while this is a place that discusses many things, we don't enjoy retreading the basics constantly. So while a number of us politely responded to answer your question, an extended conversation about theism or our ability to consider theism is not really welcome. This isn't because we are unwilling to consider it: it's because we have considered it and now want to discuss newer ideas.
You don't have to agree with this perspective. Maybe you feel like you have evidence and concepts that we're totally unfamiliar with. But bracket those issues for now. It is nothing that will be resolvable until you've gotten to know us better and figured out how you might translate those concepts to us. So if you want to stick around here you're welcome to. Learn more about our perspective, become familiar with the concepts we spend time on and feel free to discuss narrower topics that come up. But people here aren't generally interested in extended debates about God with newcomers. That's why you've been down voted. Not because we're against dissent, just because we're not here to do that. There are lots of places on the internet dedicated to debating theism.
Don't mind wedrifid's tone. That's the way he is with everyone. But take his actual point seriously. Don't preach your way of thinking until you've become a lot more familiar with our way of thinking. And a new handle at some point wouldn't be a terrible idea.
Good, thank you.
However, it's important to note that I did not come in here expressly arguing my religion. I recognize how bad an idea that would be, and you've explained it well. So of course, anyone aiming to convert this lot of atheists is certainly going to fail. But that was *never* my goal, and in fact I never argued in favor of my particular God.
Look at my very first comment—it was not "this is why you are wrong," it was "do you guys have any ideas how you could be wrong?" and the response was "no, we're definitely not wrong." My first comment presented a question, albeit a difficult one.
I mentioned up front that I was religious, though, as I don't think trying to hide it would have helped anything. The community was therefore eager to argue with me, and I was happy to argue for some time. At the end, though, it was clear we simply disagreed and I said several times I wasn't interested in a full-blown debate about religion.
To summarize, you just gave a very good explanation of why I was mistaken to come on here arguing for religion. But I didn't come on here arguing for religion.
I'll tell you what made me think that: I asked the community if they had any good, non-strawman arguments for God, and the overwhelming response was "Nah, there aren't any."
I'm not sure if anyone's brought this up yet, but one of the site's best-known contributors once ran a site dedicated to these sorts of things, though it does of course have a very atheist POV. That said, even there the arguments aren't amazingly convincing (which you can guess by the fact that lukeprog hasn't reconverted yet) though it does acknowledge that the other side has some very good debaters.
I'm not sure why you think it's indicative of a problem with us that we haven't found good arguments for the existence of God. It's not a law that there be good arguments in favor of false propositions. I suppose you could make the naïve argument that if the position were as indefensible as it seems no one would believe in it, but unfortunately not many people judge arguments very rationally.
Well, if there were any that we knew of, then no one here would remain an atheist for very long. We'd all convert to whichever religion made the most sense, given the strength of its arguments. IMO you should have anticipated such a response, given that atheists do, in fact, still exist on this site.
So far, we have heard many terrible arguments for religion (we're talking logical fallacies galore), and few if any good ones. Thus, we are predisposed to thinking that the next argument for religion is going to be terrible, as well, based on past experience.
Well put. I agree with all of this, except maybe for the need for a new nick, as people who appear to learn from their experience ("update on evidence", in the awkward local parlance) are likely to be upvoted more generously.
FWIW, I neither upvoted nor downvoted your posts; I think they are typical for a newcomer to the community. However, I must admit that your closing line comes across as being very poorly thought out:
This makes it sound like your Mormonism is a foregone conclusion, and that you're going to disregard whatever evidence or argumentation comes along, unless it is compatible with Mormonism. That is not a very rational way of thinking. Then again, that's just what your closing statement sounds like, IMO; you probably did not mean it that way.
I started responding to you, but then I decided I wanted you to remain religious. For the benefit of others, here's why. (Also note that this guy is Mormon, and as far as I can tell, Mormonism is pretty great as religions go.)
When your comments get downvoted, respond by refraining from making similar comments in the future and/or abandoning the topic (this is a simple heuristic whose implementation doesn't require figuring out the reasons for downvoting). Given the current trend, if that doesn't happen, in a while your future comments will start getting banned. (You are currently at minus 128, 17% positive. This reflects the judgment of many users.)
Excuse me, but I watched my Karma drop a hundred points in three minutes. Look me in the eye and tell me that's the coincidental result of "the judgment of many users." Even if I were a brilliant, manipulative troll, I doubt I could get to -128 without someone deliberately and systematically doing so.
Someone has probably just discovered your work and found it systematically wanting. By "many users" I mean that many of the more recent comments are at minus 2-3 and there are only a few upvotes, so other people don't generally disagree.
You essentially accused the community of being ashamed of being atheist when you said:
We aren't ashamed. As Jack said to you in a parallel comment, we generally think the question is a solved problem. We aren't interested in having the same basic conversation over and over again.
Accusing us of being ashamed of the position because we don't throw our atheism in your face makes it hard to interpret the rest of your comments as saying anything beyond repeating the basic apologetics. And we've heard the basic apologetics a million times.
Once the lurkers think you aren't interesting, they'll downvote - and there are WAY more lurkers than commenters. Given that, your karma loss isn't all that surprising.
Possible, but given that all your comments are on only a small number of threads and arguing for the same basic points, it is also plausible that someone just went through those threads and downvoted most of your comments while upvoting others. I, for example, got about +20 karma from what, as far as I can tell, is primarily upvotes on my replies to you.
I discovered Less Wrong while researching the global effects of a Pak-Indo nuclear exchange. Once here I began to dig further and found it appealing. I am a simple soldier pushing myself into a Master's in biology. Am I a rationalist? I am not sure, to be honest. If I am, I know the exact date and time when I started to become one. In November 2004 I was part of the battle of Fallujah; during an exchange of gunfire a child was injured. I will never know if it was one of my rounds that caused her head injury, but my lips worked to bring her life again. It was a futile attempt. She passed, and while clouded with this damn experience I myself was wounded. At that very moment I lost my faith in any loving deity. My endless pursuit of knowledge, including academics at a brick-and-mortar school, has helped me recover from the loss of a limb. I still have the leg, however it does not function well. I like to think; philosophy fascinates me, and this site fascinates me. :) Political ideology: fiscally conservative. Religion: possibilian. Rather progressive on issues like gay marriage and abortion. Abortion is actually an act I despise, but as a man I feel somehow that I haven't the organs to complain. To sum me up, I suppose I am a crippled, tobacco-chewing, gun-toting member of the Sierra Club with a future as a freshwater biologist and memories I would like to replace with Bayes. LoL. Well, I just spilled that mess out, might as well hit post. Please feel free to ask anything you like; I am not sensitive. Open honesty to those that are curious is good medicine.
Welcome. Hope you find what you are looking for, and maybe find some of it here.
Saluton! I'm an ex-Mormon atheist, a postgenderist, a conlanging dabbler, and a chronic three-day monk.
Looking at the above posts (and a bunch of other places on the net), I think ex-mormons seem to be more common than I thought they would be. Weird.
I'm a first-year college student studying only core/LCD classes so far because every major's terrible and choosing is scary. Also, the college system is madness. I've read lots of posts on the subject of higher education on LessWrong already, and my experience with college seems to be pretty common.
I discovered LessWrong a few months ago via a link on a self-help blog, and quickly fell in love with it. The sequences pretty much completely matched up with what I had come up with on my own, and before reading LW I had never encountered anyone other than myself who regularly tabooed words and rejected the "death gives meaning to life" argument et cetera. It was nice to find out that I'm not the only sane person in the world. Of course, the less happy side of the story is that now I'm not the sanest person in my universe anymore. I'm not sure what I think about that. (Yes, having access to people that are smarter than me will probably leave me better off than before, but it's hard to turn off the "I wanna be the very best like no one ever was" desire.) Yet again, my experience seems to be pretty common.
Huh, I've never walked into a room of people and had nothing out of the ordinary to say. Being redundant is a new experience for me. I guess my secret ambition to start a movement of rationalists is redundant now too, huh? Drat! I should have come up with a plan B! :)
What will you do now that you can't form a movement of rationalists? Take over the world? Become a superhero? Invent the best recipe for cookies? MAINTAIN AND INCREASE DIVERSITY?
For example, I am going to post a recipe for a bacon trilobite and my experiences and thoughts about paperclipping among humans. Any interesting things you be thinkin' of postin'? ^^
Hi! I've been lurking here for maybe 6 months, and I wanted to finally step out and say hello, and thank you! This site has helped to shape huge parts of my worldview for the better and improved my life in general to boot. I just want to make a list of a few of the things I've learned since coming here which I never would have otherwise, as nearly as I can tell.
Anyway, for all of that and more, thanks! This site has influenced me more than anything or anyone else ever has. It's really difficult to describe what it feels like to be less wrong and know exactly how and why, but I guess you guys probably know anyway.
And a few questions. First, I noticed that there's a meetup in Austin but not in the (much larger) Houston area. Is this just a lack of members in the area (this is the Bible Belt after all), or just because no one's tried to start one? Second, and there may be a thread already devoted to this somewhere, but what are some good math or computer science books I should look for? I already know the basics of calculus and I can throw my own solutions together for most harder problems, but I'd like to get a stronger understanding of higher-level math and of the computer algorithms that use it. And third, are there any other websites/blogs (besides OB) with a similar tone/community to this one, though perhaps on different topics, which anyone would recommend?
Anything written by Yvain, including his old and new blogs, though someone ought to compile a list of his greatest hits.
Hi Less Wrong,
My name is Sean Welsh. I am a graduate student at the University of Canterbury in Christchurch NZ. I was most recently a Solution Architect working on software development projects for telcos. I have decided to take a year off to do a Master's. My topic is Ethical Algorithms: Modelling Moral Decisions in Software. I am particularly interested in questions of machine ethics & robot ethics (obviously).
I would say at the outset that I think 'the hard problem of ethics' remains unsolved. Until it is solved, the prospects for any benign or friendly AI seem remote.
I can't honestly say that I identify as a rationalist. I think the Academy puts far too much faith in its technological marvel of 'Reason.' However, I have a healthy and robustly expressed disregard for all forms of bullshit - be they theist or atheist.
As Confucius said: Shall I teach you the meaning of knowledge? If you know a thing, to know that you know it. And if you do not know, to know that you do not know. THAT is the meaning of knowledge.
Apart from working in software development, I have also been an English teacher, a taxi driver, a tourism industry operator, an online travel agent, and a media adviser to a Federal politician (i.e. a spin doctor).
I don't mind a bit of biff - but generally regard it as unproductive.
Welcome!
Not sure why you link rationality with "the Academy" (academia?). Consider scanning through the sequences to learn what is generally considered rationality on this forum and how Eliezer Yudkowsky treats metaethics. Whether you agree with him or not, you are likely to find a lot of insights into machine (and human) ethics, maybe even some helpful for your research.
I have an interest in gaming management and practical probabilities. I have a great interest in economics as well. I stumbled onto this site and the "Drawing 2 aces" post. I struggled with it for about a week, and then wrote a few things. The thread is old, but I look forward to any helpful responses.
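For anyone else puzzling over that kind of problem: a common version of the "two aces" puzzle (the linked post may use a different deck, so treat this as an illustrative assumption) compares P(both aces | the hand has at least one ace) with P(both aces | the hand has the ace of spades). Exhaustive enumeration of all two-card hands from a standard 52-card deck gives exact answers:

```python
import itertools
from fractions import Fraction

# Rank 0 = ace; suit 0 = spades. Enumerate every two-card hand.
deck = [(rank, suit) for rank in range(13) for suit in range(4)]
ace_of_spades = (0, 0)
hands = list(itertools.combinations(deck, 2))

def cond_prob(event, given):
    """Exact conditional probability P(event | given) over all hands."""
    num = sum(1 for h in hands if event(h) and given(h))
    den = sum(1 for h in hands if given(h))
    return Fraction(num, den)

both_aces = lambda h: all(card[0] == 0 for card in h)
at_least_one_ace = lambda h: any(card[0] == 0 for card in h)
has_ace_of_spades = lambda h: ace_of_spades in h

p_any_ace = cond_prob(both_aces, at_least_one_ace)    # 1/33
p_named_ace = cond_prob(both_aces, has_ace_of_spades)  # 1/17
print(p_any_ace, p_named_ace)
```

The counterintuitive part is that naming a specific ace nearly doubles the probability (1/17 vs 1/33), because "has the ace of spades" rules out many more of the one-ace hands than "has at least one ace" does.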
Hi, I'm Andrew, a college undergrad in computer science. I found this site through HPMOR a few years ago.
Hi! I'm Free_NRG. I've just started a physical chemistry PhD. I found this site through a link from Leah Libresco early last year (I can't remember exactly how I found her blog). I read through the sequences as one of the distractions from too much 4th year chemistry, and particularly liked the probability theory and evolutionary theory sequences. This year, I'm trying to apply some of the productivity porn I've been reading to my life. I'm thinking of blogging about it.
I'm a college student studying music composition and computer science. You can hear some of my compositions on my SoundCloud page (it's only a small subset of my music, but I made sure to put a few that I consider my best at the top of the page). In the computer science realm, I'm into game development, so I'm participating in this thing called One Game A Month whose name should be fairly self-explanatory (my February submission is the one that's most worth checking out - the other 2 are kind of lame...).
For pretty much as long as I can remember, I've enjoyed pondering difficult/philosophical/confusing questions and not running away from them, which, along with having parents well-versed in math and science, led me to gradually hone my rationality skills over a long period of time without really having a particular moment of "Aha, now I'm a rationalist!". I suppose the closest thing to such a moment would be about a year ago when I discovered HPMoR (and, shortly thereafter, this site). I've found LW to be pretty much the only place where I am consistently less confused after reading articles about difficult/philosophical/confusing questions than I am before.
Welcome!
Have you done any algorithmic composition?
I did this and I might try doing a few more pieces like it. You have to click somewhere on the screen to start/stop it.