How I Ended Up Non-Ambitious

113 Swimmer963 23 January 2012 11:50PM

I have a confession to make. My life hasn’t changed all that much since I started reading Less Wrong. Hindsight bias makes it hard to tell, I guess, but I feel like pretty much the same person, or at least the person I would have evolved towards anyway, whether or not I spent those years reading about the Art of rationality.

But I can’t claim to be upset about it either. I can’t say that rationality has undershot my expectations. I didn’t come to Less Wrong expecting, or even wanting, to become the next Bill Gates; I came because I enjoyed reading it, just like I’ve enjoyed reading hundreds of books and websites. 

In fact, I can’t claim that I would want my life to be any different. I have goals and I’m meeting them: my grades are good, my social skills are slowly but steadily improving, I get along well with my family, my friends, and my boyfriend. I’m in good shape financially despite making $12 an hour as a lifeguard, and in a year and a half I’ll be making over $50,000 a year as a registered nurse. I write stories, I sing in church, I teach kids how to swim. Compared to many people my age, I'm pretty successful. In general, I’m pretty happy.

Yvain suggested akrasia as a major limiting factor for why rationalists fail to have extraordinarily successful lives. Maybe that’s true for some people; maybe there are some readers and posters on LW who have big, exciting, challenging goals that they consistently fail to reach because they lack motivation and procrastinate. But that isn’t true for me. Though I can’t claim to be totally free of akrasia, it hasn’t gotten much in the way of my goals. 

However, there are some assumptions that go too deep to be accessed by introspection, or even by LW meetup discussions. Sometimes you don't even realize they’re assumptions until you meet someone who assumes the opposite, and try to figure out why they make you so defensive. At the community meetup I described in my last post, a number of people asked me why I wasn’t studying physics, since I was obviously passionate about it. Trust me, I had plenty of good justifications for them–it’s a question I’ve been asked many times–but the question itself shouldn’t have made me feel attacked, and it did.

Aside from people in my life, there are some posts on Less Wrong that cause the same reaction of defensiveness. Eliezer’s Mandatory Secret Identities is a good example; my automatic reaction was “well, why do you assume everyone here wants to have a super cool, interesting life? In fact, why do you assume everyone wants to be a rationality instructor? I don’t. I want to be a nurse.”

After a bit of thought, I’ve concluded that there’s a simple reason why I’ve achieved all my life goals so far (and why learning about rationality failed to affect my achievements): they’re not hard goals. I’m not ambitious. As far as I can tell, not being ambitious is such a deep part of my identity that I never even noticed it, though I’ve used the underlying assumptions as arguments for why my goals and life decisions were the right ones.

continue reading »

So You Want to Save the World

41 lukeprog 01 January 2012 07:39AM

This post is very out-of-date. See MIRI's research page for the current research agenda.

So you want to save the world. As it turns out, the world cannot be saved by caped crusaders with great strength and the power of flight. No, the world must be saved by mathematicians, computer scientists, and philosophers.

This is because the creation of machine superintelligence this century will determine the future of our planet, and in order for this "technological Singularity" to go well for us, we need to solve a particular set of technical problems in mathematics, computer science, and philosophy before the Singularity happens.

The best way for most people to save the world is to donate to an organization working to solve these problems, an organization like the Singularity Institute or the Future of Humanity Institute.

Don't underestimate the importance of donation. You can do more good as a philanthropic banker than as a charity worker or researcher.

But if you are a capable researcher, then you may also be able to contribute by working directly on one or more of the open problems humanity needs to solve. If so, read on...

continue reading »

Prediction is hard, especially of medicine

47 gwern 23 December 2011 08:34PM

Summary: medical progress has been much slower than even recently predicted.

In the February and March 1988 issues of Cryonics, Mike Darwin (Wikipedia/LessWrong) and Steve Harris published a two-part article “The Future of Medicine” attempting to forecast the medical state of the art for 2008. Darwin has republished it on the New_Cryonet email list.

Darwin is a pretty savvy forecaster (you will remember him correctly predicting, in 1981’s “The High Cost of Cryonics”/part 2, ALCOR’s recent troubles with grandfathering), so given my standing interest in tracking predictions, I read it with great interest; but they still blew most of their predictions, and not the ones we would have preferred them to blow.

The full essay is ~10k words, so I will excerpt roughly half of it below; feel free to skip to the reactions section and other links.

continue reading »

Compressing Reality to Math

20 Vaniver 15 December 2011 12:07AM

This is part of a sequence on decision analysis and follows 5 Axioms of Decision-Making, which explains how to turn a well-formed problem into a solution. Here we discuss turning reality into a well-formed problem. There are three basic actions I'd like to introduce, and then work through some examples.

continue reading »

Drawing Less Wrong: Technical Skill

26 Raemon 05 December 2011 05:12AM

This is the fifth post of the Drawing Less Wrong mini sequence, in which I discuss how to draw, how learning to draw *effectively* relates to rationality, and what the initial results were when I started running a drawing workshop, teaching people with essentially no experience.

Information here is a combination of lessons I've learned from numerous art teachers who all agree with each other, and some of my own observations that I'm pretty confident about. When I talk about "how the brain does things" I'm using a mix of folk psychology and guesses based on my limited knowledge of neuroscience, which may not be technically accurate but should be sufficient to make useful predictions.
 
Previous posts include the Introduction, "Should you Learn to Draw?", "An Overview of Skills", and "Observing Reality."



Technique

The ability to observe is probably at least 2/3rds of what separates non-artists from amateur artists. But those 2/3rds are near-useless without the ability to move your pencil the way your eyes want it to go. And once you've transitioned into an amateur artist, around 9,000 hours of honing your technical skill is what separates you from a professional.

"Technical Skill" is a broad term - a catch-all for the various motor skills you'll need to develop, background knowledge about how particular types of lines and shapes are perceived by most humans, and the know-how to combine those skills and knowledge to produce particular effects in your drawing.

I can't even begin to cover all of it, and most of it isn't really appropriate for Less Wrong. But I will talk about some key motor skills that tie in with the next article, and a significant bias that plays a role in them.

This article was challenging to write - distilling a kinesthetic process into written words is difficult. This article will not be a substitute for having a teacher and a model, nor will it tell you exactly what exercises to do. But it will try to lay down some concepts that I'll further expound on later.

continue reading »

Funnel plots: the study that didn't bark, or, visualizing regression to the null

47 gwern 04 December 2011 11:05AM

Marginal Revolution linked a post at Genomes Unzipped, "Size matters, and other lessons from medical genetics", with the interesting centerpiece graph:

[Figure: a funnel plot of genetic studies, showing the null result approached as sample size increases]

This is from pg 3 of an Ioannidis et al. 2001 article (who else?) on what is called a funnel plot: each line represents a series of studies on some particularly hot gene-disease correlation, plotted with Y = the odds ratio (a measure of effect size; all results are 'statistically significant', of course) and X = the sample size. The line at 1 represents the null hypothesis. You will notice something dramatic: as we move along the X-axis and sample sizes increase, everything begins to converge on 1:

Readers familiar with the history of medical association studies will be unsurprised by what happened over the next few years: initial excitement (this same polymorphism was associated with diabetes! And longevity!) was followed by inconclusive replication studies and, ultimately, disappointment. In 2000, 8 years after the initial report, a large study involving over 5,000 cases and controls found absolutely no detectable effect of the ACE polymorphism on heart attack risk. In the meantime, the same polymorphism had turned up in dozens of other association studies for a wide range of traits ranging from obstet­ric cholestasis to menin­go­­coccal disease in children, virtually none of which have ever been convincingly replicated.

(See also "Why epidemiology will not correct itself" or the DNB FAQ.)
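The convergence pattern in the funnel plot falls straight out of sampling noise: when there is no real effect, small studies produce odds ratios scattered widely around 1, while large studies hug it. A minimal simulation sketch (entirely hypothetical data, not the studies in the Ioannidis figure):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_odds_ratio(n, exposure_p=0.3):
    """Simulate one null case-control study: cases and controls
    share the same true exposure probability, so the true OR is 1."""
    exposed_cases = rng.binomial(n, exposure_p)
    exposed_controls = rng.binomial(n, exposure_p)
    # Haldane correction (+0.5 per cell) avoids division by zero
    odds_cases = (exposed_cases + 0.5) / (n - exposed_cases + 0.5)
    odds_controls = (exposed_controls + 0.5) / (n - exposed_controls + 0.5)
    return odds_cases / odds_controls

small = [simulated_odds_ratio(50) for _ in range(200)]    # 50 per arm
large = [simulated_odds_ratio(5000) for _ in range(200)]  # 5000 per arm

spread_small = np.std(np.log(small))
spread_large = np.std(np.log(large))
print(spread_small, spread_large)  # small studies scatter far more widely
```

Plotting each simulated odds ratio against its sample size reproduces the funnel shape: a wide mouth of spurious "effects" at small n, narrowing toward 1 as n grows.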

continue reading »

Objections to Coherent Extrapolated Volition

11 XiXiDu 22 November 2011 10:32AM

In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.

— Eliezer Yudkowsky, May 2004, Coherent Extrapolated Volition

Foragers versus industry era folks

Consider the difference between a hunter-gatherer, who cares about his hunting success and about becoming the new tribal chief, and a modern computer scientist who wants to determine whether a “sufficiently large randomized Conway board could turn out to converge to a barren ‘all off’ state.”

The utility of success in hunting down animals, or in proving abstract conjectures about cellular automata, is largely determined by factors such as your education, culture, and environmental circumstances. The same forager who cared about killing a lot of animals, to get the best mates in his clan, might under different circumstances have turned out to be a vegetarian mathematician caring solely about his understanding of the nature of reality. The two sets of values are to some extent mutually exclusive, or at least disjoint. Yet both sets of values are what the person wants, given the circumstances. Change the circumstances dramatically and you change the person’s values.

What do you really want?

You might conclude that what the hunter-gatherer really wants is to solve abstract mathematical problems, he just doesn’t know it. But there is no set of values that a person “really” wants. Humans are largely defined by the circumstances they reside in.

  • If you already knew a movie, you wouldn’t watch it.
  • To be able to get your meat from the supermarket changes the value of hunting.

If “we knew more, thought faster, were more the people we wished we were, and had grown up closer together,” then we would stop desiring what we had learned, wish to think even faster, become even more different people, and grow bored of and rise above the people similar to us.

A singleton is an attractor

A singleton will inevitably change everything by causing a feedback loop between itself as an attractor and humans and their values.

Many of our values and goals, much of what we want, are culturally induced or the result of our ignorance. Reduce our ignorance and you change our values. One trivial example is our intellectual curiosity: if we no longer need to figure out what we want on our own, our curiosity is impaired.

A singleton won’t extrapolate human volition but will instead implement an artificial set of values as a result of abstract higher-order contemplations about rational conduct.

With knowledge comes responsibility, with wisdom comes sorrow

Knowledge changes and introduces terminal goals. The toolkit called ‘rationality’, the rules and heuristics developed to help us achieve our terminal goals, also alters and deletes them. A stone-age hunter-gatherer seems to possess very different values than we do; learning about rationality and various ethical theories such as utilitarianism would alter those values considerably.

Rationality was meant to help us achieve our goals, e.g. become a better hunter. Rationality was designed to tell us what we ought to do (instrumental goals) to achieve what we want to do (terminal goals). Yet what actually happens is that we are told, and come to learn, what we ought to want.

If an agent becomes more knowledgeable and smarter, this does not leave its goal-reward system intact unless that system is specifically designed to be stable. An agent who originally wanted to become a better hunter and feed his tribe might end up wanting to eliminate poverty in Obscureistan. The question is: how much of this new “wanting” is the result of using rationality to achieve terminal goals, and how much is a side effect of using rationality? How much is left of the original values, versus values induced by a feedback loop between the toolkit and its user?

Take, for example, an agent facing the Prisoner’s Dilemma. Such an agent might originally tend to cooperate, and only after learning about game theory decide to defect and gain a greater payoff. Was it rational for the agent to learn about game theory, in the sense that it helped the agent achieve its goal, or in the sense that it deleted one of its goals in exchange for an allegedly more “valuable” one?

Beware rationality as a purpose in and of itself

It seems to me that becoming more knowledgeable and smarter gradually alters our utility functions. But what are we approaching if the extrapolation of our volition becomes a purpose in and of itself? Extrapolating our coherent volition will distort or alter what we really value, by installing a new cognitive toolkit designed to achieve an equilibrium between us and other agents with the same toolkit.

Would a singleton be a tool that we use to get what we want, or would the tool use us to do what it does? Would we be modeled, or would it create models? Would we be extrapolating our volition, or rather following our extrapolations?

(This post is a write-up of a previous comment designated to receive feedback from a larger audience.)

Drawing Less Wrong: An Introduction

33 Raemon 13 November 2011 10:39PM

This post begins a mini-sequence that discusses how to draw, reports on an experiment about teaching people how to draw, and examines how rationality and good drawing practices are related. (As it turns out, a fair amount)

continue reading »

2011 Less Wrong Census / Survey

77 Yvain 01 November 2011 06:28PM

The final straw was noticing a comment referring to "the most recent survey I know of" and realizing it was from May 2009. I think it is well past time for another survey, so here is one now.

Click here to take the survey

I've tried to keep the structure of the last survey intact so it will be easy to compare results and see changes over time, but there were a few problems with the last survey that required changes, and a few questions from the last survey that just didn't apply as much anymore (how many people have strong feelings on Three Worlds Collide these days?)

Please try to give serious answers that are easy to process by computer (see the introduction). And please let me know as soon as possible if there are any security problems (people other than me who can access the data) or any absolutely awful questions.

I will probably run the survey for about a month unless new people stop responding well before that. Like the last survey, I'll try to calculate some results myself and release the raw data (minus the people who want to keep theirs private) for anyone else who wants to examine it.

Like the last survey, if you take it and post that you took it here, I will upvote you, and I hope other people will upvote you too.

A few analogies to illustrate key rationality points

50 kilobug 09 October 2011 01:00PM

Introduction

Due to long inferential distances, it's often very difficult to use knowledge or understanding gained from rationality in a discussion with someone who isn't versed in the Art (some poor soul who hasn't read the Sequences, or maybe not even Gödel, Escher, Bach!). So I often find myself forced to use analogies, which will necessarily be more-or-less surface analogies; they don't prove anything nor give any technical understanding, but they allow someone to grasp a complicated issue in a few minutes.

A tale of chess and politics

Once upon a time, a boat sank and a group of people found themselves isolated on an island. None of them knew the rules of the game of chess, but there was a solar-powered portable chess computer on the boat. A very simple one, with no AI, but one that would enforce the rules. Quickly, the survivors discovered the joy of chess, deducing the rules by trying moves and seeing the computer say "illegal move" or "legal move", seeing it proclaim victory, defeat, or a draw.

So they learned the rules of chess: the movement of the pieces, what "check" and "checkmate" are, how you can promote pawns, and so on. And they understood the planning and strategy skills required to win the game. So chess became linked to politics; it was the Game, with a capital letter, and every year they would organize a chess tournament, and the winner, the smartest of the community, would become the leader for one year.

One sunny day, a young fellow named Hari, playing with his brother Salvor (yes, I'm an Asimov fan), discovered a new chess move: he discovered he could castle. In one move, he could free his rook and protect his king. They kept the discovery secret, and used it in the tournament. Winning his games, Hari became the leader.

Soon after, people started to use the power of castling as much as they could. They even sacrificed pieces, even their queen, just to be able to castle fast. Everyone was trying to castle as fast as they could, and they were losing sight of the final goal (winning) for the intermediate goal (castling).

continue reading »
