
[Link] Putanumonit - Convincing people to read the Sequences and wondering about "postrationalists"

10 Jacobian 28 September 2016 04:43PM

Leaving LessWrong for a more rational life

33 [deleted] 21 May 2015 07:24PM

You are unlikely to see me posting here again, after today. There is a saying here that politics is the mind-killer. My heretical realization lately is that philosophy, as generally practiced, can also be mind-killing.

As many of you know, I am, or was, running a twice-monthly Rationality: From AI to Zombies reading group. One of the things I wanted to include in each reading group post was a collection of contrasting views. To research such views I found myself listening, during my commute, to talks given by other thinkers in the field, e.g. Nick Bostrom, Anders Sandberg, and Ray Kurzweil, and by people I feel are doing “ideologically aligned” work, like Aubrey de Grey, Christine Peterson, and Robert Freitas. Some of these were talks I had seen before, or views I had generally been exposed to in the past. But looking through the lens of learning and applying rationality, I came to a surprising (to me) conclusion: it was the philosophical thinkers who demonstrated the largest and most costly mistakes. On the other hand, de Grey and others who work primarily on the scientific and/or engineering challenges of singularity and transhumanist technologies were far less likely to make epistemic mistakes of significant consequence.

Philosophy as the anti-science...

What sort of mistakes? Most often, reasoning by analogy. To cite a specific example: one of the core underlying assumptions of the singularity interpretation of super-intelligence is that, just as a chimpanzee would be unable to predict what a human intelligence would do or how we would make decisions (aside: how would we know? were any chimps consulted?), we would be equally inept in the face of a super-intelligence. This argument is, however, nonsense. The human capacity for abstract reasoning over mathematical models is in principle a fully general intelligent behaviour, as the scientific revolution has shown: no aspect of the natural world has remained beyond the reach of human understanding once a sufficient amount of evidence became available. The wave-particle duality of quantum physics, or the 11-dimensional space of string theory, may defy human intuition, i.e. our built-in intelligence. But we have proven ourselves perfectly capable of understanding the logical implications of models which employ them. We may not be able to build intuition for how a super-intelligence thinks. Maybe; that's not proven either. But even if that is so, we will be able to reason about its intelligent behaviour in advance, just as string theorists are able to reason about 11-dimensional space-time without using their evolutionarily derived intuitions at all.

This post is not about the nature of super-intelligence itself; that was merely my choice of an illustrative example of a category of mistakes too often made by those with a philosophical background rather than one in the empirical sciences: reasoning by analogy instead of building and analyzing predictive models. The fundamental mistake here is that an analogy is not in itself a sufficient explanation of a natural phenomenon, because it says nothing about the context sensitivity or insensitivity of the original example, or under what conditions it may or may not hold true in a different situation.

A successful physicist or biologist or computer engineer would have approached the problem differently. A core part of being successful in these areas is knowing when you have insufficient information to draw conclusions. If you don't know what you don't know, then you can't know when you might be wrong. To be an effective rationalist, it is often not important to answer “what is the calculated probability of that outcome?” The better first question is “what is the uncertainty in my calculated probability of that outcome?” If the uncertainty is too high, then the data supports no conclusions. And the way you reduce uncertainty is to build models for the domain in question and empirically test them.
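The point about asking for the uncertainty in a calculated probability, rather than the probability alone, can be sketched with a small Monte Carlo exercise. Everything here is a made-up toy model; the formula and the parameter ranges are invented placeholders, not anyone's actual estimates:

```python
import random

# A hypothetical toy model: the "calculated probability" of an outcome
# depends on two inputs we are uncertain about.
def calculated_probability(base_rate, modifier):
    return max(0.0, min(1.0, base_rate * modifier))

# Rather than plugging in point estimates, sample the inputs from the
# ranges we actually believe and look at the spread of the output.
random.seed(0)
estimates = sorted(
    calculated_probability(random.uniform(0.01, 0.30),  # wide prior
                           random.uniform(0.5, 2.0))
    for _ in range(10_000)
)
low, high = estimates[500], estimates[9500]  # central 90% of samples
print(f"90% interval for the 'calculated probability': {low:.3f} to {high:.3f}")
```

If the resulting interval is wide, the honest conclusion is the one the post argues for: the point estimate by itself carries little information, and the productive next step is narrowing the input uncertainty empirically.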

The lens that sees its own flaws...

Coming back to LessWrong and the sequences. In the preface to Rationality, Eliezer Yudkowsky says his biggest regret is that he did not make the material in the sequences more practical. The problem is in fact deeper than that. The art of rationality is the art of truth seeking, and empiricism is part and parcel of truth seeking. Lip service is paid to empiricism throughout, but in all the “applied” sequences relating to quantum physics and artificial intelligence it appears to be forgotten. Instead, we get definitive conclusions drawn from thought experiments alone. It is perhaps not surprising that these sequences seem the most controversial.

I have for a long time been concerned that those sequences in particular promote some ungrounded conclusions. I had thought that, while annoying, this was perhaps a one-off mistake that was fixable. Recently I have realized that the underlying cause runs much deeper: what is taught by the sequences is a form of flawed truth-seeking (thought experiments favored over real-world experiments) which inevitably results in errors, and the errors I take issue with in the sequences are merely examples of this phenomenon.

And these errors have consequences. Every single day, 100,000 people die of preventable causes, and every day we continue to risk extinction of the human race at unacceptably high odds. There is work that could be done now to alleviate both of these issues. But within the LessWrong community there is actually outright hostility to work that has a reasonable chance of alleviating suffering (e.g. artificial general intelligence applied to molecular manufacturing and life-science research) due to concerns arrived at by flawed reasoning.

I now regard the sequences as a memetic hazard, one which may at the end of the day be doing more harm than good. One should work to develop one's own rationality, but I now fear that the approach taken by the LessWrong community, as a continuation of the sequences, may produce the opposite result. The anti-humanitarian behaviors I observe in this community are not the result of its initial conditions but of the process itself.

What next?

How do we fix this? I don't know. On a personal level, I am no longer sure engagement with such a community is a net benefit. I expect this to be my last post to LessWrong. It may happen that I check back in from time to time, but for the most part I intend to try not to. I wish you all the best.

A note about effective altruism…

One shining light of goodness in this community is the focus on effective altruism: doing the most good for the most people as measured by some objective means. This is a noble goal, and the correct goal for a rationalist who wants to contribute to charity. Unfortunately it too has been poisoned by incorrect modes of thought.

Existential risk reduction, the argument goes, trumps all other forms of charitable work, because reducing the chance of extinction by even a small amount has far more expected utility than accomplishing all other charitable works combined. The problem lies in estimating the likelihood of extinction, and in selecting actions to reduce existential risk. There is so much uncertainty regarding what we know, and so much uncertainty regarding what we don't know, that it is impossible to determine with any accuracy the expected risk of, say, unfriendly artificial intelligence creating perpetual suboptimal outcomes, or what effect charitable work in the area (e.g. MIRI) is having in reducing that risk, if any.
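The expected-value argument being criticized is easy to state in code, and so is the problem with it: the answer is dominated by inputs that are themselves guesses. All numbers below are hypothetical placeholders, not estimates anyone has defended:

```python
LIVES_AT_STAKE = 7e9  # roughly the world population; future generations ignored

def expected_lives_saved(p_extinction, p_reduction_from_donation):
    """Expected lives saved by a donation that reduces extinction risk."""
    return LIVES_AT_STAKE * p_extinction * p_reduction_from_donation

# Inputs that are all consistent with our ignorance give wildly
# different answers for the same donation:
scenarios = [(1e-1, 1e-3), (1e-3, 1e-6), (1e-6, 1e-9)]
for p_ext, p_red in scenarios:
    ev = expected_lives_saved(p_ext, p_red)
    print(f"P(extinction)={p_ext:g}, risk reduction={p_red:g} "
          f"-> {ev:g} expected lives")
```

The three scenarios differ by eleven orders of magnitude, which is exactly the complaint: until the uncertainty in the inputs is reduced empirically, the expected-value calculation settles nothing.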

This is best explored through an example of existential risk done right. Asteroid and cometary impacts are perhaps the category of external (non-human-caused) existential risk that we know the most about, and have done the most to mitigate. When it was recognized that impactors were a risk to be taken seriously, we acknowledged what we did not know about the phenomenon: What were the orbits and masses of Earth-crossing asteroids? We built telescopes to find out. What is the material composition of these objects? We built space probes and collected meteorite samples to find out. How damaging would an impact be for various material properties, speeds, and incidence angles? We built high-speed projectile test ranges to find out. What could be done to change the course of an asteroid found to be on a collision course? We have executed at least one impact probe and monitored the effect it had on the comet's orbit, and we have on the drawing board probes that will use gravitational mechanisms to move their targets. In short, we identified what it is that we don't know and sought to resolve those uncertainties.

How then might one approach an existential risk like unfriendly artificial intelligence? By identifying what it is we don't know about the phenomenon, and seeking to experimentally resolve that uncertainty. What relevant facts do we not know about (unfriendly) artificial intelligence? Well, much of our uncertainty about the actions of an unfriendly AI could be resolved if we knew more about how such agents construct their thought models, and relatedly what languages are used to construct their goal systems. We could also stand to benefit from more practical information (experimental data) about the ways in which AI boxing works and the ways in which it does not, and how much that depends on the structure of the AI itself. Thankfully there is an institution doing that kind of work: the Future of Life Institute (not MIRI).

Where should I send my charitable donations?

Aubrey de Grey's SENS Research Foundation.

100% of my charitable donations are going to SENS. Why they do not get more play in the effective altruism community is beyond me.

If you feel you want to spread your money around, here are some non-profits which I have vetted for doing reliable, evidence-based work on singularity technologies and existential risk:

  • Robert Freitas and Ralph Merkle's Institute for Molecular Manufacturing does research on molecular nanotechnology. They are the only group working on the long-term Drexlerian vision of molecular machines, and they publish their research online.
  • Future of Life Institute is the only existential-risk AI organization which is actually doing meaningful evidence-based research into artificial intelligence.
  • B612 Foundation is a non-profit seeking to launch a spacecraft with the capability to detect, to the extent possible, ALL Earth-crossing asteroids.

I wish I could recommend a skepticism, empiricism, and rationality promoting institute. Unfortunately I am not aware of an organization which does not suffer from the flaws I identified above.

Addendum regarding unfinished business

I will no longer be running the Rationality: From AI to Zombies reading group, as I am no longer able or willing in good conscience to host it, or to participate in this site, even from my typically contrarian point of view. Nevertheless, I am enough of a libertarian that I feel it is not my role to put up roadblocks for others who wish to delve into the material as it is presented. So if someone wants to take over the role of organizing these reading groups, I would be happy to hand over the reins to that person. If you think that person should be you, please leave a reply in another thread, not here.

EDIT: Obviously I'll stick around long enough to answer questions below :)

Rationality Reading Group: Fake Beliefs (p43-77)

9 [deleted] 07 May 2015 09:07AM

This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.


Welcome to the Rationality reading group. This week we discuss the sequence Fake Beliefs which introduces the concept of belief in belief and demonstrates the phenomenon in a number of contexts, most notably as it relates to religion. This sequence also foreshadows the mind-killing effects of tribalism and politics, introducing some of the language (e.g. Green vs. Blue) which will be used later.

This post summarizes each article of the sequence, linking to the original LessWrong posting where available, and offers a few relevant notes, thoughts, and ideas for further investigation. My own thoughts and questions for discussion are in the comments.

Reading: Sequence B: Fake Beliefs (p43-77)


B. Fake Beliefs

11. Making beliefs pay rent (in anticipated experiences). Belief networks which have no connection to anticipated experience are called “floating” beliefs. Floating beliefs provide no benefit, as they do not constrain predictions in any way. Ask of a belief: what do you expect to see if it is true? Or better yet, what do you expect not to see, i.e. what evidence would falsify the belief? Every belief should flow to a specific anticipated experience, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it. (p45-48)

12. A fable of science and politics. A narrative story cautioning against the dangers that come from emotional attachment to beliefs. Introduces the Greens vs. Blues, a fictional debate illustrating the biases which emerge from the tribalism of group politics. (p49-53)

13. Belief in belief. Through the story of someone who claims a dragon lives in their garage (an invisible, inaudible, impermeable dragon which defies all attempts at detection), we are introduced to the concept of belief in belief. The dragon claimant believes that there is a fire-breathing flying animal in his garage, but simultaneously expects to make no observations that would confirm that belief. Belief in belief turns into a form of mental jujutsu in which mental models are transfigured in the face of experiment so as to predict whatever would be expected if the belief were not, in fact, true. (p54-58)

14. Bayesian judo. A humorous story illustrating the inconsistency of belief in belief, and the mental jujutsu required to maintain such beliefs. (p59-60)

15. Pretending to be wise. There's a difference between: (1) passing neutral judgment; (2) declining to invest marginal resources in investigating the sides of a debate; and (3) pretending that either of the above is a mark of deep wisdom, maturity, and a superior vantage point. Propounding neutrality is just as attackable as propounding any particular side. (p61-64)

16. Religion's claim to be non-disprovable. It is only a recent development in Western thought that religion is something which cannot be proven or disproven. Many examples are provided of falsifiable beliefs which were once the domain of religion. (p65-68)

17. Professing and cheering. Much of modern religion can be thought of as communal profession of belief – actions and words which signal your belief to others. (p69-71)

18. Belief as attire. It is very easy for a human being to genuinely, passionately, gut-level belong to a group. Identifying with a tribe is a very strong emotional force, and once you get people to identify with a tribe, the beliefs which are the attire of that tribe will be spoken with the full passion of belonging to it. (p72-73)

19. Applause lights. Sometimes statements are made in the form of proposals which themselves present no meaningful suggestion, e.g. “We need to balance the risks and opportunities of AI.” This is not so much a propositional statement as the equivalent of the “Applause” light that tells a studio audience when to clap. Most applause lights can be detected by a simple reversal test: “We shouldn't balance the risks and opportunities of AI.” Since the reversal sounds abnormal, the unreversed statement is probably normal, implying it does not convey new information. (p74-77)

 


This has been a collection of notes on the assigned sequence for this week. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Sequence C: Noticing Confusion (p79-114). The discussion will go live on Wednesday, 20 May 2015 at or around 6pm PDT (hopefully), right here on the discussion forum of LessWrong.

Rationality Reading Group: Introduction and A: Predictably Wrong

11 [deleted] 17 April 2015 01:40AM

This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.


Welcome to the Rationality reading group. This week we discuss the Preface by primary author Eliezer Yudkowsky, Introduction by editor & co-author Rob Bensinger, and the first sequence: Predictably Wrong. This sequence introduces the methods of rationality, including its two major applications: the search for truth and the art of winning. The desire to seek truth is motivated, and a few obstacles to seeking truth--systematic errors, or biases--are discussed in detail.

This post summarizes each article of the sequence, linking to the original LessWrong posting where available, and offers a few relevant notes, thoughts, and ideas for further investigation. My own thoughts and questions for discussion are in the comments.

Reading: Preface, Biases: An Introduction, and Sequence A: Predictably Wrong (pi-xxxv and p1-42)


Introduction

Preface. Introduction to the ebook compilation by Eliezer Yudkowsky. Retrospectively identifies mistakes of the text as originally presented. Some have been corrected in the ebook; others stand as-is. Most notably, the book focuses too much on belief and too little on practical actions, especially with respect to our everyday lives. Establishes that the goal of the project is to teach rationality: those ways of thinking which are common among practicing scientists and foundational to the Enlightenment, yet not systematically organized or taught in schools.

Biases: An Introduction. Editor & co-author Rob Bensinger motivates the subject of rationality by explaining the dangers of systematic errors caused by *cognitive biases*, which the arts of rationality are intended to de-bias. Rationality is not about Spock-like stoicism -- it is about simply "doing the best you can with what you've got." The System 1 / System 2 dual-process dichotomy is explained: if our errors are systematic and predictable, then we can instill behaviors and habits to correct them. A number of exemplar biases are presented. However, a warning: it is difficult to recognize biases in your own thinking even after learning of them, and knowing about a bias may grant unjustified overconfidence that you yourself do not fall prey to such mistakes in your thinking. To develop as a rationalist, actual experience is required, not just learned expertise. Ends with an introduction of the editor and an overview of the organization of the book.

A. Predictably Wrong

1. What do I mean by "rationality"? Rationality is a systematic means of forming true beliefs and making winning decisions. Probability theory is the set of laws underlying rational belief, "epistemic rationality": it describes how to process evidence and observations to revise ("update") one's beliefs. Decision theory is the set of laws underlying rational action, "instrumental rationality", independent of what one's goals and available options are. (p7-11)

2. Feeling rational. Becoming more rational can diminish feelings or intensify them. If one cares about the state of the world, one should expect to have an emotional response to the acquisition of truth. "That which can be destroyed by the truth should be," but also "that which the truth nourishes should thrive." The commonly perceived dichotomy between emotions and "rationality" is more often one between fast perceptual judgements (System 1, emotional) and slow deliberative judgements (System 2, "rational"). But both systems can serve the goal of truth, or defeat it, depending on how they are used. (p12-14)

3. Why truth? and... Why seek the truth? Curiosity: to satisfy an emotional need to know. Pragmatism: to accomplish some specific real-world goal. Morality: to be virtuous, or fulfill a duty to truth. Curiosity motivates a search for the most intriguing truths, pragmatism the most useful, and morality the most important. But be wary of the moral justification: "To make rationality into a moral duty is to give it all the dreadful degrees of freedom of an arbitrary tribal custom. People arrive at the wrong answer, and then indignantly protest that they acted with propriety, rather than learning from their mistake." (p15-18)

4. ...what's a bias, again? A bias is an obstacle to truth, specifically one produced by our own thinking processes. We describe biases as failure modes which systematically prevent typical human beings from determining truth or selecting the actions that would best achieve their goals. Biases are distinguished from mistakes which originate from false beliefs or brain injury. To better seek truth and achieve our goals, we must identify our biases and do what we can to correct for or eliminate them. (p19-22)

5. Availability. The availability heuristic is judging the frequency or probability of an event by the ease with which examples of the event come to mind. If you think you've heard about murders twice as much as suicides then you might suppose that murder is twice as common as suicide, when in fact the opposite is true. Use of the availability heuristic gives rise to the absurdity bias: events that have never happened are not recalled, and hence deemed to have no probability of occurring. In general, memory is not always a good guide to probabilities in the past, let alone to the future. (p23-25)

6. Burdensome details. The conjunction fallacy occurs when humans rate the probability of two events together as higher than the probability of either event alone: adding detail can make a scenario sound more plausible, even though the event as described necessarily becomes less probable. Possible fixes include training yourself to notice the addition of details and discount appropriately, thinking about other reasons why the central idea could be true besides the added detail, or training oneself to prefer simpler explanations -- to feel every added detail as a burden. (p26-29)

7. Planning fallacy. The planning fallacy is the mistaken belief that human beings are capable of making accurate plans. The source of the error is that we tend to imagine how things will turn out if everything goes according to plan, and do not appropriately account for possible troubles or difficulties along the way. The typically adequate solution is to compare the new project to broadly similar previous projects undertaken in the past, and ask how long those took to complete. (p30-33)

8. Illusion of transparency: why no one understands you. The illusion of transparency is our bias to assume that others will understand the intent behind our attempts to communicate. The source of the error is that we do not sufficiently consider alternative frames of mind or personal histories, which might lead the recipient to alternative interpretations. Be not too quick to blame those who misunderstand your perfectly clear sentences, spoken or written. Chances are, your words are more ambiguous than you think. (p34-36)

9. Expecting short inferential distances. Human beings are generally capable of processing only one piece of new information at a time. Worse, we instinctively assume that someone who says something with no obvious support is a liar or an idiot, and that if we say something blatantly obvious and the other person doesn't see it, they're the idiot. This is our bias toward explanations of short inferential distance. A clear argument has to lay out an inferential pathway, starting from what the audience already knows or accepts. If at any point you make a statement without obvious justification in arguments you've previously supported, the audience just thinks you're crazy. (p37-39)

10. The lens that sees its own flaws. We humans have the ability to introspect on our own thinking processes, a seemingly unique skill among life on Earth. As a consequence, a human brain is able to understand its own flaws--its systematic errors, its biases--and apply second-order corrections to them. (p40-42)
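The conjunction rule behind entry 6 above (adding a detail can never raise a scenario's probability) can be checked mechanically. A minimal sketch, using randomly generated events over a toy sample space:

```python
import random

# Empirically check the conjunction rule P(A and B) <= P(A) on
# randomly generated events over a small finite sample space.
random.seed(1)
space = range(1000)

for _ in range(100):
    a = set(random.sample(space, random.randint(1, 999)))
    b = set(random.sample(space, random.randint(1, 999)))
    p_a = len(a) / len(space)
    p_ab = len(a & b) / len(space)  # the conjunction "A and B"
    assert p_ab <= p_a  # adding a detail never raises the probability

print("P(A and B) <= P(A) held in all 100 random trials")
```

No matter which events are drawn, the intersection can never be more probable than either conjunct, which is why every added detail should be felt as a burden.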


It is at this point that I would generally like to present an opposing viewpoint. However I must say that this first introductory sequence is not very controversial! Educational, yes, but not controversial. If anyone can provide a link or citation to one or more decent non-strawman arguments which oppose any of the ideas of this introduction and first sequence, please do so in the comments. I certainly encourage awarding karma to anyone that can do a reasonable job steel-manning an opposing viewpoint.


This has been a collection of notes on the assigned sequence for this week. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Sequence B: Fake Beliefs (p43-77). The discussion will go live on Wednesday, 6 May 2015 at or around 6pm PDT, right here on the discussion forum of LessWrong.

Rationality: From AI to Zombies online reading group

36 [deleted] 21 March 2015 09:54AM

Update: When I posted this announcement I remarkably failed to make the connection that April 15th is tax day here in the US, and, as a prime example of the planning fallacy (a topic of the first sequence!), I failed to anticipate just how complicated my taxes would be this year. The first post of the reading group is basically done but a little rushed, and I want to take an extra day to get it right. Expect it to post the next day, the 16th.

 

On Thursday, 16 April 2015, just under a month out from this posting, I will hold the first session of an online reading group for the ebook Rationality: From AI to Zombies, a compilation of the LessWrong sequences by our own Eliezer Yudkowsky. I would like to model this on the very successful Superintelligence reading group led by KatjaGrace. This is advance warning, so that you have a chance to get the ebook, make a donation to MIRI, and read the first sequence.

The point of this online reading group is to join with others to ask questions, discuss ideas, and probe the arguments more deeply. It is intended to add to the experience of reading the sequences in their new format, or for the first time. It is intended to supplement the discussion that has already occurred in the original postings and the sequence reruns.

The reading group will 'meet' in a semi-monthly post on the LessWrong discussion forum. For each 'meeting' we will read one sequence from the Rationality book, which contains a total of 26 lettered sequences. A few of the sequences are unusually long, and these might be split into two sessions. If so, advance warning will be given.

In each posting I will briefly summarize the salient points of the essays comprising the sequence, link to the original articles and discussion when possible, attempt to find, link to, and quote one or more related materials or opposing viewpoints from outside the text, and present a half-dozen or so question prompts to get the conversation rolling. Discussion will take place in the comments. Others are encouraged to provide their own question prompts or unprompted commentary as well.

We welcome both newcomers and veterans on the topic. If you've never read the sequences, this is a great opportunity to do so. If you are an old timer from the Overcoming Bias days then this is a chance to share your wisdom and perhaps revisit the material with fresh eyes. All levels of time commitment are welcome.

If this sounds like something you want to participate in, then please grab a copy of the book and get started reading the preface, introduction, and the 10 essays / 42 pages which comprise Part A: Predictably Wrong. The first virtual meeting (forum post) covering this material will go live before 6pm Thursday PDT (1am Friday UTC), 16 April 2015. Successive meetings will start no later than 6pm PDT on the first and third Wednesdays of a month.

Following this schedule, it is expected to take just over a year to complete the entire book. If you prefer flexibility, come by any time! And if you are coming upon this post from the future, please feel free to leave your opinions as well. The discussion period never closes.

Topic for the first week is the preface by Eliezer Yudkowsky, the introduction by Rob Bensinger, and Part A: Predictably Wrong, a sequence covering rationality, the search for truth, and a handful of biases.

Announcement: The Sequences eBook will be released in mid-March

47 RobbBB 03 March 2015 01:58AM

The Sequences are being released as an eBook, titled Rationality: From AI to Zombies, on March 12.

We went with the name "Rationality: From AI to Zombies" (based on shminux's suggestion) to make it clearer to people — who might otherwise be expecting a self-help book, or an academic text — that the style and contents of the Sequences are rather unusual. We want to filter for readers who have a wide-ranging interest in (/ tolerance for) weird intellectual topics. Alternative options tended to obscure what the book is about, or obscure its breadth / eclecticism.

 

The book's contents

Around 340 of Eliezer's essays from 2009 and earlier will be included, collected into twenty-six sections ("sequences"), compiled into six books:

  1. Map and Territory: sequences on the Bayesian conceptions of rationality, belief, evidence, and explanation.
  2. How to Actually Change Your Mind: sequences on confirmation bias and motivated reasoning.
  3. The Machine in the Ghost: sequences on optimization processes, cognition, and concepts.
  4. Mere Reality: sequences on science and the physical world.
  5. Mere Goodness: sequences on human values.
  6. Becoming Stronger: sequences on self-improvement and group rationality.

The six books will be released as a single sprawling eBook, making it easy to hop back and forth between different parts of the book. The whole book will be about 1,800 pages long. However, we'll also be releasing the same content as a series of six print books (and as six audio books) at a future date.

The Sequences have been tidied up in a number of small ways, but the content is mostly unchanged. The largest change is to how the content is organized. Some important Overcoming Bias and Less Wrong posts that were never officially sorted into sequences have now been added — 58 additions in all, forming four entirely new sequences (and also supplementing some existing sequences). Other posts have been removed — 105 in total. The following old sequences will be the most heavily affected:

  • Map and Territory and Mysterious Answers to Mysterious Questions are being merged, expanded, and reassembled into a new set of introductory sequences, with more focus placed on cognitive biases. The name 'Map and Territory' will be re-applied to this entire collection of sequences, constituting the first book.
  • Quantum Physics and Metaethics are being heavily reordered and heavily shortened.
  • Most of Fun Theory and Ethical Injunctions are being left out. Taking their place will be two new sequences on ethics, plus the modified version of Metaethics.

I'll provide more details on these changes when the eBook is out.

Unlike the print and audio-book versions, the eBook version of Rationality: From AI to Zombies will be entirely free. If you want to purchase it from the Kindle Store and download it directly to your Kindle, it will also be available on Amazon for $4.99.

To make the content more accessible, the eBook will include introductions I've written up for this purpose. It will also include a link to a LessWrongWiki glossary, which I'll be recruiting LessWrongers to help populate with explanations of references and jargon from the Sequences.

I'll post an announcement to Main as soon as the eBook is available. See you then!

Karma awards for proofreaders of the Less Wrong Sequences ebook

6 alexvermeer 12 December 2013 12:18AM

MIRI is gathering a bunch of Eliezer’s writings into a nicely-edited ebook, currently titled The Hard Part is Actually Changing Your Mind. This book will ultimately be released in various digital formats (Kindle MOBI, EPUB, and PDF). Much of the initial work for this project is complete. What we need now are volunteers to review the book's chapters to:

  • verify that all the content has been correctly transferred (text, equations, and images),
  • proofread for any typographical errors (spelling, punctuation, layout, etc.),
  • verify all internal and external links,
  • and more.

This project has been added to Youtopia, MIRI’s volunteer system. (Click “Register as a Volunteer” here to sign up. Already signed up? Go here.)

LW Karma Bonus

For this special project, every point earned in Youtopia will also earn you 3 karma on LW!

Points are awarded based on the amount of time spent proofreading the book. For example, an hour of work logged in Youtopia earns you 10 points, which will also get you 30 LW karma. Karma is awarded by admins in a publicly-accountable way: all manual karma additions are listed here.

Questions about this project can be directed to alexv@intelligence.org or in the comments.

Teaching rationality to kids?

9 chaosmage 16 October 2013 12:38PM

I'm finally getting around to reading "Thinking, Fast and Slow". Much of it I had already learned on LW and elsewhere. Maybe that's why my strongest impression from the book is how accessible it is. Simple sentences, clear and vivid examples, easy-to-follow exercises, a remarkable lack of references to topics not explained right away.

I caught myself thinking "This is a book I should have read as a kid". In my first language, I think I could have managed it as early as 11 years old. Since measured IQ is strongly influenced by habits of thinking and cognitive returns can be reinvested, I'm sure I would be smarter now if I had.

So I have decided to buy a stack of these books and give them to kids on their, say, 12th birthdays. Then maybe Dan Dennett's "Intuition Pumps" a year later - and HPMOR a year after that? I would like to see more suggestions from you guys.

It should be obviously better to start even earlier. So how do you teach rationality to a nine-year-old? Or a seven-year-old? Has anybody done something like that? Please name books, videos or web sites.

If such media are not available, creating them should be low-hanging fruit in the quest to raise the global IQ and sanity waterline. ELI5 writing is very learnable, after all, and ELI5 type interpretations of, say, the sequences, might be helpful for adults too.

Another Anki deck for Less Wrong content

14 MondSemmel 22 August 2013 07:31PM

Anki decks of Less Wrong content have been shared here before. However, they felt a bit huge (one deck was >1500 cards) and/or not helpful to me. As I go through the sequences, I create Anki cards, and I've decided they are at a point where I can share them. Maybe someone else will benefit from them.

Current content: The deck currently consists of 186 Anki cards (82 Q&A, 104 cloze deletion), covering the following Less Wrong sequences: The Map and the Territory, Mysterious Answers to Mysterious Questions, How to Actually Change Your Mind, A Human's Guide to Words, and Reductionism.
All cards contain an extra field for their source, usually 1-2 Less Wrong posts, rarely a link to Wikipedia. Some mathy cards use LaTeX. I don't know what happens if you don't have LaTeX installed. Though if this is a problem, I think I can convert the LaTeX code to images with an Anki plugin.

Important caveats:

  1. My cards tend to have more context than those I've seen in most other decks, to the point that one might consider them overloaded with information. That's partly due to personal preference, and partly because I need as much context as possible so I memorize more than just a teacher's password.
  2. In contrast to previously shared Anki decks of Less Wrong content, I do not aim to make this deck comprehensive. Rather, I create cards only for content that I understood, that seems suitable for memorization, and that seemed particularly useful to me. Conversely, I did not create cards when I couldn't think of a way to memorize something, or when I did not understand (the usefulness of) something. (For instance, Original Seeing and Priming and Contamination did not work for me.)
  3. I've tried a few shared decks so far, and everybody seems to create cards differently. So I'm not sure to which extent this deck can be useful to anyone who isn't me.

Open question: I'm still not sure to which extent I'm memorizing internalized and understood knowledge with these cards, and to which extent they are just fake explanations or attempts to guess at passwords.

And a final disclaimer: The content is mostly taken verbatim from Yudkowsky's sequences, though I've often edited the text so it fit better as an Anki card. I checked the cards thoroughly before making the deck public, but any remaining errors are mine.

I'm thankful for suggestions and other feedback.

Help us name the Sequences ebook

14 lukeprog 15 April 2013 07:59PM

 

Quantum Computing Since Democritus got me thinking that we may want a more riveting title for The Sequences, 2006–2009, the ebook we're preparing for release (like the FtIE ebook). Maybe it could be something like [Really Catchy Title]: The Less Wrong Sequences, 2006–2009.

The reason for "2006–2009" is that Highly Advanced Epistemology 101 for Beginners will be its own ebook, and future Yudkowskian LW sequences (if there are any) won't be included either.

 

Example options:

 

  • The Craft of Rationality: The Less Wrong Sequences, 2006–2009
  • The Art of Rationality: The Less Wrong Sequences, 2006–2009
  • Becoming Less Wrong: The Sequences, 2006–2009

In the end, we might just call it The Sequences, 2006–2009, but I'd like to check whether somebody else can come up with a better name.

Suggestions?

(Update on 5/5/2013 is here.)


Looking for alteration suggestions for the official Sequences ebook

13 alexvermeer 16 October 2012 10:32PM

As you may have heard, the Singularity Institute is in the process of creating an official ebook version of The Sequences (specifically, Eliezer's Major Sequences written between 2006 and 2009). 

Now is an opportune time to make any alterations to the contents of the Sequences. We're looking for suggestions about:

  1. Posts to add to the Sequences. E.g., "scope insensitivity" is not currently part of any sequence; perhaps it should be? Preferably suggest a specific location, or at least a specific sequence where you think the addition would logically go.
  2. Posts to remove from the Sequences. Are there redundant or unnecessary posts? To call the Sequences long is a bit of an understatement.
  3. Alternatives to "The Sequences" as a title, such as "How to be Less Wrong: The Sequences, 2006–2009."

Put separate suggestions in separate comments so that specific changes can be discussed. All suggestions will be reviewed, with final changes made by Eliezer. Next thing you know, you'll be sipping a hot mocha in your favorite chair while reading about Death Spirals on your handy e-reader.

The Sequences that will be present in the ebook:

Call for volunteers: Publishing the Sequences

13 wedrifid 28 June 2012 03:08PM

The Singularity Institute is in the process of publishing Eliezer Yudkowsky’s Sequences of rationality posts as an electronic book. The Sequences comprise several hundred posts. These are being downloaded and converted to LaTeX programmatically for publishing, and that’s where the human tasks begin. These will entail:

  • Verifying that all the content has been transferred, including all text, equations and images.
  • Proofreading for any typographical errors that may have escaped attention thus far.
  • Verifying that all external links are still alive (and replacing any that are not).
  • Creating a bibliography for all material referenced in the chapters (posts).

The recent document publishing efforts at SIAI would not have been possible without the assistance of dedicated volunteers. This new project is the perfect opportunity to help out Less Wrong while giving you an excuse to catch up on (or revisit) your reading of some foundational rational-thinking material. As an added bonus, every post reviewed will save the world with 3.5*epsilon probability.

We need volunteers who are willing to read some sequence posts and have an eye for detail. Anyone interested in contributing should contact me at cameron.taylor [at] singinst [dot] org.

For those more interested in academic papers we also have regular publications (and re-publications) that need proofreading and editing before they are released.

Less Wrong Sequences+Website feed app for Android

14 razor11 25 March 2012 05:27AM

I use my Android phone much more than my computer, and reading the Sequences on a mobile device is a pain. I needed an easy way to access the Sequences, but since there are no apps for this website I had to create one myself. Since I'm no app developer, I used the IBuildApp.com website (trustworthy, according to my research) to make the application.

Features:

* Read ALL of the main Sequences and most of the minor ones

* RSS feed to LessWrong.com for latest articles

* No ads!


Drawbacks:

* Requires an Internet connection: I individually copy-pasted each Sequence (from the compilations of posts that many people have made) to the app. Unfortunately, the app development website did not save these on the app itself, but on its server. So to access a Sequence, you require an Internet connection.

* Home screen doesn't look good, because I couldn't get an appropriately sized logo that the website would accept. The Index (where you access the Sequences) looks pretty neat though.


If there are any mobile app developers here, please try to make a better version of it (hopefully one where data is saved offline). I made this for personal use, so it's functional but could be done much better by a professional. I'm posting it here for other Android-using people (especially newbies like me) who might find this useful.

Download Link: http://174.142.192.87/builds/00101/101077/apps/LessWrongSequences.apk

Beyond the Reach of God, Abridged for Spoken Word

10 Raemon 06 December 2011 06:49PM

Previously, I posted a version of The Gift We Give Tomorrow that was designed to be read aloud. It was significantly abridged, and some portions reworded to flow better from the tongue. I recently finished another part of my project: An abridged version of Beyond the Reach of God. This one doesn’t lend itself as well to something resembling “poetry,” so it’s more a straightforward editing job. The original was 3315 words. The new one is currently 1090. I’m still trying to trim it a little more, if possible. GWGT was 1245, which was around 7 minutes of speaking time, and pushing the limit of how long the piece can be.

For those who were concerned, after paring this down into a collection of some of the most depressing sentences I've ever read, I decided it was NOT necessary to end "Gift We Give Tomorrow" on an echo of this post (although I'm leaving in the part where I reword the "Shadowy Figure" to more directly reference it). That reading will end with the original "Ever so long ago."

 

Beyond the Reach of God:

I remember, from distant childhood, what it's like to live in the world where God exists. Really exists, the way that children and rationalists take all their beliefs at face value.

In the world where God exists, he doesn’t intervene to optimize everything. God won’t make you a sandwich. Parents don't do everything their children ask. There are good arguments against always giving someone what they desire.

I don't want to become a simple wanting-thing, that never has to plan or act or think.

But clearly, there's some threshold of horror, awful enough that God will intervene. I remember that being true, when I believed after the fashion of a child. The God who never intervenes - that's an obvious attempt to avoid falsification, to protect a belief-in-belief. The beliefs of young children really shape their expectations - they honestly expect to see the dragon in their garage. They have no reason to imagine a loving God who never acts. No loving parents, desiring their child to grow up strong and self-reliant, would let their toddler be run over by a car.

But what if you built a simulated universe? Could you escape the reach of God? Simulate sentient minds, and torture them? If God's watching everywhere, then of course trying to build an unfair world results in the God intervening - stepping in to modify your transistors. God is omnipresent. There’s no refuge anywhere for true horror.

Life is fair.

But suppose you ask the question: Given such-and-such initial conditions, and given such-and-such rules, what would be the mathematical result?

Not even God can change the answer to that question.

What does life look like, in this imaginary world, where each step follows only from its immediate predecessor? Where things only ever happen, or don't happen, because of mathematical rules? And where the rules don't describe a God that checks over each state? What does it look like, the world of pure math, beyond the reach of God?

That world wouldn't be fair. If the initial state contained the seeds of something that could self-replicate, natural selection might or might not take place. Complex life might or might not evolve. That life might or might not become sentient. That world might have the equivalent of conscious cows, that lacked hands or brains to improve their condition. Maybe they would be eaten by conscious wolves who never thought that they were doing wrong, or cared.

If something like humans evolved, then they would suffer from diseases - not to teach them any lessons, but only because viruses happened to evolve as well. If the people of that world are happy, or unhappy, it might have nothing to do with good or bad choices they made. Nothing to do with free will or lessons learned. In the what-if world, Genghis Khan can murder a million people, and laugh, and be rich, and never be punished, and live his life much happier than the average. Who would prevent it?

And if the Khan tortures people to death, for his own amusement? They might call out for help, perhaps imagining a God. And if you really wrote the program, God *would* intervene, of course. But in the what-if question, there isn't any God in the system. The victims will be saved only if the right cells happen to be 0 or 1. And it's not likely that anyone will defy the Khan; if they did, someone would strike them with a sword, and the sword would disrupt their organs and they would die, and that would be the end of that. 

So the victims die, screaming, and no one helps them. That is the answer to the what-if question.

...is this world starting to sound familiar?

Could it really be that sentient beings have died, absolutely, for millions of years... with no soul and no afterlife... not as any grand plan of Nature. Not to teach us about the meaning of life. Not even to teach a profound lesson about what is impossible.

Just dead. Just because.

Once upon a time, I believed that the extinction of humanity was not allowed. And others, who call themselves rationalists, may yet have things they trust. They might be called "positive-sum games", or "democracy", or “capitalism”, or "technology", but they’re sacred. They can't lead to anything really bad, not without a silver lining. The unfolding history of Earth can't ever turn from its positive-sum trend to a negative-sum trend. Democracies won't ever legalize torture. Technology has done so much good, that there can't possibly be a black swan that breaks the trend and does more harm than all the good up until this point.

Anyone listening, who still thinks that being happy counts for more than anything in life, well, maybe they shouldn't ponder the unprotectedness of their existence. Maybe think of it just long enough to sign up themselves and their family for cryonics, or write a check to an existential-risk-mitigation agency now and then. Or at least wear a seatbelt and get health insurance and all those other dreary necessary things that can destroy your life if you miss that one step... but aside from that, if you want to be happy, meditating on the fragility of life isn't going to help.

But I'm speaking now to those who have something to protect.

What can a twelfth-century peasant do to save themselves from annihilation? Nothing. Nature's challenges aren't always fair. When you run into a challenge that's too difficult, you suffer the penalty; when you run into a lethal penalty, you die. That's how it is for people, and it isn't any different for planets. Someone who wants to dance the deadly dance with Nature needs to understand what they're up against: Absolute, utter, exceptionless neutrality.

And knowing this might not save you. It wouldn't save a twelfth-century peasant, even if they knew. If you think that a rationalist who fully understands the mess they're in, must be able to find a way out - well, then you trust rationality. Enough said.

Still, I don't want to create needless despair, so I will say a few hopeful words at this point:

If humanity's future unfolds in the right way, we might be able to make our future fair(er). We can't change physics. But we can build some guardrails, and put down some padding.

Someday, maybe, minds will be sheltered. Children might burn a finger or lose a toy, but they won't ever be run over by cars. A super-intelligence would not be intimidated by a challenge where death is the price of a single failure. The raw universe wouldn't seem so harsh, would be only another problem to be solved.

The problem is that building an adult is itself an adult challenge. That's what I finally realized, years ago.

If there is a fair(er) universe, we have to get there starting from this world - the neutral world, the world of hard concrete with no padding. The world where challenges are not calibrated to your skills, and you can die for failing them.

What does a child need to do, to solve an adult problem?

An EPub of Eliezer's blog posts

40 ciphergoth 11 August 2011 02:20PM

Update 2015-03-21: I would now strongly recommend reading Rationality: From AI to Zombies over this. Though the blog posts I collected here are the starting point for that book, considerable work has gone into selecting and arranging the essays as well as adding thoughtful new material and useful material not in this collection. Only if you've already read that should you consider starting on this; you can always skip the essays you've already read.

This is all Eliezer's posts to Less Wrong up to the end of 2010 as an EPub. It can be read with Aldiko and other eBook readers, though you might have to jump through some hoops on the Kindle (I haven't tried it). I shared it privately with a few friends in the past, but I thought it might be more generally useful. One highlight: all the screwed-up Unicode is fixed, AFAIK.

Source code.

Update: have now made a MOBI for the Kindle too.

Updated 2011-08-13 17:20 BST: Now with images!

The Sequences in MP3 Format

12 r_claypool 08 July 2011 07:40PM

I can drive and listen, but I can't drive and read!  The same is true for most kinds of exercise.

If you are in my situation - wanting to read the sequences without having enough time - feel free to download these audio files for your smart phone or MP3 player.

My vision is to build a podcast feed or downloadable MP3 repository of all the major sequences.  The files I have now are not organized enough to scale out to hundreds of posts, and some of the artifacts of text-to-speech could be reduced with the right pre-processing.  Before I spend more time on this, I want the right tools and process in place. 

Any ideas on how to proceed?  Would you like to help?  How should I publish these files?
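A pre-processing pass of the sort mentioned above could be sketched as a list of regex substitutions. Everything below is illustrative only: the patterns, the placeholder text, and the function name are my assumptions, not part of this project.

```python
import re

# Hypothetical pre-processing pass to reduce text-to-speech artifacts:
# expand abbreviations the engine mispronounces, and strip markup or
# URLs it would otherwise read aloud character by character.
SUBSTITUTIONS = [
    (r"\be\.g\.", "for example"),
    (r"\bi\.e\.", "that is"),
    (r"\betc\.", "et cetera"),
    (r"https?://\S+", "(link)"),   # replace URLs with a spoken placeholder
    (r"\*([^*]+)\*", r"\1"),       # drop *emphasis* asterisks, keep the word
]

def preprocess_for_tts(text: str) -> str:
    for pattern, replacement in SUBSTITUTIONS:
        text = re.sub(pattern, replacement, text)
    return text

print(preprocess_for_tts("See *this* post, e.g. http://lesswrong.com/lw/1"))
# See this post, for example (link)
```

Which substitutions actually help would depend on the TTS engine being used; the point is just to normalize the text before synthesis rather than fix artifacts afterwards.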

 

Weird characters in the Sequences

5 ciphergoth 18 November 2010 08:27AM

When the sequences were copied from Overcoming Bias to Less Wrong, it looks like something went very wrong with the character encoding.  I found the following sequences of HTML entities in words in the sequences:

 

’ê d?tre

Å« M?lamadhyamaka

ĂŚ Ph?drus

— arbitrator?i window?and

ĂŞ b?te m?me

… over?and

รก H?jek

ĂƒÂź G?nther

ĂŠ fianc?e proteg?s d?formation d?colletage am?ricaine d?sir

ĂƒÂŻ na?ve na?vely

ō sh?nen

ö Schr?dinger L?b

ยง ?ion

ĂƒÂś Schr?dinger H?lldobler

Ăź D?sseldorf G?nther

– ? Church? miracles?in Church?Turing

’ doesn?t he?s what?s let?s twin?s aren?t I?ll they?d ?s you?ve else?s EY?s Whate?er punish?d There?s Caledonian?s isn?t harm?s attack?d I?m that?s Google?s arguer?s Pascal?s don?t shouldn?t can?t form?d controll?d Schiller?s object?s They?re whatever?s everybody?s That?s Tetlock?s S?il it?s one?s didn?t Don?t Aslan?s we?ve We?ve Superman?s clamour?d America?s Everybody?s people?s you?d It?s state?s Harvey?s Let?s there?s Einstein?s won?t

ĂĄ Alm?si Zolt?n

ĂŤ pre?mpting re?valuate

≠ ?

è l?se m?ne accurs?d

รฐ Ver?andi

→ high?low low?high

’ doesn?t

ā k?rik Siddh?rtha

รถ Sj?berg G?delian L?b Schr?dinger G?gel G?del co?rdinate W?hler K?nigsberg P?lzl

ĂŻ na?vet

  I?understood ? I?was

Ăś Schr?dinger

ĂŽ pla?t

úñ N?ez

Ĺ‚ Ceg?owski

— PEOPLE?and smarter?supporting to?at problem?and probability?then valid?to opportunity?of time?in true?I view?wishing Kyi?and ones?such crudely?model stupid?which that?larger aside?from Ironically?but intelligence?such flower?but medicine?as

‐ side?effect galactic?scale

´ can?t Biko?s aren?t you?de didn?t don?t it?s

≠ P?NP

窶馬 basically?ot

Ĺ‘ Erd?s

Now, an example like "ö Schr?dinger L?b" I can decode: "C3 B6" is the byte sequence for the UTF-8 encoding of "U+00F6 ö LATIN SMALL LETTER O WITH DIAERESIS".  But "úñ" is not a valid UTF-8 sequence - and those that contain entities larger than 255 are very mysterious.  Anyone able to make any guesses?
EDIT: รถ translated into Windows codepage 874 is C3 B6!
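The pattern above is classic mojibake: UTF-8 bytes mis-decoded under a one-byte codepage, with the result sometimes re-encoded and mis-decoded a second time. A minimal Python sketch reproduces and reverses one such round-trip. Treating Windows-1252 as the wrong codepage is an assumption on my part; per the EDIT, some strings evidently went through codepage 874 instead.

```python
# One bad round-trip: encode "ö" (U+00F6) as UTF-8 (bytes C3 B6),
# then mis-decode those bytes as Windows-1252.
once = "ö".encode("utf-8").decode("cp1252")
print(once)   # Ã¶

# A second round-trip yields the longer garbage seen in the list above.
twice = once.encode("utf-8").decode("cp1252")
print(twice)  # ÃƒÂ¶

# The damage is reversible: run the round-trip backwards, twice.
fixed = twice.encode("cp1252").decode("utf-8").encode("cp1252").decode("utf-8")
print(fixed)  # ö

# The Thai-looking "รถ" is those same two bytes read under codepage 874,
# which is what the EDIT observes.
assert "รถ".encode("cp874") == b"\xc3\xb6" == "ö".encode("utf-8")
```

The third-party `ftfy` library automates this kind of repair by searching for the decode/encode chain that would produce the observed text.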