
Weirdness at the wiki

0 NancyLebovitz 30 November 2015 11:37PM

Richard Kennaway has posted about an edit war on the wiki. Richard, thank you.

Unfortunately, I've only used the wiki a little, and don't have a feeling for why the edit history for an article is inaccessible. Is the wiki broken or has someone found a way to hack it? Let it be known that hacking the wiki is something I'll ban for.

VoiceofRa, I'd like to know why you deleted Gleb's article. Presumably you have some reason for thinking it was unsatisfactory.

I'm also notifying tech in the hope of finding out what happened to the edit history.

[link] Pedro Domingos: "The Master Algorithm"

0 GMHowe 30 November 2015 10:28PM

Interesting talk outlining five different approaches to AI.



Blurb from the YouTube description:

Machine learning is the automation of discovery, and it is responsible for making our smartphones work, helping Netflix suggest movies for us to watch, and getting presidents elected. But there is a push to use machine learning to do even more—to cure cancer and AIDS and possibly solve every problem humanity has. Domingos is at the very forefront of the search for the Master Algorithm, a universal learner capable of deriving all knowledge—past, present and future—from data. In this book, he lifts the veil on the usually secretive machine learning industry and details the quest for the Master Algorithm, along with the revolutionary implications such a discovery will have on our society.

Pedro Domingos is a Professor of Computer Science and Engineering at the University of Washington, and he is the cofounder of the International Machine Learning Society.

Playing offense

-3 artemium 30 November 2015 01:59PM

There is a pattern I have noticed that appears whenever some new interesting idea or technology gains media traction above a certain threshold. Those who are considered opinion-makers (journalists, intellectuals) more often than not write about this new movement/idea/technology in a way that is somewhere between cautious and negative. As this happens, those who adopted the new ideas or technologies become somewhat wary of promoting them and, fearing a loss of status, decide to retreat from their public advocacy.

I wonder whether, in some circumstances, the right move for those getting the negative attention is not to defend themselves but instead to go on the offense. And one of the most interesting offensive tactics might be to reverse the framing of the subject matter and put the burden of the argument on the critics, in a way that requires them to seriously reconsider their position:

  • Those who are critical of this idea are actually doing something wrong and they are unable to see the magnitude of their mistake 
  • The fact that they are not adopting this idea/product has a big cost that they are not aware of, and in reality they are the ones making a weird choice  
  • They have already adopted that idea/position but don't notice it, because it is not framed in a context they understand or find comfortable
In all of these cases the critic is usually stuck in a status competition that prevents them from analysing the situation objectively, and they feel safety in numbers, since a lot of other people are criticising the idea in the same way.

So let's start with Facebook.

When Facebook was expanding rapidly and was predicted to dominate the social media market (2008-2010), it became one of the most talked-about subjects in the public sphere. And the usual attitude towards Facebook from the 'intellectual' press, and from almost everyone who considered themselves an independent thinker, was negative. Facebook was this massive corporate behemoth assimilating people into its creepy virtual world full of pokes, likes and Farmvilles. It didn't help that its CEO was the guy who had "I'm CEO, bitch" written on his business card and walked into business meetings in his flip-flops.

I remember endless articles with titles like "Facebook is killing real-life friendships", "Facebook is a creepy corporate product that wants to diminish your identity", "Why I am not on Facebook", "I left Facebook and it was the best decision ever!" At the time, saying "Actually, I am not on Facebook" was a sure way to gain status, as someone who refuses to become another FB drone. And those who were on Facebook always felt the need to apologize for their decision: "Yeah, Facebook sucks, I'm only there to stay in touch with my high school pals."

The climax was reached with "The Social Network", a movie which presented Mark Zuckerberg and the founding of Facebook as some kind of Batman-villain origin story (and one that was grossly inaccurate to anyone who actually knows the facts).

But then Farhad Manjoo published an article asking a simple question: "Why are you not on Facebook?" In it he reversed the framing of the story, presenting Facebook as the new normal and being a holdout as the weird choice that should draw strange looks. His message was something like: "Actually, it's you people who are not on Facebook who have to explain yourselves. Facebook won, it's a convenient tool for communication, almost everyone is there, and you should get off your high horse and join the civilized world."

The site has crossed a threshold—it is now so widely trafficked that it's fast becoming a routine aid to social interaction, like e-mail and antiperspirant.

In other words, if you are not on Facebook in 2010, you are not a brave intellectual maverick standing up against an evil empire. You are like a cranky old man from the 1950s who refuses to own a telephone because he is scared of the demon-machine, and who is making it very inconvenient for his relatives to contact him.

There is probably a lot wrong with Manjoo's approach; some of it falls under the 'dark arts' arsenal. And to be fair, a lot of the criticism of Facebook has a point, especially after the Snowden affair. But I really like Manjoo's subversive thinking on this issue and the way he pierced the suffocating smugness with a brazen narrative reversal.

I wonder whether this tactic might be useful for other ideas that are slowly entering the public space and getting similarly nasty looks from the "intellectual elite".

Let's look at transhumanism and its media portrayal.

It is important to notice that there is a difference between ordinary "SF technology is cool" enthusiasm and transhumanism. Everyone loves imagining a future world with cool gadgets and flying cars. However, once you start messing with the human genome or with cybernetic implants, things get creepy for a lot of people. When you talk about laser pistols, you get heroic rebels fighting stormtroopers with their blasters. When you talk about teleporters and warp drives, you get a brave Starfleet captain exploring the galaxy. But when you talk about cybernetic implants you get the Borg, when you talk about genetic enhancement you get Gattaca, and when you talk about immortality you get Voldemort. For the average viewer, technology is good as long as it doesn't change what is perceived as the 'normal' state of the human being.

You can have Star Wars movies where families watch massive starships destroy entire planets with billions of people, but you are not supposed to ask why Han Solo is so old in the new episode. They solved faster-than-light travel and invented planet-killing lasers, but got stuck on longevity research, or at least good anti-aging cosmetics? (Yes, I know it would be expensive to make Harrison Ford look younger, but you get my point.)

Basically, the mainstream view of the future is "old-fashioned humans + jetpacks", and you had better not touch the "old-fashioned" adjective or you will get pattern-matched into a creepy dude trying to create utopia, which, as we learned from movies and literature, always makes you a bad guy.

But then in the real world you have a group of smart people who seriously argue that changing human biology with various weird technologies would actually be a good thing, that we should work on reliable ways to increase longevity, intelligence and other abilities, and that we should remove any regulations that would stop it. And in response you have a much larger group of intellectuals, journalists, politicians and other 'deep thinkers' who are repulsed by this idea and will immediately start to bludgeon anyone who argues that we should improve our natural state. (I am deliberately not discussing those who question the feasibility of transhumanist ideas, such as whether genetic enhancement is even possible, as that is not relevant here.)

From the political right and religious groups you will instantly hear the standard chants of "people playing God" and "destroying the fabric of society"; from the political left you will hear shouting about "rich Silicon Valley libertarians trying to recreate feudalism through cognitive inequality and eugenics"; and even from the political center you will get someone like Francis Fukuyama accusing transhumanism of being "the most dangerous idea of the century", one that might "destroy liberal society". Finally, you have an entire class of people called "bioethicists" whose job description seems to be "bashing transhumanism".

At best, transhumanists are presented as "well-intentioned geeks who are unaware of the bad consequences of their ideas"; at worst, they get labeled "rich, entitled Silicon Valley nerds with a bizarre and dangerous pseudo-religious cult", or they are dismissed from serious conversation altogether and turned into a punchline on "The Big Bang Theory".

So when transhumanists receive this kind of criticism, they naturally try to soften their arguments and backtrack on their positions. After all, many people who are optimistic about the future and sometimes talk positively about human enhancement don't label themselves transhumanists. But should they do that? What if they "owned" their label and went on the offensive? What would a narrative reversal look like in that case?

Well, they could make an exact copy of Manjoo's Facebook method and throw "Why are you not a transhumanist?" at their critics. Or they could use the third method and confidently say: "You are a transhumanist too. Actually, the majority of people are transhumanists, they just aren't aware of it."

It might sound crazy at first; after all, the majority of people find transhumanist ideas weird and uncanny when they are presented in the usual form. "Designer babies? Nanotech robots inside my body? Memory chips in the brain? Mind uploading??? That is crazy talk!!"

But let's stop for a moment and try to understand the basis of transhumanism, to reduce it to its core idea. One of the best articles ever written about transhumanism is Eliezer Yudkowsky's short essay "Transhumanism as Simplified Humanism". The beauty of the essay is its simplicity: there are no complicated moral theories or deep analyses of various disruptive technologies, just a common-sense argument about the fundamental values humans should strive for and how to achieve them.
As far as a transhumanist is concerned, if you see someone in danger of dying, you should save them; if you can improve someone’s health, you should. There, you’re done. No special cases. 

And it is hard to argue with that. I can't imagine a normal person arguing with this statement. But then it gets more interesting:

You also don’t ask whether the remedy will involve only “primitive” technologies (like a stretcher to lift the six-year-old off the railroad tracks); or technologies invented less than a hundred years ago (like penicillin) which nonetheless seem ordinary because they were around when you were a kid; or technologies that seem scary and sexy and futuristic (like gene therapy) because they were invented after you turned 18; or technologies that seem absurd and implausible and sacrilegious (like nanotech) because they haven’t been invented yet. 

And at this point we reach the crux of the issue. It is not the values that are the problem, but our familiarity with the tools we are using to protect those values.

So we can finally define the difference between a transhumanist and a non-transhumanist. A transhumanist is a person who believes that science and technology should be used to make humans happier, smarter and able to live as long as possible. A non-transhumanist is usually a person who believes the same, except that the technology used in the process should not be too strange compared to what they are already used to.

Using this definition, the pool of people who fall into the transhumanist group, whether they admit it or not, grows significantly.

But actually we can do better than that. How many people would honestly define themselves as non-transhumanists in this case?

Imagine that you have a lovely 5-year-old daughter who is struck by a horrible medical condition. The doctors tell you that there is no cure and that your daughter will suffer severe pain for several months before she finally dies in agony.

While you are contemplating this horrible twist of fate, another doctor approaches you and says:

"Well, there is potential cure. We just finished the series of successful trials that cured this condition by using revolutionary medical nanobots. They are injected in the patient bloodstream and then they are able to physically destroy the cancer. If we have your approval we can start the therapy next week.  

Oh... before you answer, there are some serious side effects we should mention. For reasons we don't completely understand, the nanobots will significantly increase the host's IQ, and they are able to rejuvenate old cells, so in addition to curing your daughter's disease they will also make her super-smart and allow her to live several hundred years. Now, many people have ethical issues with that, because it will give her an unfair advantage over her peers, and once she is older it might create feelings of guilt at being superior to everyone else and might make her socially awkward. I know this is a difficult decision. So what will it be: a horrible death in pain after a few months, or a long, fulfilling life with the occasional bout of existential angst about being superhuman, and some unease at being unfairly celebrated for getting all those Nobel prizes and solving world hunger?"

Do you honestly know a single person who would choose to let their child die in pain instead of the nanobot therapy? I admit this example is super-contrived, but it still represents the general idea of transhumanism clearly. The point is that EVERYONE is a transhumanist, and will quickly dismiss any intellectual posturing when push comes to shove and they or their loved ones face the dark spectre of death and suffering.

And don't forget: go back just several generations and, compared to the people of that time, we are the transhuman beings from a utopian future. Just a few centuries ago the average human lifespan was only half of what it is now, and on almost any objective measure of well-being humans in the past lived horrible, miserable lives compared to the life we now take for granted. Anyone willing to argue against transhumanism should, in principle, also be against all of the advances that made us healthier, more intelligent and wealthier than our ancestors, advances built on technologies that, from our ancestors' point of view, would have looked crazier than many SF transhumanist technologies look from our perspective.

So the next time someone starts criticising transhumanist beliefs, don't defend yourself by retreating from your position and trying to avoid looking weird. Ask them to prove that they aren't a transhumanist themselves by presenting the transhumanist idea in its basic form, as stated in the Yudkowsky essay. Ask them why trying to use technology to improve the human condition should be considered a bad thing, and let them try to define at which point it becomes a bad thing.

Playing offense might also work in other domains. 

Effective altruism

Criticism: You are using your nerdy math in a field that should be guided by passion and strong moral convictions.

Response: Actually, it is you who should explain why you are not an effective altruist. EA has a proven track record of using the most effective tools to improve outcomes in charity work, at a level that surpasses traditional charities. How do you explain your decision not to use the kind of systematic analysis that would result in better outcomes and more lives saved by your charity?


Criticism: You are using your smartphone only as a status symbol. Smartphones are unnecessary.

Response: On the contrary, I am using it as a useful tool that helps me in my everyday activities. Why are you not using a smartphone when everyone else has recognised their obvious value? Are you aware of the opportunity costs of not using one, like being unable to use Google Maps and Translate in your travels?

We on LW and in the larger rationalist community are used to adopting a defensive posture when faced with broad public scrutiny. In many cases that is the correct approach, as we should avoid unnecessary hubris. But we should recognise the circumstances where we are coming from a position of strength and can play a more offensive game, in order to defeat bad arguments that might still damage our cause if they are not met with a strong response.

Open thread, Nov. 30 - Dec. 06, 2015

3 MrMind 30 November 2015 08:05AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

The value of ambiguous speech

4 KevinGrant 30 November 2015 07:58AM

This was going to be a reply in a discussion between ChristianKl and MattG in another thread about conlangs, but their discussion seemed to have enough significance, independent of the original topic, to deserve a thread of its own.  If I'm doing this correctly (this sentence is an after-the-fact update), then you should be able to link to the original comments that inspired this thread here: http://lesswrong.com/r/discussion/lw/n0h/linguistic_mechanisms_for_less_wrong_cognition/cxb2

Is a lack of ambiguity necessary for clear thinking?  Are there times when it's better to be ambiguous?  This came up in the context of the extent to which a conlang should discourage ambiguity, as a means of encouraging cognitive correctness by its users.  It seems to me that something is being taken for granted here, that ambiguity is necessarily an impediment to clear thinking.  And I certainly agree that it can be.  But if detail or specificity are the opposites of ambiguity, then surely maximal detail or specificity is undesirable when the extra information isn't relevant, so that a conlang would benefit from not requiring users to minimize ambiguity.

Moving away from the concept of conlangs, this opens up some interesting (at least to me) questions.  Exactly what does "ambiguity" mean?  Is there, for each speech act, an optimal level of ambiguity, and how much can be gained by achieving it?  Are there reasons why a certain, minimal degree of ambiguity might be desirable beyond avoiding irrelevant information?

Promoting rationality to a broad audience - feedback on methods

10 Gleb_Tsipursky 30 November 2015 04:52AM

We at Intentional Insights, the nonprofit devoted to promoting rationality and effective altruism to a broad audience, are finalizing our Theory of Change (a ToC is meant to convey our goals, assumptions, methods, and metrics). Since there has recently been extensive discussion on LessWrong of our approaches to promoting rationality and effective altruism to a broad audience, discussion that was quite helpful in helping us update, I'd like to share our Theory of Change with you and ask for your feedback.


Here's the Executive Summary:

  • The goal of Intentional Insights is to create a world where all rely on research-based strategies to make wise decisions that lead to mutual flourishing.
  • To achieve this goal, we believe that people need to be motivated to learn and have broadly accessible information about such research-based strategies, and also integrate these strategies into their daily lives through regular practice.
  • We assume that:
    • some natural and intuitive human thinking, feeling, and behavior patterns are flawed in ways that undermine wise decisions.
    • problematic decision making undermines mutual flourishing in a number of life areas.
    • these flawed thinking, feeling, and behavior patterns can be improved through effective interventions.
    • we can motivate and teach people to improve their thinking, feeling, and behavior patterns by presenting our content in ways that combine education and entertainment.
  • Our intervention is helping people improve their patterns of thinking, feeling, and behavior to enable them to make wise decisions and bring about mutual flourishing.
  • Our outputs, what we do, come in the form of online content such as blog entries, videos, etc., on our channels and in external publications, as well as collaborations with other organizations.
  • Our metrics of impact are in the form of anecdotal evidence, feedback forms from workshops, and studies we run on our content.

Here is the full version.


I'd appreciate any feedback on the full version from fellow Less Wrongers, on things like content, concepts, structure, style, grammar, etc. I look forward to updating the organization's goals, assumptions, methods, and metrics based on your thoughts. Thanks!

Smarter humans, not artificial intelligence

-3 wubbles 30 November 2015 03:48AM

I'm writing this article to explain some of the facts that have convinced me that increasing average human intelligence through traditional breeding and genetic manipulation is likelier to reduce existential risks in the short and medium term than studying AI risks, while providing all kinds of side benefits.

Intelligence is useful for achieving goals, including avoiding existential risks. Higher intelligence is associated with improvements in many diverse life outcomes, from health to wealth. Intelligence may have synergistic effects on economic growth, where average levels of intelligence matter more for wealth than individual levels. Intelligence is a polygenic trait with strong heritability. Sexual selection in the Netherlands has resulted in extreme increases in average height over the past century; sexual selection for intelligence might do the same. People already select partners for intelligence, and egg donors are advertised by SAT score.

AI safety research seems to be intelligence-constrained. Very few of those capable of making a contribution are aware of the problem, or find it interesting. The Berkeley-MIRI seminar has increased the pool of those aware of the problem, but the total number of AI safety researchers remains small. So far, very foundational problems remain to be solved. This is likely to take a very long time: it is not unusual for mathematical fields to take centuries to develop. Furthermore, we can work on both strategies at once and observe spillover from one into the other, as a higher intelligence baseline translates into an increase in the right tail of the distribution.

How could we accomplish this? One idea, invented by Robert Heinlein as far as I know, is to subsidize marriages between people of higher-than-usual intelligence and their having children. This idea has the benefit of being entirely non-coercive. It is, however, unclear how large these subsidies would need to be to influence behavior, and, given the already strong returns to intelligence in life outcomes, unclear whether they could influence behavior much further.

Another idea would be to conduct genetic studies to find genes which influence intelligence, and then conduct genome modification. This plan suffers from illegality, lack of knowledge of the genetic factors of intelligence, and the absence of effective means for genome editing (CRISPR has been tried on human embryos: more work is needed). However, the results of this work can be sold for money, opening the possibility of using VC money to develop it. Illegality can be dealt with by influencing jurisdictions. However, the impact is likely to be limited by the cost of these methods, which will prevent them from having population-wide influence and instead make them yet another advantage the affluent attempt to purchase. These techniques are likely to have vastly wider application, and so will be commercially developed anyway.

In conclusion, genetic modification of humans to increase intelligence is practical in the near term, and it may be worth diverting some effort to investigating it further.

Linguistic mechanisms for less wrong cognition

6 KevinGrant 29 November 2015 02:40AM

I'm working on a conlang (constructed language) and would like some input from the Less Wrong community.  One of the goals is to investigate the old Sapir-Whorf hypothesis regarding language affecting cognition.  Does anyone here have any ideas regarding linguistic mechanisms that would encourage more rational thinking, apart from those that are present in the oft-discussed conlangs e-prime, loglan, and its offshoot lojban?  Or perhaps mechanisms that are used in one of those conlangs, but might be buried too deeply for a person such as myself, who only has superficial knowledge about them, to have recognized?  Any input is welcomed, from other conlangs to crazy ideas.


Mind uploading from the outside in

6 Alexandros 29 November 2015 02:05AM

Most discussion of uploading talks of uploading from the inside out: simply, a biological person undergoes a disruptive procedure which digitises their mind. The digital mind then continues the person’s timeline as a digital existence, with all that entails.


The thing that stands out here is the disruptive nature of the process from biological to digital being. It is not only a huge step to undergo such a transformation, but few things in reality operate in such binary terms. More commonly, things happen gradually.


Being an entrepreneur and also having a keen interest in the future, I both respect audacious visions, and study how they come to be realised. Very rarely does progress come from someone investing a bunch of resources in a black-box process that ends in a world-changing breakthrough. Much more commonly, massive innovations are realised through a process of iteration and exploration, fueled by a need that motivates people to solve thousands of problems, big and small. Massive trends interact with other innovations to open up opportunities that when exploited cause a further acceleration of innovation. Every successful startup and technology, from Facebook to Tesla and from mobile phones to modern medicine can be understood in these terms.


With this lens in mind, how might uploading be realised? This is one potential timeline, barring an AI explosion or existential catastrophe.


It is perhaps useful to explore the notion of “above/below the API”. A slew of companies have formed, often called “Uber for X” or “AirBnB for Y”, solving needs we have, through a computer system, such as a laptop or a mobile phone app. The app might issue a call to a server via an API, and that server may delegate the task to some other system, often powered by other humans. The original issuer of the command then gets their need covered, minimising direct contact with other humans, the traditional way of having our needs covered. It is crucial to understand that API-mediated interactions win because they are superior to their traditional alternative. Once they were possible, it was only natural for them to proliferate. As an example, compare the experience of taking a taxi with using Uber.


And so computer systems are inserted between human-to-human interactions. This post is composed on a computer, through which I will publish it in a digital location, where it might be seen by others. If I am to hear their response to it, it will also be mediated by APIs. Whenever a successful new API is launched, fortunes are made and lost. An entire industry, venture capital, exists to fund efforts to bring new APIs into existence, each new API making life easier for its users than what came before, and adding additional API layers.


As APIs flood interpersonal space, humans gain superpowers. Presence is less and less important, and a person anywhere in the connected world can communicate and effect change anywhere else. And with APIs comes control of personal space and time. Personal safety increases, both by decreasing random physical contact and by always being connected to others who can send help if something goes wrong. The demand for connectivity and computation is driving networking everywhere and pushing the cost of hardware through the floor.


Given the trends that are in motion, what’s next? Well, if computer-mediated experience is increasing, it might grow to the point where every interaction a human has with the world around them will be mediated by computers. If this sounds absurd, think of noise-cancelling headphones. Many of us now use them not to listen to music, but to block the sound from our environment. Or consider augmented reality. If the visual field, the data pipeline of the brain, can be used to provide critical, or entertaining, context about the physical environment, who would want to forego it? Consider biofeedback: if it’s easy to know at all times what is happening within our bodies and prevent things from going wrong, who wouldn’t want to? It’s not a question of whether these needs exist, but of when technology will be able to cover them.


Once most interaction is API-mediated, the digital world switches from opt-in to opt-out. It’s not a matter of turning the laptop on, but of turning it off for a while, perhaps to enjoy a walk in nature, or for a repair. But wouldn’t you want to bring your augmented reality goggles that can tell you the story of each tree, and ensure you’re not exposed to any pathogens as you wander in the biological jungle? As new generations grow up in a computer-mediated world, fewer and fewer excursions into the offline will happen. Technology, after all, is what was invented after you were born. Few of us consider hunting and gathering our food or living in caves to be a romantic return to the past. When we take a step backward, perhaps to signal virtue, like foregoing vaccination or buying locally grown food, we make sure our move will not deprive us of the benefits of the modern world.


Somewhere around the time when APIs close the loop around us or even before then, the human body will begin to be modified. Artificial limbs that are either plainly superior to their biological counterparts, or better adapted to that world will make sense, and brain-computer interfaces (whether direct or via the existing senses) will become ever more permanent. As our bodies are replaced with mechanical parts, the brain will come next. Perhaps certain simple parts will be easy to replace with more durable, better performing ones. Intelligence enhancement will finally be possible by adding processing power natural selection alone could never have evolved. Gradually, step by small step, the last critical biological components will be removed, as a final cutting of the cord with the physical world.


Humans will have digitised themselves, not by inventing a machine that takes flesh as input and outputs ones and zeroes, not by cyberpunk pioneers jumping into an empty digital world to populate it. We will have done it by making incremental choices, each one a sound rational decision that was in hindsight inevitable, incorporating inventions that made sense, and in the end it will be unclear when the critical step was made. We will have uploaded ourselves simply in the course of everyday life.

[link] Desiderata for a model of human values

3 Kaj_Sotala 28 November 2015 07:25PM


Soares (2015) defines the value learning problem as

By what methods could an intelligent machine be constructed to reliably learn what to value and to act as its operators intended?

There have been a few attempts to formalize this question. Dewey (2011) started from the notion of building an AI that maximized a given utility function, and then moved on to suggest that a value learner should exhibit uncertainty over utility functions and then take “the action with the highest expected value, calculated by a weighted average over the agent’s pool of possible utility functions.” This is a reasonable starting point, but a very general one: in particular, it gives us no criteria by which we or the AI could judge the correctness of a utility function which it is considering.
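As a rough illustration, here is a minimal sketch (in Python) of the selection rule Dewey describes: average each action's utility over a weighted pool of candidate utility functions and pick the action with the highest average. The actions and utility functions below are made up for the example; nothing here comes from Dewey's paper beyond the selection rule itself.

```python
# Minimal sketch of a value learner's action selection under uncertainty over
# utility functions: keep a weighted pool of candidate utility functions and
# choose the action whose utility, averaged over that pool, is highest.
# All concrete actions, weights, and utility functions are illustrative only.

def expected_value(action, utility_pool):
    """utility_pool: list of (weight, utility_fn) pairs; weights sum to 1."""
    return sum(weight * utility_fn(action) for weight, utility_fn in utility_pool)

def choose_action(actions, utility_pool):
    return max(actions, key=lambda a: expected_value(a, utility_pool))

# Toy example: the agent is uncertain between two candidate utility functions.
pool = [(0.7, lambda a: {"help": 1.0, "wait": 0.2}[a]),
        (0.3, lambda a: {"help": 0.1, "wait": 0.9}[a])]
print(choose_action(["help", "wait"], pool))  # -> "help"
```

The sketch makes the gap Dewey's proposal leaves visible: the rule says how to act given a weighted pool of utility functions, but nothing in it tells us or the agent how to judge whether any candidate utility function in the pool is correct.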

To improve on Dewey’s definition, we would need to get a clearer idea of just what we mean by human values. In this post, I don’t yet want to offer any preliminary definition: rather, I’d like to ask what properties we’d like a definition of human values to have. Once we have a set of such criteria, we can use them as a guideline to evaluate various offered definitions.

The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism.

15 diegocaleiro 28 November 2015 11:07AM
This text has many, many hyperlinks; it is useful to at least glance at the front page of the linked material to get it. It is an expression of me thinking, so it has many community jargon terms. Thanks to Oliver Habryka, Daniel Kokotajlo and James Norris for comments. No, really, check the front page of the hyperlinks.
  • Why I Grew Skeptical of Transhumanism
  • Why I Grew Skeptical of Immortalism
  • Why I Grew Skeptical of Effective Altruism
  • Only Game in Town


Wonderland’s rabbit said it best: The hurrier I go, the behinder I get.


We approach 2016, and the more I see light, the more I see brilliance popping everywhere, the Effective Altruism movement growing, TEDs and Elons spreading the word, the more we switch our heroes in the right direction, the behinder I get. But why? - you say.

Clarity, precision, I am tempted to reply. I have left the intellectual suburbs of Brazil, straight into the strongest hub of production of things that matter, The Bay Area, via Oxford’s FHI office, I now split my time between UC Berkeley, and the CFAR/MIRI office. In the process, I have navigated an ocean of information, read hundreds of books, papers, saw thousands of classes, became proficient in a handful of languages and a handful of intellectual disciplines. I’ve visited the Olympus and I met our living demigods in person as well.

Against the overwhelming forces of an extremely upbeat personality surfing a hyper base-level happiness, these three forces: approaching the center, learning voraciously, and meeting the so-called heroes, have brought me to the current state of pessimism.

I was a transhumanist, an immortalist, and an effective altruist.


Why I Grew Skeptical of Transhumanism

The transhumanist in me is skeptical of technological development fast enough for improving the human condition to be worth it now, he sees most technologies as fancy toys that don’t get us there. Our technologies can’t and won’t for a while lead our minds to peaks anywhere near the peaks we found by simply introducing weirdly shaped molecules into our brains. The strangeness of Salvia, the beauty of LSD, the love of MDMA are orders and orders of magnitude beyond what we know how to change from an engineering perspective. We can induce a rainbow, but we don’t even have the concept of force yet. Our knowledge about the brain, given our goals about the brain, is at the level of knowledge of physics of someone who found out that spraying water on a sunny day causes the rainbow. It’s not even physics yet.

Believe me, I have read thousands of pages of papers in the most advanced topics in cognitive neuroscience, my advisor spent his entire career, from Harvard to Tenure, doing neuroscience, and was the first person to implant neurons that actually healed a brain to the point of recovering functionality by using non-human neurons. As Marvin Minsky, who invented the multi-agent computational theory of mind, told me: I don’t recommend entering a field where every four years all knowledge is obsolete, they just don’t know it yet.


Why I Grew Skeptical of Immortalism

The immortalist in me is skeptical because he understands the complexity of biology from conversations with the centimillionaires and with the chief scientists of anti-aging research facilities worldwide, he met the bio-startup founders and gets that the structure of incentives does not look good for bio-startups anyway, so although he was once very excited about the prospect of defeating the mechanisms of ageing, back when less than 300 thousand dollars were directly invested in it, he is now, with billions pledged against ageing, confident that the problem is substantially harder to surmount than the number of man-hours left to be invested in the problem, at least during my lifetime, or before the Intelligence Explosion.

Believe me, I was the first cryonicist among the 200 million people striding my country, won a prize for anti-ageing research at the bright young age of 17, and hang out on a regular basis with all the people in this world who want to beat death that still share in our privilege of living, just in case some new insight comes that changes the tides, but none has come in the last ten years, as our friend Aubrey will be keen to tell you in detail.


Why I Grew Skeptical of Effective Altruism

The Effective Altruist is skeptical too, although less so: I'm still founding an EA research institute, keeping a loving eye on the one I left behind, living with EAs, working at EA offices and mostly broadcasting ideas and researching with EAs. Here are some problems with EA which make me skeptical after being shaken around by the three forces:

  1. The Status Games: Signalling, countersignalling, going one more meta-level up, outsmarting your opponent, seeing others as opponents, my cause is the only true cause, zero-sum mating scarcity, pretending that poly eliminates mating scarcity, founders X joiners, researchers X executives, us institutions versus them institutions, cheap individuals versus expensive institutional salaries, it's gore all the way up and down.

  2. Reasoning by Analogy: Few EAs are able to do, and are doing, their due intellectual diligence. I don't blame them; the space of Crucial Considerations is not only very large, but extremely uncomfortable to look at. Who wants to know that our species has not even found the stepping stones to make sure that what matters is preserved and guaranteed at the end of the day? It is a hefty ordeal. Nevertheless, it is problematic that fewer than 20 EAs (one in 300?) are actually reasoning from first principles, thinking all things through from the very beginning. Most of us are looking away from at least some philosophical assumption or technological prediction. Most of us are cooks and not yet chefs. Some of us have not even woken up yet.

  3. Babies with a Detonator: Most EAs still carry their transitional objects around, clinging desperately to an idea or a person they think more guaranteed to be true, be it hardcore patternism about philosophy of mind, global aggregative utilitarianism, veganism, or the expectation of immortality.

  4. The Size of the Problem: No matter if you are fighting suffering, Nature, Chronos (death), Azathoth (evolutionary forces) or Moloch (deranged emergent structures of incentives), the size of the problem is just tremendous. One completely ordinary reason to not want to face the problem, or to be in denial, is the problem’s enormity.

  5. The Complexity of The Solution: Let me spell this out, the nature of the solution is not simple in the least. It’s possible that we luck out and it turns out the Orthogonality Thesis and the Doomsday Argument and Mind Crime are just philosophical curiosities that have no practical bearing in our earthly engineering efforts, that the AGI or Emulation will by default fall into an attractor basin which implements some form of MaxiPok with details that it only grasps after CEV or the Crypto, and we will be Ok. It is possible, and it is more likely than that our efforts will end up being the decisive factor. We need to focus our actions in the branches where they matter though.

  6. The Nature of the Solution: So let’s sit down side by side and stare at the void together for a bit. The nature of the solution is getting a group of apes who just invented the internet, from everywhere around the world, and getting them to coordinate an effort that fills in the entire box of Crucial Considerations yet unknown - this is the goal of Convergence Analysis, by the way - find every single last one of them to the point where the box is filled, then, once we have all the Crucial Considerations available, develop, faster than anyone else trying, a translation scheme that translates our values to a machine or emulation, in a physically sound and technically robust way (that’s if we don’t find a Crucial Consideration otherwise which, say, steers our course towards Mars). Then we need to develop the engineering prerequisites to implement a thinking being smarter than all our scientists together, one that can reflect philosophically better than the last two thousand years of effort while becoming the most powerful entity in the universe’s history, and that will fall into the right attractor basin within mindspace. That’s if Superintelligences are even possible technically. Add to that that we or it have to guess correctly all the philosophical problems that are A) relevant and B) unsolvable within physics (if any) or by computers, and all of this has to happen while the most powerful corporations, States, armies and individuals attempt to seize control of the smart systems themselves, without being curtailed by the counter-incentive of not destroying the world, either because they don’t realize it, or because the first-mover advantage seems worth the risk, or because they are about to die anyway so there’s not much to lose.

  7. How Large an Uncertainty: Our uncertainties loom large. We have some technical but not much philosophical understanding of suffering, and our technical understanding is insufficient to confidently assign moral status to other entities, especially if they diverge in more dimensions than brain size and architecture. We’ve barely scratched the surface of technical understanding of happiness increase, and the philosophical understanding is also in its first steps.

  8. Macrostrategy is Hard: A Chess Grandmaster usually takes many years to acquire sufficient strategic skill to command the title. It takes a deep and profound understanding of unfolding structures to grasp how to beam a message or a change into the future. We are attempting to beam a complete value lock-in in the right basin, which is proportionally harder.

  9. Probabilistic Reasoning = Reasoning by Analogy: We need a community that at once understands probability theory, doesn’t play reference class tennis, and doesn’t lose motivation by considering the base rates of other people trying to do something, because the other people were cooks, not chefs, and also because sometimes you actually need to try a one in ten thousand chance. But people are too proud of their command of Bayes to let go of the easy chance of showing off their ability to find mathematically sound reasons not to try.

  10. Excessive Trust in Institutions: Very often people go through a simplifying set of assumptions that collapses a brilliant idea into an awful donation, when they reason:
    I have concluded that cause X is the most relevant
    Institution A is an EA organization fighting for cause X
    Therefore I donate to institution A to fight for cause X.
    To begin with, this is very expensive compared to donating to any of the three P’s: projects, people or prizes. Furthermore, the crucial points to fund institutions are when they are about to die, just starting, or building a type of momentum that has a narrow window of opportunity where the derivative gains are particularly large or you have private information about their current value. To agree with you about a cause being important is far from sufficient to assess the expected value of your donation.

  11. Delusional Optimism: Everyone who, like past-me, moves in with delusional optimism will always have a blind spot in the feature of reality about which they are in denial. It is not a problem to have some individuals with a blind spot, as long as the rate doesn’t surpass some group sanity threshold; yet, on an individual level, it is often the case that those who can gaze into the void a little longer than the rest end up being the ones who accomplish things. Staring into the void makes people show up.

  12. Convergence of opinions may strengthen separation within EA: Thus far, the longer someone has been an EA, the more likely they are to transition to an opinion in the subsequent boxes of this flowchart, from whichever box they are in at the time. There are still people in all the opinion boxes, but the trend has been to move in that flow. Institutions, however, have a harder time escaping being locked into a specific opinion. As FHI moves deeper into AI, GWWC into poverty, 80k into career selection, etc., they become more congealed. People’s opinions are still changing, and some of the money follows, but institutions are crystallizing into particular opinions, and in the future they might prevent transitions between opinion clusters and the free mobility of individuals, like national frontiers already do. Once institutions, which in theory are commanded by people who agree with institutional values, notice that their rate of loss to the EA movement is higher than their rate of gain, they will have incentives to prevent the flow of talent, ideas and resources that has so far been a hallmark of Effective Altruism and a reason many of us find it impressive: its being an intensional movement. Any part that congeals or becomes extensional will drift off behind, and this may create insurmountable separation between groups that want to claim ‘EA’ for themselves.


Only Game in Town


The reasons above have transformed a pathological optimist into a wary skeptic about our future and about the value of our plans to get there. And yet, I don’t see any option other than to continue the battle. I wake up in the morning and consider my alternatives: Hedonism? Well, that is fun for a while, and I could try a quantitative approach to guarantee maximal happiness over the course of the 300,000 hours I have left. But all things considered, anyone reading this is already too close to the epicenter of something that can become extremely important and change the world to have the affordance to wander off indeterminately. I look at my high base-happiness and don’t feel justified in maximizing it up to the point of no marginal return; there clearly is value elsewhere than here (points inwards), and clearly the self of which I am made has strong altruistic urges anyway, so at least above a threshold of happiness it has reason to purchase the extremely good deals in expected-value happiness of others that seem to be on the market. Other alternatives? Existentialism? Well, yes, we always have a fundamental choice, and I feel the thrownness into this world as much as any Kierkegaard does. Power? Reading Nietzsche gives the fantasy impression that power is really interesting and worth fighting for, but at the end of the day we still live in a universe where the wealthy are often reduced to spending their power on pathetic signalling games and zero-sum disputes, or on coercing minds to act against their will. Nihilism and Moral Fictionalism, like Existentialism, all collapse into having a choice, and if I have a choice, my choice is always going to be the choice to, most of the time, care, try and do.

Ideally, I am still a transhumanist and an immortalist. But in practice, I have abandoned those noble ideals, and pragmatically only continue to be an EA.

It is the only game in town.

Weekly LW Meetups

1 FrankAdamek 27 November 2015 05:06PM

The meaning of words

-4 casebash 27 November 2015 05:15AM

This article aims to challenge the notion that the meaning of words should and must be understood as their propositional or denotational content, in preference to their implied or connotational content. This is an assumption that I held for most of my life and towards which I suspect a great many aspiring rationalists will naturally tend. But before I begin, I must first clarify the argument that I am making. When a rationalist is engaged in conversation, it is very likely that they are seeking truth and that they want (or would at least claim to want) to know the truth regardless of the emotions it might stir up. Emotions are seen as something that must be overcome and subjected to logic. The person who objects to a statement due to its phrasing, rather than its propositional content, is seen as acting irrationally. And these beliefs are indeed true to a large extent. Those who hide from emotions are often coddling themselves, and those who object to phrasing are often subverting the rules of fair play. But there are also situations where using particular words necessarily implies more than the strict denotational content, and trying to ignore these connotations is foolhardy. For many people, this last sentence alone may be all that needs to be said on this topic, but I believe that there is still some value in breaking down precisely what words actually mean.

So why is there a widespread belief within certain circles that the meaning of a word or sentence is its denotational content? I would answer that this is a result of a desire to enforce norms that produce productive conversation. In general conversation, people will often take offense in a way that derails the conversation into a discussion of what is or is not offensive, instead of the substantive disagreements. One way to address this problem is to create a norm that each person should only be criticised for their denotations, rather than their connotations. In practice, it is considerably more complicated, as particularly blatant connotations will be treated as denotations, but this is a minor point. The larger point is that treating meaning as consisting purely of the denotations is merely a social norm within a particular context, and not an absolute truth.

This means that when the social norms are different and people complain about connotations in other social settings, the issue isn't that they don't understand how words work. The issue isn't that they can't tell the difference between a connotation and a denotation. The issue is that they are operating within different social norms. Sometimes people are defecting from these norms, such as when they engage in an excessively motivated reading, but this isn't a given. Instead, it must be recognised that operating within a framework of meaning-as-denotation is merely a social, not an objective, norm, regardless of this norm’s considerable merits.

Omega's Idiot Brother, Epsilon

4 OrphanWilde 25 November 2015 07:57PM

Epsilon walks up to you with two boxes, A and B, labeled in rather childish-looking handwriting written in crayon.

"In box A," he intones, sounding like he's trying to be foreboding, which might work better when he hits puberty, "I may or may not have placed a million of your human dollars."  He pauses for a moment, then nods.  "Yes.  I may or may not have placed a million dollars in this box.  If I expect you to open Box B, the million dollars won't be there.  Box B will contain, regardless of what you do, one thousand dollars.  You may choose to take one box, or both; I will leave with any boxes you do not take."

You've been anticipating this.  He's appeared to around twelve thousand people so far.  Out of eight thousand people who accepted both boxes, eighty found the million dollars missing, and walked away with $1,000; the other seven thousand nine hundred and twenty people walked away with $1,001,000.  Out of the four thousand people who opened only box A, only four found it empty.

The agreement is unanimous: Epsilon is really quite bad at this.  So, do you one-box, or two-box?

There are some important differences here from the original problem.  First, Epsilon won't let you open either box until you've decided whether to open one or both, and he will leave with the other box.  Second, while Epsilon's false positive rate in identifying two-boxers is quite impressive, with mistakes about one-boxers only .1% of the time, his false negative rate is quite unimpressive - he catches only 1% of everybody who two-boxes.  Whatever heuristic he's using, he clearly prefers letting two-boxers slide over accidentally punishing one-boxers.
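For concreteness, here is a quick sketch (in Python) of the average payoff each strategy earned among Epsilon's previous visitors, using only the frequencies stated above. This empirical comparison is one input to the decision, not a resolution of the causal-vs-evidential question the post is probing.

```python
# Average observed payoff per strategy, using the frequencies from the post:
# 8,000 two-boxers (80 of whom found the million missing) and
# 4,000 one-boxers (4 of whom found box A empty).

two_boxers, caught_two_boxers = 8000, 80
one_boxers, caught_one_boxers = 4000, 4

ev_two_box = ((two_boxers - caught_two_boxers) * 1_001_000
              + caught_two_boxers * 1_000) / two_boxers
ev_one_box = ((one_boxers - caught_one_boxers) * 1_000_000
              + caught_one_boxers * 0) / one_boxers

print(f"average payoff, two-boxing: ${ev_two_box:,.0f}")  # $991,000
print(f"average payoff, one-boxing: ${ev_one_box:,.0f}")  # $999,000
```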

I'm curious to know whether anybody would two-box in this scenario and why, and particularly curious in the reasoning of anybody whose answer is different between the original Newcomb problem and this one.

HPMOR and the Power of Consciousness

-3 Algernoq 25 November 2015 07:00AM

Throughout HPMOR, the author has included many fascinating details about how the real world works, and how to gain power. The Mirror of CEV seems like a lesson in what a true Friendly AI could look like and do.

I've got a weirder theory. (Roll for sanity...)

The entire story is plausible-deniability cover for explaining how to get the Law of Intention to work reliably.

(All quoted text is from HPMOR.)

This Mirror reflects itself perfectly and therefore its existence is absolutely stable. 

"This Mirror" is the Mind, or consciousness. The only thing a Mind can be sure of is that it is a Mind.

The Mirror's most characteristic power is to create alternate realms of existence, though these realms are only as large in size as what can be seen within the Mirror

A Mind's most characteristic power is to create alternate realms of existence, though these realms are only as large in size as what can be seen within the Mind.

Showing any person who steps before it an illusion of a world in which one of their desires has been fulfilled.

The final property upon which most tales agree, is that whatever the unknown means of commanding the Mirror - of that Key there are no plausible accounts - the Mirror's instructions cannot be shaped to react to individual people...the legends are unclear on what rules can be given, but I think it must have something to do with the Mirror's original intended use - it must have something to do with the deep desires and wishes arising from within the person.

More specifically, the Mirror shows a universe that obeys a consistent set of physical laws. From the set of all wish-fulfillment fantasies, it shows a universe that could actually plausibly exist.

It is known that people and other objects can be stored therein

Actors store other minds within their own Mind. Engineers store physical items within their Mind. The Mirror is a Mind.

the Mirror alone of all magics possesses a true moral orientation

The Mind alone of all the stuff that exists possesses a true moral orientation.

If that device had been completed, the story claimed, it would have become an absolutely stable existence that could withstand the channeling of unlimited magic in order to grant wishes. And also - this was said to be the vastly harder task - the device would somehow avert the inevitable catastrophes any sane person would expect to follow from that premise. 

An ideal Mind would grant wishes without creating catastrophes. Unfortunately, we're not quite ideal minds, even though we're pretty good.

Professor Quirrell made to walk away from the Mirror, and seemed to halt just before reaching the point where the Mirror would no longer have reflected him, if it had been reflecting him.

My self-image can only go where it is reflected in my Mind. In other words, I can't imagine what it would be like to be a philosophical zombie.

Most powers of the Mirror are double-sided, according to legend. So you could banish what is on the other side of the Mirror instead. Send yourself, instead of me, into that frozen instant. If you wanted to, that is.

Let's interpret this scene: We've got a Mind/consciousness (the Mirror), we've got a self-image (Riddle) as well as the same spirit in a different self-image (Harry), and we've got a specific Extrapolated Volition instance in the mind (Dumbledore shown in the Mirror). This Extrapolated Volition instance is a consistent universe that could actually exist.

It sounds like the Process of the Timeless trap causes some Timeless Observer to choose one side of the Mirror as the real Universe, trapping the universe on the other side of the mirror in a frozen instant from the Timeless Observer's perspective.

The implication: the Mind has the power to choose which Universes it experiences from the set of all possible Universes extending from the current point.

All right, screw this nineteenth-century garbage. Reality wasn't atoms, it wasn't a set of tiny billiard balls bopping around. That was just another lie. The notion of atoms as little dots was just another convenient hallucination that people clung to because they didn't want to confront the inhumanly alien shape of the underlying reality. No wonder, then, that his attempts to Transfigure based on that hadn't worked. If he wanted power, he had to abandon his humanity, and force his thoughts to conform to the true math of quantum mechanics.

There were no particles, there were just clouds of amplitude in a multiparticle configuration space and what his brain fondly imagined to be an eraser was nothing except a gigantic factor in a wavefunction that happened to factorize, it didn't have a separate existence any more than there was a particular solid factor of 3 hidden inside the number 6, if his wand was capable of altering factors in an approximately factorizable wavefunction then it should damn well be able to alter the slightly smaller factor that Harry's brain visualized as a patch of material on the eraser -

Had to see the wand as enforcing a relation between separate past and future realities, instead of changing anything over time - but I did it, Hermione, I saw past the illusion of objects, and I bet there's not a single other wizard in the world who could have. 

This seems like another giant hint about magical powers.

"I had wondered if perhaps the Words of False Comprehension might be understandable to a student of Muggle science. Apparently not."

The author is disappointed that we don't get his hints. 

If the conscious mind was in reality a wish-granting machine, then how could I test this without going insane?

The Mirror of Perfect Reflection has power over what is reflected within it, and that power is said to be unchallengeable. But since the True Cloak of Invisibility produces a perfect absence of image, it should evade this principle rather than challenging it.

A method to test this seems to be to become aware of one's own ego-image (stand in front of the Mirror), vividly imagine a different ego-image without identifying with it (bring in a different personality containing the same Self under an Invisibility Cloak), suddenly switch ego-identification to the other personality (swap the Invisibility Cloak in less than a second), and then become distracted so the ego-switch becomes permanent (Dumbledore traps himself in the Mirror).

I can't think of a way to test this without sanity damage. Comments?

Creating lists

2 casebash 25 November 2015 04:41AM

Suppose you are trying to create a list. It may be of the "best" popular science books, or most controversial movies of the last twenty years, tips for getting over a breakup or the most interesting cat gifs posted in the last few days.

There are many reasons for wanting to create one of these lists, but only a few main simple methods:


  1. Voting model - This is the simplest model, but popularity doesn't always equal quality. It is also particularly problematic for regularly updated lists (like Reddit), where a constantly changing audience can result in large amounts of duplicate content and where easily consumable content has an advantage.
  2. Curator model - A single expert can often do an admirable job of collecting high-quality content, but this is subject to their own personal biases. It is also effort-intensive to evaluate different curators to see if they have done a good job.
  3. Voting model with (content) rules - This can cut out the irrelevant or sugary content that is often upvoted, but creating good rules is hard. Often there is no objective line between high and low-quality content. These rules can often result in conflict.
  4. Voting model with sections - This is a solution to some of the limitations of 1 and 3. Instead of declaring some things off-topic outright, they can be thrown into their own section (see the sketch below). This is the optimal solution, but is usually neglected.
  5. Voting model with selection - This covers any model where only certain people are allowed to vote. Sometimes selection is extraordinarily rigorous; however, it can still be very effective even when it isn't. As an example, Metafilter charges a $5 one-time fee, and that is sufficient to keep the quality high.
The main point is that model 1 shouldn't automatically be selected. The other models have advantages too.
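To make model 4 concrete, here is a minimal sketch in Python - item titles, sections, and vote counts are all invented for illustration. Items are upvoted as usual, but each one competes only within its own section, so easily consumable content stops crowding out everything else; the same skeleton extends to model 5 by filtering whose votes get counted before tallying.

```python
from collections import defaultdict

# Hypothetical submissions: (title, section, votes). All names invented for illustration.
items = [
    ("Deep dive: replication in psychology", "essays", 42),
    ("Interesting cat gif", "fluff", 310),
    ("Best popular science books", "lists", 87),
    ("Another cat gif", "fluff", 250),
    ("Tips for getting over a breakup", "advice", 65),
]

def build_front_page(items, per_section=2):
    """Model 4: rank by votes within each section, then surface the top of every section."""
    sections = defaultdict(list)
    for title, section, votes in items:
        sections[section].append((votes, title))
    front_page = {}
    for section, entries in sections.items():
        entries.sort(reverse=True)  # highest-voted first, but only competing within its section
        front_page[section] = [title for _, title in entries[:per_section]]
    return front_page

print(build_front_page(items))
# Fluff still gets ranked, but it no longer displaces essays or advice on the front page.
```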


Mark Manson and Rationality

4 casebash 25 November 2015 03:34AM

As those of you on the Less Wrong chat may know, Mark Manson is my favourite personal development author. I thought I'd share those articles that are most related to rationality, as I figured that they would have the greatest chance of being appreciated.


Immediately after writing this article, I realised that I left one thing unclear, so I'll explain it now. Why have I included articles discussing the terms "life purpose" and "finding yourself"? The reason is that I think that it is very important to provide linguistic bridges between some of the vague everyday language that people often use and the more precise language expected by rationalists.


Why I’m wrong about everything (and so are you):

“When looked at from this perspective, personal development can actually be quite scientific. The hypotheses are our beliefs. Our actions and behaviors are the experiments. The resulting internal emotions and thought patterns are our data. We can then take those and compare them to our original beliefs and then integrate them into our overall understanding of our needs and emotional make-up for the future.”

“You test those beliefs out in the real world and get real-world feedback and emotional data from them. You may find that you, in fact, don’t enjoy writing every day as much as you thought you would. You may discover that you actually have a lot of trouble expressing some of your more exquisite thoughts than you first assumed. You realize that there’s a lot of failure and rejection involved in writing and that kind of takes the fun out of it. You also find that you spend more time on your site’s design and presentation than you do on the writing itself, that that is what you actually seem to be enjoying. And so you integrate that new information and adjust your goals and behaviors accordingly.”


7 strange questions that can help you find your life purpose:


Mark Manson deconstructs the notion of “life purpose”, replacing it with a question that is much more tractable:


“Part of the problem is the concept of “life purpose” itself. The idea that we were each born for some higher purpose and it’s now our cosmic mission to find it. This is the same kind of shitty logic used to justify things like spirit crystals or that your lucky number is 34 (but only on Tuesdays or during full moons).

Here’s the truth. We exist on this earth for some undetermined period of time. During that time we do things. Some of these things are important. Some of them are unimportant. And those important things give our lives meaning and happiness. The unimportant ones basically just kill time.

So when people say, “What should I do with my life?” or “What is my life purpose?” what they’re actually asking is: “What can I do with my time that is important?””

5 lessons from 5 years travelling the world:

While this isn’t the only way that the cliché of “finding yourself” can be broken down into something more understandable, it is quite a good attempt:

“Many people embark on journeys around the world in order to “find themselves.” In fact, it’s sort of cliché, the type of thing that sounds deep and important but doesn’t actually mean anything.

Whenever somebody claims they want to travel to “find themselves,” this is what I think they mean: They want to remove all of the major external influences from their lives, put themselves into a random and neutral environment, and then see what person they turn out to be.

By removing their external influences — the overbearing boss at work, the nagging mother, the pressure of a few unsavory friends — they’re then able to see how they actually feel about their life back home.

So perhaps a better way to put it is that you don’t travel to “find yourself,” you travel in order to get a more accurate perception of who you were back home, and whether you actually like that person or not.”

Love is not enough:

Mark Manson attacks one of the biggest myths in our society:

“In our culture, many of us idealize love. We see it as some lofty cure-all for all of life’s problems. Our movies and our stories and our history all celebrate it as life’s ultimate goal, the final solution for all of our pain and struggle. And because we idealize love, we overestimate it. As a result, our relationships pay a price.

When we believe that “all we need is love,” then like Lennon, we’re more likely to ignore fundamental values such as respect, humility and commitment towards the people we care about. After all, if love solves everything, then why bother with all the other stuff — all of the hard stuff?

But if, like Reznor, we believe that “love is not enough,” then we understand that healthy relationships require more than pure emotion or lofty passions. We understand that there are things more important in our lives and our relationships than simply being in love. And the success of our relationships hinges on these deeper and more important values.”


6 Healthy Relationship Habits Most People Think Are Toxic:


Edit: Read the warning in the comments


I included this article because of the discussion of the first habit.


"There’s this guy. His name is John Gottman. And he is like the Michael Jordan of relationship research. Not only has he been studying intimate relationships for more than 40 years, but he practically invented the field.

His “thin-slicing” process boasts a staggering 91% success rate in predicting whether newly-wed couples will divorce within 10 years — a staggeringly high result for any psychological research.


Gottman devised the process of “thin-slicing” relationships, a technique where he hooks couples up to all sorts of biometric devices and then records them having short conversations about their problems. Gottman then goes back and analyzes the conversation frame by frame looking at biometric data, body language, tonality and specific words chosen. He then combines all of this data together to predict whether your marriage sucks or not.

And the first thing Gottman says in almost all of his books is this: The idea that couples must communicate and resolve all of their problems is a myth."


I highly recommend these articles. They are based on research to an extent, but also on his personal experience, so they are not purely research-based. If that is what you want, you should look for a review article instead.

Attachment theory

The guide to happiness

The guide to self-discipline

Is it sensible for an ambitious nonsmoker to use e-cigarettes?

2 hg00 24 November 2015 10:48PM

Many of you have already seen Gwern's page on the topic of nicotine use. Nicotine is interesting because it's a stimulant, it may increase intelligence (I believe Daniel Kahneman said he was smarter back when he used to smoke), and it may be useful for habit formation.

However, the Cleveland Clinic thinks e-cigarettes put your heart at risk. This site covers some of the same research and offers a counterpoint:

Elaine Keller, president of the CASAA, pointed to other recently published research that she said shows outcomes in the “real world” as opposed to a laboratory. One study showed that smokers put on nicotine replacement therapy after suffering an acute coronary event like a heart attack or stroke had no greater risk of a second incident within one year than those who were not.

I managed to get ahold of the study in question, and it seems to me that it damns e-cigarettes with faint praise. Based on a quick skim, researchers studied smokers who had recently suffered an acute coronary syndrome (ACS). The treatment group was given e-cigarettes for nicotine replacement therapy, while the control group was left alone. Given that baseline success rates in quitting smoking are on the order of 10-20%, it seems safe to say that the control group mostly continued smoking as they had previously. (The study authors say "tobacco use during follow-up could not be accurately assessed because of the variability in documentation and, therefore, was not included in the present analysis", so we are left guessing.)

29% of the nicotine replacement group suffered an adverse event in the year following the intervention, and 31% of the control group did--similar numbers. So one interpretation of this study is that if you are a smoker in your fifties and you have already experienced an acute coronary syndrome, switching from cigarettes to e-cigs will do little to help you avoid further health issues in the next year. Doesn't exactly inspire confidence.
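As a rough sanity check on whether 29% versus 31% is even distinguishable from noise, here is a minimal two-proportion z-test sketch. The group sizes are hypothetical (the post does not give them), so treat this purely as an illustration of the method rather than a reanalysis of the study.

```python
from math import sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Approximate z-statistic for the difference between two observed proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical group sizes of 150 per arm, purely for illustration.
z = two_proportion_z(0.29, 150, 0.31, 150)
print(round(z, 2))  # ~ -0.38: at this sample size, 29% vs 31% is well within noise
```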

Another, more recent article states that older smokers should see health gains from quitting cigarettes, which drives another nail into the coffin for e-cigarettes. It also states:

More conclusive answers about how e-cigarettes affect the body long-term are forthcoming, Rose said. Millions in research dollars are being funneled toward this topic.

“There is some poor science,” Rose said. “Everybody is trying to get something out quick in order to get funding.”

So based on this very cursory analysis I'm inclined to hold off until more research comes in. But these are just a few data points--I haven't read this government review which claims "e-cigarettes are about 95% less harmful than tobacco cigarettes", for example.

The broad issue I see is that most e-cigarette literature is focused on whether switching from cigarettes to e-cigarettes is a good idea, not whether using e-cigarettes as a nonsmoker is a good idea. I'm inclined to believe the first is true, but I'd hesitate to use research that proves the first to prove the second (as exemplified by the study I took a look at).

Anyway, if you're in the US and you want to buy e-cigarette products it may be best to do it soon before they're regulated out of existence.


The Winding Path

7 OrphanWilde 24 November 2015 09:23PM

The First Step

The first step on the path to truth is superstition.  We all start there, and should acknowledge that we start there.

Superstition is, contrary to our immediate feelings about the word, the first stage of understanding.  Superstition is the attribution of unrelated events to a common (generally unknown or unspecified) cause - it could be called pattern recognition. The "supernatural" component generally included in the definition is superfluous, because supernatural merely refers to that which isn't part of nature - which is to say, reality - an elaborate way of saying something whose relationship to nature is not yet understood, or else nonexistent.  If we discovered that ghosts are real, and identified an explanation - overlapping entities in a many-worlds universe, say - they'd cease to be supernatural and merely be natural.

Just as the supernatural refers to unexplained or imaginary phenomena, superstition refers to unexplained or imaginary relationships, without the necessity of cause.  If you designed an AI in a game which, after five rounds of being killed whenever it went into rooms with green-colored walls, started avoiding rooms with green-colored walls, you have developed a good AI.  It is engaging in superstition; it has developed an incorrect understanding of the issue.  But it hasn't gone down the wrong path - there is no wrong path in understanding, there is only the mistake of stopping.  Superstition, like all belief, is only useful if you're willing to discard it.

The Next Step

Incorrect understanding is the first - and necessary - step to correct understanding.  It is, indeed, every step towards correct understanding.  Correct understanding is a path, not an achievement, and it is pursued, not by arriving at the correct conclusion in the first place, but by testing your ideas and discarding those which are incorrect.

No matter how intelligent you are, you cannot skip the "incorrect understanding" step of knowledge, because that is every step of knowledge.  You must come up with wrong ideas in order to get at the right ones - which will always be one step further.  You must test your ideas.  And again, the only mistake is stopping, in assuming that you have it right now.

Intelligence is never your bottleneck.  The ability to think faster isn't necessarily the ability to arrive at the right answer faster, because the right answer requires many wrong ones - and, more importantly, it requires identifying which answers are indeed wrong, which is the slow part of the process.

Better answers are arrived at by the process of invalidating wrong answers.

The Winding Path

The process of becoming Less Wrong is the process of being, in the first place, wrong.  It is the state of realizing that you're almost certainly incorrect about everything - but working on getting incrementally closer to an unachievable "correct".  It is a state of anti-hubris, and requires a delicate balance between the idea that one can be closer to the truth, and the idea that one cannot actually achieve it.

The art of rationality is the art of walking this narrow path.  If ever you think you have the truth - discard that hubris, for three steps from here you'll see it for superstition, and if you cannot see that, you cannot progress, and there your search for truth will end.  That is the path of the faithful.

But worse, the path is not merely narrow, but winding, with frequent dead ends requiring frequent backtracking.  If ever you think you're closer to the truth - discard that hubris, for it may inhibit you from leaving a dead end, and there your search for truth will end.  That is the path of the crank.

The path of rationality is winding and directionless.  It may head towards beauty, then towards ugliness; towards simplicity, then complexity.  The correct direction isn't the aesthetic one; those who head towards beauty may create great art, but do not find truth.  Those who head towards simplicity might open new mathematical doors and find great and useful things inside - but they don't find truth, either.  Truth is its own path, found only by discarding what is wrong.  It passes through simplicity, it passes through ugliness; it passes through complexity, and also beauty.  It doesn't belong to any one of these things.

The path of rationality is a path without destination.



Written as an experiment in the aesthetic of Less Wrong.  I'd appreciate feedback into the aesthetic interpretation of Less Wrong, rather than the sense of deep wisdom emanating from it (unless the deep wisdom damages the aesthetic).

Open thread, Nov. 23 - Nov. 29, 2015

5 MrMind 23 November 2015 07:59AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Some thoughts on decentralised prediction markets

-4 Clarity 23 November 2015 04:35AM

**Thought experiment 1 – arbitrage opportunities in prediction market**

You’re Mitt Romney, biding your time before riding in on your white horse to win the US republican presidential preselection (bear with me, I’m Australian and don’t know US politics). Anyway, you’ve had your run and you’re not too fussed, but some of the old guard want you back in the fight.

Playing out like an xkcd comic strip (‘Okay’), you scheme. ‘Maybe I can trump Trump at his own game and make a bit of dosh on the election’.

A data-scientist you keep on retainer sometimes talks about LessWrong and other dry things. One day she mentions that decentralised prediction markets are being developed, one of which is Augur. She says one can bet on the outcome of events such as elections.

You’ve made a fair few bucks in your day. You read the odd Investopedia page and a couple of random forum blog posts. And there’s that financial institute you run. Arbitrage opportunity, you think.

You don’t fancy your chances of winning the election. 40% chance, you reckon. So, you bet against yourself. Win the election, lose the bet. Lose the election, win the bet. Losing the election doesn’t mean much to you, losing the bet doesn’t mean much to you, winning the election means a lot to you, and winning the bet doesn’t mean much to you. There ya go. Perhaps if you put

Let’s turn this into a probability weighted decision table (game theory):

Not participating in prediction market:

| Outcome | Probability | Value |
|---|---|---|
| Election win | 0.4 | +2 |
| Election lose | 0.6 | -1 |

Cumulative probability weighted value: (0.4*2) + (0.6*-1) = +0.2 value

Participating in prediction market:

| Outcome | Probability | Election value | Bet value |
|---|---|---|---|
| Election win, bet lose | 0.4 | +2 | 0 |
| Election lose, bet win | 0.6 | -1 | 0 |

Cumulative probability weighted value: (0.4*2) + (0.6*-1) = +0.2 value

They’re the same outcome!
Looks like my intuitions were wrong. Unless you value winning more than losing, placing an additional bet - even in a different form of capital (cash vs. political capital, for instance) - just takes on additional risk; it isn't an arbitrage opportunity.
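A minimal sketch of the same arithmetic, using the post's utilities (+2 for winning the election, -1 for losing it, and 0 for the bet either way):

```python
P_WIN = 0.4  # Romney's own estimate of winning the election

def expected_value(bet_value_if_election_lost=0.0, bet_value_if_election_won=0.0):
    """Probability-weighted value of the election outcome plus an optional bet against yourself."""
    win_branch = P_WIN * (2 + bet_value_if_election_won)            # win the election, lose the bet
    lose_branch = (1 - P_WIN) * (-1 + bet_value_if_election_lost)   # lose the election, win the bet
    return win_branch + lose_branch

print(expected_value())                                # 0.2 - no bet placed
print(expected_value(0.0, 0.0))                        # 0.2 - bet placed, but worth nothing to him: same outcome
print(expected_value(bet_value_if_election_lost=1.0))  # 0.8 - the hedge only helps if its payout actually has value to him
```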

For the record, Mitt Romney probably wouldn’t make this mistake, but what does this post suggest I know about prediction?


**Thought experiment 2 – insider trading**

Say you’re a C-level executive in a publicly listed enterprise. (For this example you don’t strictly need to be part of a publicly listed organisation, but it serves to illustrate my intuitions.) Say you have just been briefed by your auditors about massive fraud by a mid-level manager that will devastate your company. Ordinarily, you may not know how to safely dump your stocks on the stock exchange for several reasons, one of which is insider trading.

Now, on a prediction market, the executive could retain their stocks, thus not signalling distrust of the company themselves (which itself is information the company may be legally obliged to disclose since it materially influences share price) but make a bet on a prediction market of impending stock losses, thus hedging (not arbitraging, as demonstrated above) their bets.
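A minimal sketch of why this is a hedge rather than an arbitrage - every number here (share count, prices, stake, payout multiple) is hypothetical:

```python
def portfolio_value(share_price, shares=10_000, bet_stake=50_000, bet_payout=3.0, fraud_revealed=True):
    """Value of a stock position plus a prediction-market bet that the share price will fall."""
    stock = shares * share_price
    # The bet pays out only in the bad scenario; otherwise the stake is simply lost.
    bet = bet_stake * bet_payout if fraud_revealed else -bet_stake
    return stock + bet

# Hypothetical prices: $50 per share today, $20 if the fraud becomes public.
print(portfolio_value(20, fraud_revealed=True))   # 350,000: the winning bet partly offsets the stock loss
print(portfolio_value(50, fraud_revealed=False))  # 450,000: the stake is lost, but the stock is fine
# Both scenarios still carry risk - the bet cushions the downside, it does not lock in a riskless profit.
```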


**Thought experiment 3 – market efficiency**

I’d expect that prediction opportunities will be most popular where individuals, weighted by their capital, believe they have private, market-relevant information. For instance, if a prediction opportunity is that Canada’s prime minister says ‘I’m silly’ on his next TV appearance, many people might believe they know him personally well enough to judge the otherwise absurd-sounding proposition as more probable than it sounds. They may give it a 0.2% chance rather than a 0.1% chance. However, if you are the prime minister yourself, you may decide to bet on this opportunity and make a quick, easy profit…I’m not sure where I was going with this anymore. But it was something about incentives to misrepresent how much relevant market information one has, and the amount that competing bettors have (people who bet WITH you).

[Link] A rational response to the Paris attacks and ISIS

-1 Gleb_Tsipursky 23 November 2015 01:47AM

Here's my op-ed that uses long-term orientation, probabilistic thinking, numeracy, consider the alternative, reaching our actual goals, avoiding intuitive emotional reactions and attention bias, and other rationality techniques to suggest more rational responses to the Paris attacks and the ISIS threat. It's published in the Sunday edition of The Plain Dealer​, a major newspaper (16th in the US). This is part of my broader project, Intentional Insights, of conveying rational thinking, including about politics, to a broad audience to raise the sanity waterline.

[Link] Less Wrong Wiki article with very long summary of Daniel Kahneman's Thinking, Fast and Slow

6 Gleb_Tsipursky 22 November 2015 04:32PM

I've made very extensive notes, along with my assessment, of Daniel Kahneman's Thinking, Fast and Slow, and have passed them around to aspiring rationalist friends who found them very useful. So I thought I would share them with the Less Wrong community by creating a Less Wrong Wiki article with these notes. Feel free to optimize the article based on your own notes as well. Hope this proves as helpful to you as it did to those others with whom I shared my notes.



A map: Causal structure of a global catastrophe

4 turchin 21 November 2015 04:07PM

New LW Meetup: Cambridge UK

2 FrankAdamek 20 November 2015 04:52PM

This summary was posted to LW Main on November 13th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, London, Madison WI, Melbourne, Moscow, Mountain View, New Hampshire, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.

continue reading »

Goal setting journal (November)

2 Clarity 20 November 2015 07:54AM

Inspired by the group rationality diary and open thread, this is the second goal setting journal (GSJ) thread.

If you have a goal worth setting then it goes here.


Notes for future GSJ posters:

1. Please add the 'gsj' tag.

2. Check if there is an active GSJ thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. GSJ Threads should be posted in Discussion, and not Main.

4. GSJ Threads should run for no longer than 1 week, but you may set goals, subgoals and tasks as far into the future as you please.

5. No one is in charge of posting these threads. If it's time for a new thread, and you want a new thread, just create it.

Stupid Questions November 2015

4 Tem42 19 November 2015 10:36PM

This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.

Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.

To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.

Rationality Reading Group: Part N: A Human's Guide to Words

7 Gram_Stone 18 November 2015 11:50PM

This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.

Welcome to the Rationality reading group. This fortnight we discuss Part N: A Human's Guide to Words (pp. 677-801) and Interlude: An Intuitive Explanation of Bayes's Theorem (pp. 803-826). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

N. A Human's Guide to Words

153. The Parable of the Dagger - A word fails to connect to reality in the first place. Is Socrates a framster? Yes or no?

154. The Parable of Hemlock - Your argument, if it worked, could coerce reality to go a different way by choosing a different word definition. Socrates is a human, and humans, by definition, are mortal. So if you defined humans to not be mortal, would Socrates live forever?

You try to establish any sort of empirical proposition as being true "by definition". Socrates is a human, and humans, by definition, are mortal. So is it a logical truth if we empirically predict that Socrates should keel over if he drinks hemlock? It seems like there are logically possible, non-self-contradictory worlds where Socrates doesn't keel over - where he's immune to hemlock by a quirk of biochemistry, say. Logical truths are true in all possible worlds, and so never tell you which possible world you live in - and anything you can establish "by definition" is a logical truth.

You unconsciously slap the conventional label on something, without actually using the verbal definition you just gave. You know perfectly well that Bob is "human", even though, on your definition, you can never call Bob "human" without first observing him to be mortal.

155. Words as Hidden Inferences - The mere presence of words can influence thinking, sometimes misleading it.

The act of labeling something with a word disguises a challengeable inductive inference you are making. If the last 11 egg-shaped objects drawn have been blue, and the last 8 cubes drawn have been red, it is a matter of induction to say this rule will hold in the future. But if you call the blue eggs "bleggs" and the red cubes "rubes", you may reach into the barrel, feel an egg shape, and think "Oh, a blegg."

156. Extensions and Intensions - You try to define a word using words, in turn defined with ever-more-abstract words, without being able to point to an example. "What is red?" "Red is a color." "What's a color?" "It's a property of a thing?" "What's a thing? What's a property?" It never occurs to you to point to a stop sign and an apple.

The extension doesn't match the intension. We aren't consciously aware of our identification of a red light in the sky as "Mars", which will probably happen regardless of your attempt to define "Mars" as "The God of War".

157. Similarity Clusters - Your verbal definition doesn't capture more than a tiny fraction of the category's shared characteristics, but you try to reason as if it does. When the philosophers of Plato's Academy claimed that the best definition of a human was a "featherless biped", Diogenes the Cynic is said to have exhibited a plucked chicken and declared "Here is Plato's Man." The Platonists promptly changed their definition to "a featherless biped with broad nails".

158. Typicality and Asymmetrical Similarity - You try to treat category membership as all-or-nothing, ignoring the existence of more and less typical subclusters. Ducks and penguins are less typical birds than robins and pigeons. Interestingly, a between-groups experiment showed that subjects thought a disease was more likely to spread from robins to ducks on an island, than from ducks to robins.

159. The Cluster Structure of Thingspace - A verbal definition works well enough in practice to point out the intended cluster of similar things, but you nitpick exceptions. Not every human has ten fingers, or wears clothes, or uses language; but if you look for an empirical cluster of things which share these characteristics, you'll get enough information that the occasional nine-fingered human won't fool you.

160. Disguised Queries - You ask whether something "is" or "is not" a category member but can't name the question you really want answered. What is a "man"? Is Barney the Baby Boy a "man"? The "correct" answer may depend considerably on whether the query you really want answered is "Would hemlock be a good thing to feed Barney?" or "Will Barney make a good husband?"

161. Neural Categories - You treat intuitively perceived hierarchical categories like the only correct way to parse the world, without realizing that other forms of statistical inference are possible even though your brain doesn't use them. It's much easier for a human to notice whether an object is a "blegg" or "rube" than for a human to notice that red objects never glow in the dark, but red furred objects have all the other characteristics of bleggs. Other statistical algorithms work differently.

162. How An Algorithm Feels From Inside - You talk about categories as if they are manna fallen from the Platonic Realm, rather than inferences implemented in a real brain. The ancient philosophers said "Socrates is a man", not, "My brain perceptually classifies Socrates as a match against the 'human' concept".

You argue about a category membership even after screening off all questions that could possibly depend on a category-based inference. After you observe that an object is blue, egg-shaped, furred, flexible, opaque, luminescent, and palladium-containing, what's left to ask by arguing, "Is it a blegg?" But if your brain's categorizing neural network contains a (metaphorical) central unit corresponding to the inference of blegg-ness, it may still feel like there's a leftover question.

163. Disputing Definitions - You allow an argument to slide into being about definitions, even though it isn't what you originally wanted to argue about. If, before a dispute started about whether a tree falling in a deserted forest makes a "sound", you asked the two soon-to-be arguers whether they thought a "sound" should be defined as "acoustic vibrations" or "auditory experiences", they'd probably tell you to flip a coin. Only after the argument starts does the definition of a word become politically charged.

164. Feel the Meaning - You think a word has a meaning, as a property of the word itself; rather than there being a label that your brain associates to a particular concept. When someone shouts, "Yikes! A tiger!", evolution would not favor an organism that thinks, "Hm... I have just heard the syllables 'Tie' and 'Grr' which my fellow tribemembers associate with their internal analogues of my own tiger concept and which aiiieeee CRUNCH CRUNCH GULP." So the brain takes a shortcut, and it seems that the meaning of tigerness is a property of the label itself. People argue about the correct meaning of a label like "sound".

165. The Argument from Common Usage - You argue over the meanings of a word, even after all sides understand perfectly well what the other sides are trying to say. The human ability to associate labels to concepts is a tool for communication. When people want to communicate, we're hard to stop; if we have no common language, we'll draw pictures in sand. When you each understand what is in the other's mind, you are done.

You pull out a dictionary in the middle of an empirical or moral argument. Dictionary editors are historians of usage, not legislators of language. If the common definition contains a problem - if "Mars" is defined as the God of War, or a "dolphin" is defined as a kind of fish, or "Negroes" are defined as a separate category from humans, the dictionary will reflect the standard mistake.

You pull out a dictionary in the middle of any argument ever. Seriously, what the heck makes you think that dictionary editors are an authority on whether "atheism" is a "religion" or whatever? If you have any substantive issue whatsoever at stake, do you really think dictionary editors have access to ultimate wisdom that settles the argument?

You defy common usage without a reason, making it gratuitously hard for others to understand you. Fast stand up plutonium, with bagels without handle.

166. Empty Labels - You use complex renamings to create the illusion of inference. Is a "human" defined as a "mortal featherless biped"? Then write: "All [mortal featherless bipeds] are mortal; Socrates is a [mortal featherless biped]; therefore, Socrates is mortal." Looks less impressive that way, doesn't it?

167. Taboo Your Words - If Albert and Barry aren't allowed to use the word "sound", then Albert will have to say "A tree falling in a deserted forest generates acoustic vibrations", and Barry will say "A tree falling in a deserted forest generates no auditory experiences". When a word poses a problem, the simplest solution is to eliminate the word and its synonyms.

168. Replace the Symbol with the Substance - The existence of a neat little word prevents you from seeing the details of the thing you're trying to think about. What actually goes on in schools once you stop calling it "education"? What's a degree, once you stop calling it a "degree"? If a coin lands "heads", what's its radial orientation? What is "truth", if you can't say "accurate" or "correct" or "represent" or "reflect" or "semantic" or "believe" or "knowledge" or "map" or "real" or any other simple term?

169. Fallacies of Compression - You have only one word, but there are two or more different things-in-reality, so that all the facts about them get dumped into a single undifferentiated mental bucket. It's part of a detective's ordinary work to observe that Carol wore red last night, or that she has black hair; and it's part of a detective's ordinary work to wonder if maybe Carol dyes her hair. But it takes a subtler detective to wonder if there are two Carols, so that the Carol who wore red is not the same as the Carol who had black hair.

170. Categorizing Has Consequences - You see patterns where none exist, harvesting other characteristics from your definitions even when there is no similarity along that dimension. In Japan, it is thought that people of blood type A are earnest and creative, blood type Bs are wild and cheerful, blood type Os are agreeable and sociable, and blood type ABs are cool and controlled.

171. Sneaking in Connotations - You try to sneak in the connotations of a word, by arguing from a definition that doesn't include the connotations. A "wiggin" is defined in the dictionary as a person with green eyes and black hair. The word "wiggin" also carries the connotation of someone who commits crimes and launches cute baby squirrels, but that part isn't in the dictionary. So you point to someone and say: "Green eyes? Black hair? See, told you he's a wiggin! Watch, next he's going to steal the silverware."

172. Arguing "By Definition" - You claim "X, by definition, is a Y!" On such occasions you're almost certainly trying to sneak in a connotation of Y that wasn't in your given definition. You define "human" as a "featherless biped", and point to Socrates and say, "No feathers - two legs - he must be human!" But what you really care about is something else, like mortality. If what was in dispute was Socrates's number of legs, the other fellow would just reply, "Whaddaya mean, Socrates's got two legs? That's what we're arguing about in the first place!"

You claim "Ps, by definition, are Qs!" If you see Socrates out in the field with some biologists, gathering herbs that might confer resistance to hemlock, there's no point in arguing "Men, by definition, are mortal!" The main time you feel the need to tighten the vise by insisting that something is true "by definition" is when there's other information that calls the default inference into doubt.

You try to establish membership in an empirical cluster "by definition". You wouldn't feel the need to say, "Hinduism, by definition, is a religion!" because, well, of course Hinduism is a religion. It's not just a religion "by definition", it's, like, an actual religion. Atheism does not resemble the central members of the "religion" cluster, so if it wasn't for the fact that atheism is a religion by definition, you might go around thinking that atheism wasn't a religion. That's why you've got to crush all opposition by pointing out that "Atheism is a religion" is true by definition, because it isn't true any other way.

173. Where to Draw the Boundary? - Your definition draws a boundary around things that don't really belong together. You can claim, if you like, that you are defining the word "fish" to refer to salmon, guppies, sharks, dolphins, and trout, but not jellyfish or algae. You can claim, if you like, that this is merely a list, and there is no way a list can be "wrong". Or you can stop playing nitwit games and admit that you made a mistake and that dolphins don't belong on the fish list.

174. Entropy, and Short Codes - You use a short word for something that you won't need to describe often, or a long word for something you'll need to describe often. This can result in inefficient thinking, or even misapplications of Occam's Razor, if your mind thinks that short sentences sound "simpler". Which sounds more plausible, "God did a miracle" or "A supernatural universe-creating entity temporarily suspended the laws of physics"?

175. Mutual Information, and Density in Thingspace - You draw your boundary around a volume of space where there is no greater-than-usual density, meaning that the associated word does not correspond to any performable Bayesian inferences. Since green-eyed people are not more likely to have black hair, or vice versa, and they don't share any other characteristics in common, why have a word for "wiggin"?

176. Superexponential Conceptspace, and Simple Words - You draw an unsimple boundary without any reason to do so. The act of defining a word to refer to all humans, except black people, seems kind of suspicious. If you don't present reasons to draw that particular boundary, trying to create an "arbitrary" word in that location is like a detective saying: "Well, I haven't the slightest shred of support one way or the other for who could've murdered those orphans... but have we considered John Q. Wiffleheim as a suspect?"

177. Conditional Independence, and Naive Bayes - You use categorization to make inferences about properties that don't have the appropriate empirical structure, namely, conditional independence given knowledge of the class, to be well-approximated by Naive Bayes. No way am I trying to summarize this one. Just read the blog post. (A minimal code sketch follows these summaries.)

178. Words as Mental Paintbrush Handles - You think that words are like tiny little LISP symbols in your mind, rather than words being labels that act as handles to direct complex mental paintbrushes that can paint detailed pictures in your sensory workspace. Visualize a "triangular lightbulb". What did you see?

179. Variable Question Fallacies - You use a word that has different meanings in different places as though it meant the same thing on each occasion, possibly creating the illusion of something protean and shifting. "Martin told Bob the building was on his left." But "left" is a function-word that evaluates with a speaker-dependent variable grabbed from the surrounding context. Whose "left" is meant, Bob's or Martin's?

180. 37 Ways That Words Can Be Wrong - Contains summaries of the sequence of posts about the proper use of words.

Interlude: An Intuitive Explanation of Bayes's Theorem - Exactly what it says on the tin.
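Since entry 177 explicitly declines to summarize itself, here is a minimal sketch of the Naive Bayes idea on the sequence's blegg/rube example - all counts are invented for illustration, and the "naive" part is exactly the assumption that colour, shape, and texture are independent once you know the category.

```python
# Invented training counts, purely for illustration: (category, colour, shape, texture).
observations = (
    [("blegg", "blue", "egg", "furred")] * 18
    + [("blegg", "blue", "egg", "smooth")] * 2
    + [("rube", "red", "cube", "smooth")] * 19
    + [("rube", "red", "egg", "smooth")] * 1
)

def naive_bayes(colour, shape, texture):
    """Classify an object, assuming features are conditionally independent given the category."""
    scores = {}
    for cat in ("blegg", "rube"):
        rows = [o for o in observations if o[0] == cat]
        prior = len(rows) / len(observations)
        likelihood = 1.0
        for i, value in ((1, colour), (2, shape), (3, texture)):
            likelihood *= (sum(1 for o in rows if o[i] == value) + 1) / (len(rows) + 2)  # Laplace smoothing
        scores[cat] = prior * likelihood
    total = sum(scores.values())
    return {cat: score / total for cat, score in scores.items()}

print(naive_bayes("blue", "egg", "furred"))  # overwhelmingly blegg
print(naive_bayes("red", "egg", "smooth"))   # mixed evidence: colour and texture outvote shape, so mostly rube
```

And the interlude's core move as a one-liner, using the stock mammography-style numbers usually quoted with it (1% base rate, 80% true-positive rate, 9.6% false-positive rate):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes's theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

print(round(posterior(0.01, 0.80, 0.096), 3))  # ~0.078: a positive test still leaves under 8% probability
```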


This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group though is discussion, which is in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover The World: An Introduction (pp. 834-839) and Part O: Lawful Truth (pp. 843-883). The discussion will go live on Wednesday, 2 December 2015, right here on the discussion forum of LessWrong.

The Market for Lemons: Quality Uncertainty on Less Wrong

7 signal 18 November 2015 10:06PM

Tl;dr: Articles on LW are, if unchecked (for now by you), heavily distorting a useful view (yours) on what matters.


[This is (though in part only) a five-year update to Patrissimo’s article Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality. However, I wrote most of this article before I became aware of its predecessor. Then again, this reinforces both our articles' main critique.]


I claim that rational discussions in person, at conferences, in forums, on social media, and on blogs suffer from adverse selection and promote unwished-for phenomena such as the availability heuristic. Bluntly stated, they do (as do all other discussions) have a tendency to support ever worse, unimportant, or wrong opinions and articles. More importantly, articles of high relevancy regarding some topics are conspicuously missing. This can also be observed on Less Wrong. It is not the purpose of this article to determine the exact extent of this problem. It shall merely bring to attention that “what you get is not what you should see.” However, I am afraid this effect is largely undervalued.


This result is by design and therefore to be expected. A rational agent will, by definition, post incorrect, incomplete, or not at all in the following instances:

  • Cost-benefit analysis: A rational agent will not post information that reduces his utility by enabling others to compete better and, more importantly, by causing him any effort unless some gain (status, monetary, happiness,…) offsets the former effect. Example: Have you seen articles by Mark Zuckerberg? But I also argue that for random John Doe the personal cost-benefit analysis from posting an article is negative. Even more, the value of your time should approach infinity if you really drink the LW Kool-Aid; however, this shall be the topic of a subsequent article. I suspect the theme of this article may also be restated as a free-riding problem, as it postulates the non-production or under-production of valuable articles and other contributions.
  • Conflicting with law: Topics like drugs (in the western world) and maybe politics or sexuality in other parts of the world are biased due to the risk of persecution, punishment, extortion, etc. And many topics such as in the spheres of rationality, transhumanism, effective altruism, are at least highly sensitive, especially when you continue arguing until you reach their moral extremes.
  • Inconvenience of disagreement: Due to the effort of posting truly anonymously (which currently requires a truly anonymous e-mail address and so forth), disagreeing posts will be avoided, particularly when the original poster is of high status and the risk to rub off on one’s other articles thus increased. This is obviously even truer for personal interactions. Side note: The reverse situation may also apply: more agreement (likes) with high status.
  • Dark knowledge: Even if I know how to acquire a sniper gun that cannot be traced, I will not share this knowledge (as for all other reasons, there are substantially better examples, but I do not want to make spreading dark knowledge a focus of this article).
  • Signaling: Seriously, would you discuss your affiliation to LW in a job interview?! Or tell your friends that you are afraid we live in a simulation? (If you don’t see my point, your rationality is totally off base, see the next point). LW user “Timtyler” commented before: “I also found myself wondering why people remained puzzled about the high observed levels of disagreement. It seems obvious to me that people are poor approximations of truth-seeking agents—and instead promote their own interests. If you understand that, then the existence of many real-world disagreements is explained: people disagree in order to manipulate the opinions and actions of others for their own benefit.”
  • WEIRD-M-LW: It is a known problem that articles on LW are going to be written by authors that are in the overwhelming majority western, educated, industrialized, rich, democratic, and male. The LW surveys show distinctly that there are most likely many further attributes in which the population on LW differs from the rest of the world. LW user “Jpet” argued in a comment very nicely: “But assuming that the other party is in fact totally rational is just silly. We know we're talking to other flawed human beings, and either or both of us might just be totally off base, even if we're hanging around on a rationality discussion board.” LW could certainly use more diversity. Personal anecdote: I was dumbfounded by the current discussion around LW T-shirts sporting slogans such as "Growing Mentally Stronger" which seemed to me intuitively highly counterproductive. I then asked my wife who is far more into fashion and not at all into LW. Her comment (Crocker's warning): “They are great! You should definitely buy one for your son if you want him to go to high school and to be all for himself for the next couple of years; that is, except for the mobbing, maybe.”
  • Genes, minds, hormones & personal history: (Even) rational agents are highly influenced by those factors. This fact seems underappreciated. Think of SSC's "What universal human experiences are you missing without realizing it?" Think of inferential distances and the typical mind fallacy. Think of slight changes in beliefs after drinking coffee, been working out, deeply in love for the first time/seen your child born, being extremely hungry, wanting to and standing on the top of the mountain (especially Mt. Everest). Russell pointed out the interesting and strong effect of Schopenhauer’s and Nietzsche’s personal history on their misogyny. However, it would be a stretch to simply call them irrational. In every discussion, you have to start somewhere, but finding a starting point is a lot more difficult when the discussion partners are more diverse. All factors may not result in direct misinformation on LW but certainly shape the conversation (see also the next point).
  • Priorities: Specific “darlings” of the LW sphere such as Newcomb’s paradox or MW are regularly discussed. Just one moment of not paying attention to this bias, and you may assume they are really relevant. For those of us currently not programming FAI, they aren’t, and they steal attention from more important issues.
  • Other beliefs/goals: Close to selfishness, but not quite the same. If an agent’s beliefs and goals differ from most others, the discussion would benefit from your post. Even so, that by itself may not be a sufficient reason for an agent to post. Example: Imagine somebody like Ben Goertzel. His beliefs on AI, for instance, differed from the mainstream on LW. This did not necessarily result in him posting an article on LW. And to my knowledge, he won’t, at least not directly. Plus, LW may try to slow him down as he seems less concerned about the F of FAI.
  • Vanity: Considering the amount of self-help threads, nerdiness, and alike on LW, it may be suspected that some refrain from posting due to self-respect. E.g. I do not want to signal myself that I belong to this tribe. This may sound outlandish but then again, have a look at the Facebook groups of LW and other rationalists where people ask frequently how they can be more interesting, or how “they can train how to pause for two seconds before they speak to increase their charisma." Again, if this sounds perfectly fine to you, that may be bad news.
  • Barriers to entry: Your first post requires creating an account. Karma that signals the quality of your post is still absent. An aspiring author may question the relative importance of his opinion (especially for highly complex topics), his understanding of the problem, the quality of his writing, and if his research on the chosen topic is sufficient.
  • Nothing new under the sun: Writing an article requires the bold assumption that its marginal utility is significantly above zero - a likelihood which probably decreases with the number of existing posts, which is, as of now, quite impressive. Patrissimo‘s article (footnote [10]) addresses the same point, others mention being afraid of "reinventing the wheel."
  • Error: I should point out that most of the reasons brought forward in this list talk about deliberate misinformation. In many cases, an article will just be wrong without the author realizing it. Examples: facts (the earth is flat), predictions (planes cannot fly), and, seriously underestimated, horizon effects (if more information is provided, the rational agent realizes that his action did not yield the desired outcome, e.g. a ban on plastic bags).
  • Protection of the group: Opinions, though being important, may not be discussed in order to protect the group or its image to outsiders. See “is LW a c***” and Roko’s ***. This argument can also be brought forward much more subtly: an agent may, for example, hold the opinion that rationality concepts are information hazards by nature if they reduce the happiness of the otherwise blissfully unaware.
  • Topicality: This is a problem specific to LW. Many of the great posts, as well as the sequences, originated about five to ten years ago. While interest in AI has now reached mainstream awareness, the solid intellectual basis (centered around a few individuals) which LW offered seems to be breaking away gradually, and rationality topics experience their diaspora. What remains is a less balanced account of important topics in the sphere of rationality, and new authors are discouraged from entering the conversation.
  • Russell’s antinomy: Is the contribution that states its futility ever expressed? Random example article title: “Writing articles on LW is useless because only nerds will read them."
  • +Redundancy: If any of the above reasons apply, I may choose not to post. However, I also expect a rational agent with sufficiently close knowledge to attain the same knowledge himself so it is at the same time not absolutely necessary to post. An article will “only” speed up the time required to understand a new concept and reduce the likelihood of rationalists diverting due to disagreement (if Aumann is ignored) or faulty argumentation.

This list is not exhaustive. If you do not find a factor in this list that you expect to account for much of the effect, I will appreciate a hint in the comments.


There are a few outstanding examples pointing in the opposite direction. They appear to provide uncensored accounts of their way of thinking and take arguments to their logical extremes when necessary. Most notably Bostrom and Gwern, but then again, feel free to read the latter’s posts on endured extortion attempts.


A somewhat flippant conclusion (more in a FB than LW voice): After reading the article from 2010, I cannot expect this article (or the ones possibly following that have already been written) to have a serious impact. It thus can be concluded that it should not have been written. Then again, observing our own thinking patterns, we can identify influences of many thinkers who may have suspected the same (hubris not intended). And step by step, we will be standing on the shoulders of giants. At the same time, keep in mind that articles from LW won’t get you there. They represent only a small piece of the jigsaw. You may want to read some, observe how instrumental rationality works in the “real world," and, finally, you have to draw the critical conclusions for yourself. Nobody truly rational will lay them out for you. LW is great if you have an IQ of 140 and are tired of superficial discussions with the hairstylist in your village X. But keep in mind that the instrumental rationality of your hairstylist may still surpass yours, and I don’t even need to say much about the one of your president, business leader, and club Casanova. And yet, they may be literally dead wrong, because they have overlooked AI and SENS.


A final personal note: Kudos to the giants for building this great website and starting point for rationalists and the real-life progress in the last couple of years! This is a rather skeptical article to start with, but it does have its specific purpose of laying out why I, and I suspect many others, almost refrained from posting.



[Link] Audio recording of Stephen Wolfram discussing AI and the Singularity

1 RaelwayScot 18 November 2015 09:41PM

Marketing Rationality

25 Viliam 18 November 2015 01:43PM

What is your opinion on rationality-promoting articles by Gleb Tsipursky / Intentional Insights? Here is what I think:

continue reading »

Open thread, Nov. 16 - Nov. 22, 2015

7 MrMind 16 November 2015 08:03AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

[Link] Lifehack Article Promoting LessWrong, Rationality Dojo, and Rationality: From AI to Zombies

9 Gleb_Tsipursky 14 November 2015 08:34PM

Nice to get this list-style article promoting LessWrong, Rationality Dojo, and Rationality: From AI to Zombies, as part of a series of strategies for growing mentally stronger, published on Lifehack, a very popular self-improvement website. It's part of my broader project of promoting rationality and effective altruism to a broad audience, Intentional Insights.


EDIT: To be clear, based on my exchange with gjm below, the article does not promote these heavily and links more to Intentional Insights. I was excited to be able to get links to LessWrong, Rationality Dojo, and Rationality: From AI to Zombies included in the Lifehack article, as editors had previously cut such links. I pushed back against them this time and made a case for including the links as a way of growing mentally stronger, and so was able to get them in.

Weekly LW Meetups

0 FrankAdamek 13 November 2015 04:31PM

Reflexive self-processing is literally infinitely simpler than a many world interpretation

-9 mgin 13 November 2015 02:46PM

I recently stumbled upon the concept of "reflexive self-processing", which is Chris Langan's "Reality Theory".

I am not a physicist, so corrections, better explanations, or someone breaking out the math here would all be welcome.

The idea of reflexive self-processing is that, in the double-slit experiment for example, the path the photon takes is computed by taking the entire state of the universe into account when the wave function is solved.

1. Isn't this already implied by the math of how we know the wave function works? Are there any alternative theories that are even consistent with the evidence?

2. Don't we already know that the entire state of the universe is used to calculate the behavior of particles? For example, doesn't every body produce a gravitational field that acts, with some magnitude of force, at any distance, so that to calculate the trajectory of a particle to the nth decimal place you would need to know about every other body in the universe? (See the sketch below.)
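
To make point 2 concrete, here is a minimal numerical sketch (my addition, not the original poster's; it assumes plain Newtonian gravity and uses rough, hypothetical figures) showing that every body contributes a nonzero, if tiny, gravitational acceleration at any finite distance:

```python
# Minimal sketch: every body contributes a nonzero gravitational
# acceleration at any finite distance, so an "exact" trajectory
# formally depends on every other body in the universe.
# Masses and distances are rough, hypothetical figures.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def accel_from(mass_kg, distance_m):
    """Magnitude of gravitational acceleration a = G * m / r^2."""
    return G * mass_kg / distance_m ** 2

bodies = {
    "Earth (at its surface)":  (5.97e24, 6.371e6),
    "Sun (from Earth)":        (1.99e30, 1.496e11),
    "Andromeda (from Earth)":  (3e42,    2.4e22),  # very rough figures
}

for name, (mass, dist) in bodies.items():
    print(f"{name}: {accel_from(mass, dist):.3e} m/s^2")

# Every term is nonzero; the contributions differ only by many orders
# of magnitude, which is why they can be ignored in practice but not
# "to the nth decimal place".
```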

This is, literally, infinitely more parsimonious than the many-worlds theory, which posits that an infinite number of entire universes of complexity are created at the juncture of every little physical event where multiple paths are possible. Supporting MWI because of its simplicity was always a really poor argument for this reason, and it seems we do have a sensible, consistent theory in this reflexive self-processing idea, which is infinitely simpler and should therefore be infinitely preferred by a rationalist to MWI.

Optimizing Rationality T-shirts

3 Gleb_Tsipursky 12 November 2015 10:15PM

Thanks again for all the feedback on the first set of Rationality slogan t-shirts, which Intentional Insights developed as part of our broader project of promoting rationality to a wide audience. As a reminder, the t-shirts are meant for aspiring rationalists to show their affiliation with rationality, to remind themselves and other aspiring rationalists to improve, and to spread positive memes broadly. All profits go to promoting rationality widely.


For the first set, we went with a clear and minimal style that conveyed the messages plainly and carried an institutional affiliation, based on the advice Less Wrongers gave earlier. While some liked and bought these, plenty wanted something more stylish and better designed. As an aspiring rationalist, I am glad to update my beliefs. So we are going back to the drawing board and trying to design something more stylish.


Now, we are facing the limitations of working with a print-on-demand (POD) service. We need to go with POD because we can't afford to buy shirts up front and then sell them; it would cost far too much. We decided on CafePress as the most popular and well-known service with the widest variety of options. It does limit our ability to design things, though.


As the next step, we had some aspiring rationalist volunteers from Intentional Insights find a number of t-shirt designs they liked, and we will create t-shirts in those styles, but with rationality slogans. I'd like to poll fellow Less Wrongers on which of the designs found by our volunteers they like most. I will list numbered links below; in the comments, please indicate the numbers of the t-shirts you like best, so that we can make those. Also feel free to link to other shirts you like, or to make any other comments on t-shirt designs and styles.


1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17


Thanks all for collaborating on optimizing rationality t-shirts!




Post-doctoral Fellowships at METRICS

12 Anders_H 12 November 2015 07:13PM
The Meta-Research Innovation Center at Stanford (METRICS) is hiring post-docs for 2016/2017. The full announcement is available at http://metrics.stanford.edu/education/postdoctoral-fellowships. Feel free to contact me with any questions; I am currently a post-doc in this position.

METRICS is a research center within Stanford Medical School. It was set up to study the conditions under which the scientific process can be expected to generate accurate beliefs, for instance about the validity of evidence for the effect of interventions.

METRICS was founded by Stanford Professors Steve Goodman and John Ioannidis in 2014, after Givewell connected them with the Laura and John Arnold Foundation, who provided the initial funding. See http://blog.givewell.org/2014/04/23/meta-research-innovation-centre-at-stanford-metrics/ for more details.

Meetup: Cambridge UK

2 Salokin 11 November 2015 08:08PM

(Apparently just posting a new meetup doesn't provide much visibility, so I'm posting a discussion article too.)

WHEN: 15 November 2015 05:00:00PM (+0000)

WHERE: JCR Trinity College, Cambridge, UK

First Cambridge meetup in a long time! Hopefully the first of many. Come to Trinity's JCR at 17:00 next Sunday to get to know the other aspiring rationalists around and have a good time! (Place and time are only provisional; they might change depending on your availability, so comment here to see how we can arrange it properly.)

[Link] Mainstreaming Tell Culture

-1 Gleb_Tsipursky 11 November 2015 06:06PM

I'm mainstreaming Tell Culture and other rational relationship strategies in this listicle for Lifehack, a very popular self-improvement website, as part of Intentional Insights, my broader project of promoting rationality and science-based thinking to a broad audience. What are your thoughts on this piece?

Link: The Cook and the Chef: Musk's Secret Sauce - Wait But Why

3 taygetea 11 November 2015 05:46AM

This is the fourth post in Tim Urban's series on Elon Musk; this time it makes explicit some reasoning processes that LW readers should find very familiar. It's a potentially useful explicit model of how to make decisions for yourself.

