
Don't You Care If It Works? - Part 1

3 Jacobian 29 July 2015 02:32PM

 

Part 1 - Epistemic


Prologue - other people

Psychologists at Harvard showed that most people have implicit biases about several groups. Some other Harvard psychologists were subjects of the study that proved psychologists undervalue CVs with female names. All Harvard psychologists have probably heard about the effect of black names on resumes, since even we have. Surely every psychology department in this country, starting with Harvard, will only review CVs with the names removed? Fat chance.


Caveat lector et scriptor

A couple weeks ago I wrote a poem that makes aspiring rationalists feel better about themselves. Today I'm going to undo that. Disclaimers: This is written with my charity meter set to 5%. Every other paragraph is generalizing from anecdotes and typical-mind-fallacying. A lot of the points I make were made before and better. You should really close this tab and read those other links instead, I won't judge you. I'm not going to write in an academic style with a bibliography at the end, I'm going to write in the sarcastic style my blog would have if I weren't too lazy to start one. I'm also not trying to prove any strong empirical claims, this is BYOE: bring your own evidence. Imagine every sentence starting with "I could be totally wrong" if it makes it more digestible. Inasmuch as any accusations in this post are applicable, they apply to me as well. My goal is to get you worried, because I'm worried. If you read this and you're not worried, you should be. If you are, good!


Disagree to disagree

Edit: in the next paragraph, "Bob" was originally an investment advisor. My thanks to 2irons and Eliezer who pointed out why this is literally the worst example of a job I could give to argue my point.

Is 149 a prime? Take as long as you need to convince yourself (by math or by Google) that it is. Is it unreasonable to have 99.9...% confidence with quite a few nines (and an occasional 7) in there? Now let's say that you have a tax accountant, Bob, a decent guy who seems to be doing a decent job filing your taxes. You start chatting with Bob and he reveals that he's pretty sure that 149 isn't a prime. He doesn't know two numbers whose product is 149; it just feels unprimely to him. You try to reason with him, but he just chides you for being so arrogant in your confidence: can't you just agree to disagree on this one? It's not like either of you is a number theorist. His job is to not get you audited by the IRS, which he does, not to factorize numbers. Are you a little bit worried about trusting Bob with your taxes? What if he actually claimed to be a mathematician?
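(For what it's worth, here is a minimal sketch of the check itself, assuming nothing beyond trial division: 149 has no divisor between 2 and 12, so it is prime.)

```python
# Trial division: 149 is prime iff no integer from 2 up to floor(sqrt(149)) divides it.
n = 149
is_prime = n > 1 and all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))
print(is_prime)  # True -- only the candidate divisors 2 through 12 need to be checked
```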

A few weeks ago I started reading Beautiful Probability and immediately thought that Eliezer is wrong about the stopping rule mattering to inference. I dropped everything and spent the next three hours convincing myself that the stopping rule doesn't matter and that I agree with Jaynes and Eliezer. As luck would have it, soon after that the stopping rule question was the topic of discussion at our local LW meetup. A couple of people agreed with me and a couple didn't and tried to prove it with math, but most of the room seemed to hold a third opinion: they disagreed but didn't care to find out. I found that position quite mind-boggling. Ostensibly, most people are in that room because we read the Sequences and thought that this EWOR (Eliezer's Way Of Rationality) thing is pretty cool. EWOR is an epistemology based on the mathematical rules of probability, and the dude who came up with it apparently does mathematics for a living trying to save the world. It doesn't seem like a stretch to think that if you disagree with Eliezer on a question of probability math, a question that he considers so obvious it requires no explanation, that's a big frickin' deal!
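A minimal numerical sketch of the point under dispute (my own framing, not from the post): two experimenters see the same data, 3 heads in 12 flips, but one fixed the number of flips in advance and the other flipped until the third head. Their likelihoods differ only by a constant factor, which cancels when the posterior is normalized, so the stopping rule drops out of Bayesian inference.

```python
import numpy as np
from math import comb

p = np.linspace(0.001, 0.999, 999)   # grid over the coin's bias
prior = np.ones_like(p)              # flat prior, purely for illustration

# Fixed design: exactly 12 flips, 3 heads observed (binomial likelihood).
like_fixed = comb(12, 3) * p**3 * (1 - p)**9
# Optional stopping: flip until the 3rd head, which arrived on flip 12
# (negative-binomial likelihood). Same p-dependent factor, different constant.
like_stop = comb(11, 2) * p**3 * (1 - p)**9

post_fixed = prior * like_fixed
post_fixed /= post_fixed.sum()
post_stop = prior * like_stop
post_stop /= post_stop.sum()

print(np.allclose(post_fixed, post_stop))  # True: identical posteriors
```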


Authority screens off that other authority you heard from afterwards

[Chart: Opinion change]

This is a chart that I made because I got excited about learning ggplot2 in R. On the right side of the chart are a lot of bright red dots below the very top, people who believe in MIRI but also read the quantum physics sequence and don't think that MWI is very likely. Some of them understood the question of P(MWI) to be about whether MWI is the one and only exact truth, but I'm sure that several of them read it the way I did, roughly as: 1-P(collapse is true given current evidence). A lot of these people are congratulating themselves on avoiding cultishness. In the comments they mention other bloggers (or maybe even physicists!) who think that collapse is totally Beatles and MWI is Bieber.

Hold on, why did Eliezer even take all this time to write a huge quantum physics sequence? Here's how I see it: it's not to settle a point about some scientific dispute. It's to show that a person with some spare time and a little above-average IQ can figure out something beyond people of massive authority, on the very subject of their authority, even if that subject is as scary as QM. You know what the median IQ of the dots on the chart above is? 139 points! 139!!! And you know who has spare time? Someone who read 100% of the sequences!

Why should you even care about this? Because Eliezer does. He's so fanatically confident about MWI not for its own sake, but because he's saying (at least to my ears): "This is my EWOR epistemology. It may not make you skinny and rich, but if it will do just one thing for you, it will prevent you from being very very confident in beliefs that could be proven wrong. So if I'm very very confident about a question of science or math, I'm sticking my neck out and betting my entire epistemology on being right, so you can check it for yourself." In fact, Eliezer will totally judge your epistemology based on your belief in MWI. Won't you do the same to him when he asks you to?

I'm a bright red dot on the chart above. Here are two statements I believe from authority, because assessing them as arguments is beyond my cognitive powers and domain knowledge: "strong AI will break out of any box humans can design", "CEV is the best basis for programming morality in AI". I believe them because I tested Eliezer on stopping rules and MWI and 20 other things, and came away convinced each time. My confidence of 98% in MWI is an upper bound on my confidence in whatever Eliezer tells me about anything. If I become convinced that collapse is correct or that stopping rules affect Bayesian inference, I'll email Nate Soares at once and ask for a refund on my MIRI donation.

By the way, this would be a good time to stop reading and give MIRI some money. That’s OK, I'll wait.


The good book

A man comes to you with a book and says that it will grant you the only wisdom worth having, and as a side effect it may save your eternal soul. You read the book cover to cover and decide that the ideas you thought were nice are probably true, the ones you didn't aren't, and you really like the bit with horses. Everyone on LW makes fun of you for claiming to take seriously something you don't. Y'all see where this is going, don't you? Yes, it's fun to read the sequences for the "insight porn". It's also fun to read the Old Testament for the porn porn. But maybe it could be more? Wouldn't it be kinda cool if you could read a book and become an epistemic superman, showing experts to be wrong in their own domains and being proven right? Or maybe some important questions are going to come up in your life and you'll need to know the actual true answers? Or at least some questions you can bet $20 on with your friends and win?

Don't you want to know if this thing even works?

 

To be continued

Part 2 is here. In it: whining is ceased, arguments are argued about, motivations are explained, love is found, and points are taken.


Base your self-esteem on your rationality

-4 ThePrussian 22 July 2015 08:54AM

Some time ago, I wrote a piece called "How to argue LIKE STALIN - and why you shouldn't".  It was a comment on the tendency, which is very widespread online, to judge an argument not by its merits, but by the motive of the arguer.  And since it's hard to determine someone else's motive (especially on the internet), this decays into working out what the worst possible motive could be, assigning it to your opponent, and then writing him off as a whole.

Via Cracked, here's an example of such arguing from Conservapedia:

"A liberal is someone who rejects logical and biblical standards, often for self-centered reasons. There are no coherent liberal standards; often a liberal is merely someone who craves attention, and who uses many words to say nothing."

And speaking as a loud & proud rightist myself, there is more than a little truth in the joke that a racist is a conservative winning an argument.

I've been puzzling over this for a few years now, trying to work out what lies underneath it.  What always struck me was the heat and venom with which this kind of argument gets made.  One thing has to be granted - the people who Argue Like Stalin are not hypocrites; this isn't an act.  They clearly do believe that their opponents are morally tainted.

And that's what's weird.  Look around online, and you'll find a lot of articles on the late Christopher Hitchens, asking why he supported the second Iraq war and the removal of Saddam Hussein.  Everything is proposed, from drink addling his brain, to selling out, to being a willful contrarian - everything except the obvious answer: Hitchens was a friend to Kurdish and Iraqi socialists, saw them as the radical and revolutionary force in that part of the world, and wanted to see the Saddam Hussein regime overthrown, even if it took George Bush to do that.  I'm not wishing to revisit the arguments for and against the removal of Saddam Hussein, but what was striking is this utter unwillingness to grant the assumption of innocence or virtue.

  I think that it rests on a simple, and slightly childish, error.  The error goes like this: "Only bad people believe bad things, and only good people believe good things."

But even a basic study of history can find plenty of examples of good - or, anyway, ordinary - chaps supporting the most appallingly evil ideas and actions.  Most Communists and Nazis were good people, with reasonable motives.  Their virtue didn't change anything about the systems that they supported.

Flipping it around, being fundamentally a lousy person, or lousy in parts of your life, doesn't preclude you from doing good.  H.L. Mencken opposed lynching in print, repeatedly, and at no small risk to himself.  He called for the United States to accept all Jewish refugees fleeing the Third Reich when even American Jewry (let alone FDR) was lukewarm at best on the subject.  He was on excellent terms with many black intellectuals such as W.E.B. Du Bois, and was praised by the Washington Bureau Director of the NAACP as a defender of the black man.  He also maintained an explicitly racist private diary.

Selah.

The error that I mentioned leads to Arguing Like Stalin in the following way: someone looks within himself, sees that he isn't really a bad person, and concludes that no cause he can endorse can be wicked.  He might be mistaken in his beliefs, but not evil.  And from that it is a really short step to conclude that people who disagree must be essentially wicked - because if they were virtuous, they would hold the views that the self-identified virtuous do.

The heat and venom become inevitable when you base your self-esteem on a certain characteristic or mode of being ("I am tolerant", "I am anti-racist", etc.).  This reinforces the error and puts you in an intellectual cul-de-sac - it makes it next to impossible to change your mind, because to admit that you are on the wrong side is to admit that you are morally corrupt, since only bad people support bad things or hold bad views. Or you'd have to conclude that just being a good person doesn't always put you in the right, even on big issues, and that sudden uncertainty can be just as bad.  Try thinking to yourself that you - you as you are now - might have supported the Nazis, or slavery, or anything similar, just by plain old error.

Self-esteem is hugely important.  We all need to feel like we are worth keeping alive.  So it's unsurprising that people will go to huge lengths to defend their base of self-esteem.  But investing it in internal purity is investing it in an intellectual junk-bond.

Emphasizing your internal purity might bring a certain feeling of faux-confidence, but ultimately it's meaningless.  Could the good nature of a Nazi or Communist save a single one of the lives those systems murdered?  Conversely, who cares what Mencken wrote in his diary or kept in his heart, when he was out trying to stop lynching and save Jewish refugees?  No one cares about your internal purity, ultimately not even you - which is why you see so much puritanical navel-gazing around.  People trying to insist that they are perfect and pure on the inside, in a slightly too emphatic way that suggests they aren't so sure of it themselves.

After turning this over and over in my mind, the only way I can see out of this is to base your self-esteem primarily on your willingness to be rational.  Rather than insisting that you are worthy because of characteristic X, try thinking of yourself as worthy because you are as rational as can be, checking your facts, steelmanning arguments and so on.

This does bring with it the aforementioned uncertainty, but it also brings a relief.  The relief that you don't need to worry that you aren't 100% pure in some abstract way, that you can still do the decent and the right thing. You don't have to worry about failing some ludicrous ethereal standard, you can just get on with it.

It also means you might change some minds - bellowing at someone that he's an awful person for holding racist views will get you nowhere.  Telling him that it's fine if he's a racist as long as he's prepared to do right and treat people of all races justly just might.

 


Why you should attend EA Global and (some) other conferences

19 Habryka 16 July 2015 04:50AM

Many of you know about Effective Altruism and the associated community. It has a very significant overlap with LessWrong, and has been significantly influenced by the culture and ambitions of the community here.

One of the most important things happening in EA over the next few months is going to be EA Global, the biggest EA and rationality community event to date, happening throughout the month of August in three different locations: Oxford, Melbourne and San Francisco (which is unfortunately already filled, despite us choosing the largest venue that Google had to offer).

The purpose of this post is to make a case for why it is a good idea for people to attend the event, and to serve as a hub for information that might be more relevant to the LessWrong community (as well as an additional place to ask questions). I am one of the main organizers and very happy to answer any questions that you have.

Is it a good idea to attend EA Global?

This is a difficult question that obviously will not have a unique answer, but from what I can tell, and for the majority of people reading this post, the answer seems to be "yes". The EA community has been quite successful at shaping the world for the better, and at building an epistemic community that seems to be effective at changing its mind and updating on evidence.

But there have been other people arguing in favor of supporting the EA movement, and I don't want to repeat everything they said. Instead I want to focus on a more specific argument: "Given that I believe that EA is overall a promising movement, should I attend EA Global if I want to improve the world (according to my preferences)?"

The key question here is: Does attending the conference help the EA Movement succeed?

How attending EA Global helps the EA Movement succeed

It seems that the success of organizations is highly dependent on the interconnectedness of their members. In general, one rule seems to hold: the better connected the social graph of your organization is, the more effectively it works.

In particular, any significant divide in an organization, any clustering of different groups that do not communicate much with each other, seems to significantly reduce the output the organization produces. I wish we had better studies on this, and that I could link to more sources for this, but everything I've found so far points in this direction. The fact that HR departments are willing to spend extremely large sums of money to encourage the employees of organizations to interact socially with each other is definitely evidence for this being a good rule to follow (though far from conclusive).

What holds for most organizations should also hold for EA. If this is true, then the success of the EA Movement is significantly dependent on the interconnectedness of its members, both in the volume of its output and the quality of its output.

But EA is not a corporation, and EA does not share a large office together. If you graphed out the social graph of EA, it would look very clustered. The Bay Area cluster, the Oxford cluster, the Rationality cluster, the East Coast and the West Coast clusters, many small clusters all over Europe with meetups and small social groups in different countries that have never talked to each other. EA is splintered into many groups, and if EA were a company, the HR department would be very justified in spending a very significant chunk of resources on connecting those clusters as much as possible.

There are not many opportunities for us to increase the density of the EA social graph. There are other minor conferences, and online interactions do some part of the job, but the past EA summits were the main events at which people from different clusters of EA met each other for the first time. There they built lasting social connections, and actually caused these separate clusters in EA to become connected. This had a massive positive effect on the output of EA.

Examples: 

 

  • Ben Kuhn put me into contact with Ajeya Cotra, resulting in the two of us running a whole undergraduate class on Effective Altruism, which included Giving Games for various EA charities and was funded with over $10,000. (You can find documentation of the class here.)
  • The last EA summit resulted in both Tyler Alterman and Kerry Vaughan being hired by CEA and now being full time employees, who are significantly involved in helping CEA set up a branch in the US.
  • The summit and retreat last year caused significant collaboration between CFAR, Leverage, CEA and FHI, resulting in multiple situations of these organizations helping each other in coordinating their fundraising attempts, hiring processes and navigating logistical difficulties.   

 

This is going to be even more true this year. If we want EA to succeed and continue shaping the world towards the good, we want to have as many people come to the EA Global events as possible, and ideally from as many separate groups as possible. This means that you, especially if you feel somewhat disconnected from EA, should seriously consider coming. I estimate the benefit of this to be much bigger than the cost of a plane ticket and the entrance ticket (~$500). If you do find yourself significantly constrained by financial resources, consider applying for financial aid, and we will very likely be able to arrange something for you. By coming, you provide a service to the EA community at large.

How do I attend EA Global? 

As I said above, we are organizing three different events in three different locations: Oxford, Melbourne and San Francisco. We are particularly lacking representation from many different groups in mainland Europe, and it would be great if they could make it to Oxford. Oxford also has the most open spots and is going to be much bigger than the Melbourne event (300 vs. 100).  

If you want to apply for Oxford go to: eaglobal.org/oxford

If you want to apply for Melbourne go to: eaglobal.org/melbourne

If you require financial aid, you will be able to put in a request after we've sent you an invitation. 

You are (mostly) a simulation.

-4 Eitan_Zohar 18 July 2015 04:40PM

This post was completely rewritten on July 17th, 2015, 6:10 AM. Comments before that are not necessarily relevant.

Assume that our minds really do work the way Unification tells us: what we are experiencing is actually the sum total of every possible universe which produces that experience. Some universes have more 'measure' than others, and those are typically the stable ones; we do not experience chaos. I think this makes a great deal of sense - if our minds really are patterns of information, I do not see why a physical world should have a monopoly on them.

Now to prove that we live in a Big World. The logic is simple: why would only something finite exist? If we're going to reason that some fundamental law causes everything to exist, I don't see why that law restricts itself to this universe and nothing else. Why would it stop? It is, arguably, simply the nature of things for an infinite multiverse to exist.

I'm pretty terrible at math, so please try to forgive me if this sounds wrong. Take the 'density' of physical universes where you exist - the measure, if you will - and call it j. Then take the measure of universes where you are simulated and call it p. So the question becomes: is j greater than p? You might be thinking yes, but remember that it doesn't only have to be one simulation per universe. According to our Big World model there is a universe out there in which all processing power (or a significant portion of it) has been turned into simulations of you.

So we take the number of minds being simulated per universe and call that x. Then the real question becomes whether j > px. What sort of universe is common enough and contains enough minds to overcome j? If you say that approximately 10^60 simulated human minds could fit in it (a reasonable guess for this universe) but that such universes are five trillion times rarer than the universe we live in, then it's clear that our own 'physical' measure is hopelessly lower than our simulated measure.
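Plugging the post's own illustrative numbers into the inequality (a back-of-the-envelope sketch; the five-trillion rarity figure is the hypothetical above, not an estimate):

```python
j = 1.0            # normalize the measure of "physical" universes to 1
p = 1.0 / 5e12     # simulating universes assumed five trillion times rarer
x = 1e60           # ~10^60 simulated minds per such universe

print(p * x > j)   # True
print(p * x / j)   # ~2e47: simulated measure dwarfs physical measure
```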

Should we worry about this? It would seem highly probable that most universes where I am being simulated are ones I once existed in, or humans did, since the odds of randomly stumbling upon me in Mind Space seem low enough to ignore. Presumably the simulators are either AIs gone wrong or someone trying to grab some of my measure, for whatever reason.

As a way of protecting measure, pretty much all of our post-singularity universes would divide up the matter of the universe for each person living, create as many simulations of them as possible from birth, and allow them to go through the Singularity. I expect that my ultimate form is a single me, not knowing if he is simulated or not, with billions of perfect simulations of himself across our universe, all reasoning the same way (he would be told this by the AI, since there isn't any more reason for secrecy). This, I think, would be able to guard my measure against nefarious or bizarre universes in which I am simulated. It cannot just simulate the last few moments of my life, because those other universes might try to grab younger versions of me. So if we take j to be safe measure rather than physical measure, and p to be unsafe or alien measure, it becomes jx > px, which I think is quite reasonable.

I do not think of this as some kind of solipsist nightmare; the whole point of this is to simulate the 'real' you, the one that really existed, and part of your measure is, after all, always interacting in a real universe. I would suggest that by any philosophical standard the simulations could be ignored, with the value of your life being the same as ever.

I need a protocol for dangerous or disconcerting ideas.

3 Eitan_Zohar 12 July 2015 01:58AM

I have a talent for reasoning my way into terrifying and harmful conclusions. The first was modal realism as a fourteen-year-old. Of course I did not understand most of its consequences, but I disliked the fact that existence was infinite. It mildly depressed me for a few days. The next mistake was opening the door to solipsism and Brain-in-a-Vat arguments. This was so traumatic to me that I spent years in a manic depression. I could have been healed in a matter of minutes if I had talked to the right person or read the right arguments during that period, but I didn't.

LessWrong has been a breeding ground of existential crises for me. The Doomsday argument (which I thought up independently), ideas based on acausal trade (one example was already well known; one I invented myself), quantum immortality, the simulation argument, and finally my latest and worst epiphany: the potential horrible consequences of losing awareness of your reality under Dust Theory. I don't know that that's an accurate term for the problem, but it's the best I can think of.

This isn't to say that my problems were never solved; I often worked through them myself, always by refuting the horrible consequences of them to my own satisfaction and never through any sort of 'acceptance.' I don't think that my reactions are a consequence of an already depressed mind-state (which I certainly have anyway), because the moment I refute them I feel emotionally as if it never happened. It no longer wears on me. I have OCD, but if it's what's causing me to ruminate then I think I prefer having it as opposed to irrational suppression of a rational problem. Finding solutions would have taken much longer if I hadn't been thinking about them constantly.

I've come to realize that this site, due to perhaps a confluence of problems, was extremely unhelpful in working through any of my issues, even when they were brought about by LessWrong ideas and premises. My acausal problem [1] I sent to about five or six people, and none of them had anything conclusive to say; they simply referred me to Eliezer. Who didn't respond, even though this sort of thing is apparently important to him. This whole reaction struck me as disproportionate to the severity of the problem, but that was the best response I've had so far.

The next big failure was my resolution to the Doomsday argument. [2] I'm not very good yet at conveying these kinds of ideas, so I'm not sure it was entirely the fault of the LessWrongers, but still. One of them insisted that I needed to explain how 'causality' could be violated; isn't that the whole point of acausal systems? My logic was sound, but he substituted abstractly intuitive concepts in its place. I would think that there would be something in the Sequences about that.

The other posters were only marginally more helpful. Some of them challenged the self-sampling assumption, but then why even bother, if the problem I'm trying to solve requires it to be true? In the end, not one person even seemed to consider the possibility that it might work, even though it is a natural extrapolation from other ideas which are taken very, very seriously on LessWrong. Instead of discussing my resolution, they discussed the DA itself, or AI, or whatever they found more interesting.

Finally, we come to an absolutely terrifying idea I had a few days ago, which I naively assumed would catch the attention of any rational person. An extrapolation of Dust Theory [3] implied that you might die upon going to sleep, not immediately, but through degeneration, and that the person who wakes up in the morning is simply a different observer, who has an estimated lifespan of however long he remains awake. Rationally, anyone should therefore sign up for cryonics and then kill themselves, forcing their measure to continue into post-Singularity worlds that no longer require them to sleep (not that I would have ever found the courage to do this). [4] In the moments when I considered it most plausible I gave it no more than a 10% chance of being true (although it would have been higher if I had taken Dust Theory for granted), and it still traumatized me in a way I've never experienced before. Always during my worst moments, sleep came as a relief and escape. Now I cannot go to sleep. Only slightly less traumatizing was the idea that during sleep my mind declines enough to merge into other experiences, and I awake into a world I would consider alien, with perfectly consistent memories.

My inquiries on different threads were almost completely ignored, so I eventually created my own. After twenty-four hours there were nine posts, and now there are twenty-two. All of them either completely miss the point (without realizing it) or show complete ignorance about what Dust Theory is. The idea that this requires any level of urgency does not seem to have occurred to anyone. Finally, the second part of my question, which asked about the six-year-old post "getting over Dust Theory", was completely ignored, despite that post having ninety-five comments by people who seem to already understand it themselves.

I resolved both issues, but not to my own satisfaction: while I now consider the death outcome unlikely enough to dismiss, the reality-jumping still somewhat worries me. I now will not be able to go to sleep without fear for the next few months; maybe longer, and my mental and physical health will deteriorate. Professional help or a hotline is out of the question because I will not inflict these ideas on people who are not equipped to deal with them, and also because I regard psychologists as charlatans or, at best, practitioners of a deeply unhealthy field. The only option I have to resolve the issues is talking to someone who can discuss it rationally.

This post [5] by Eliezer, however unreliable he might be, convinced me that he might actually know what he is talking about (though I still don't know how Max Tegmark's rebuttal to quantum immortality is refuted, because it seems pretty airtight to me). More disappointing is Nick Bostrom's argument that mind-duplicates will experience two subjective experiences; he does not understand the idea of measure, i.e. that we exist in all universes that account for our experiences, but more in some than others. Still, I think there has to be someone out there who is capable of following my reasoning- all the more frustrating, because the more people misapprehend my ideas, the clearer and sharper they seem to me.

Who do I talk to? How do I contact them? I doubt that going around emailing these people will be effective, but something has to change. I can't go insane, as much as that would be a relief, and I can't simply ignore it. I need someone sane to talk to, and this isn't the place to find that.

Sorry if any of this comes off as ranting or incoherent. That's what happens when someone is pushed to all extremes and beyond. I am not planning on killing myself whatsoever and do not expect that to change. I just want help.

[1] http://lesswrong.com/lw/l0y/i_may_have_just_had_a_dangerous_thought/ (I don't think that the idea is threatening anymore, though.)

[2] http://lesswrong.com/lw/m8j/a_resolution_to_the_doomsday_argument/

[3] http://sciencefiction.com/2011/05/23/science-feature-dust-theory/

[4] http://lesswrong.com/lw/mgd/the_consequences_of_dust_theory/

[5] http://lesswrong.com/lw/few/if_mwi_is_correct_should_we_expect_to_experience/7sx3

(The insert-link button is greyed out, for whatever reason.)

A Roadmap: How to Survive the End of the Universe

5 turchin 02 July 2015 11:01AM

In a sense, this plan needs to be perceived with irony, because it is almost irrelevant: we have very small chances of surviving even the next 1000 years, and if we do, we have a lot of things to do before the end of the universe becomes a real concern. And even afterwards, our successors will have completely different plans.

There is one important exception: there are suggestions that collider experiments may lead to a vacuum phase transition, which begins at one point and spreads across the visible universe. In that case we could destroy ourselves and our universe in this century, but it would happen so quickly that we would not have time to notice it. (The term "universe" hereafter refers to the observable universe, that is, the three-dimensional world around us resulting from the Big Bang.)

We can also solve this problem in the next century if we create superintelligence.

The purpose of this plan is to show that actual immortality is possible: that we have an opportunity to live not just billions and trillions of years, but an unlimited duration. My hope is that the plan will encourage us to invest more in life extension and prevention of global catastrophic risks. Our life could be eternal and thus have meaning forever.

Anyway, the end of the observable universe is not an absolute end: it's just one more problem on which the future human race will be able to work. And even at the negligible level of knowledge about the universe that we have today, we are still able to offer more than 50 ideas on how to prevent its end.

In fact, to assemble and come up with these 50 ideas I spent about 200 working hours, and if I had spent more time on it, I'm sure I would have found many new ideas. In the distant future we can find more ideas, choose the best of them, prove them, and prepare for their implementation.

First of all, we need to understand exactly what kind of end of the universe we should expect in the natural course of things. There are many hypotheses on this subject, which can be divided into two large groups:

1. The universe is expected to have a relatively quick and abrupt end, known as the Big Crunch or the Big Rip (accelerating expansion of the universe causes it to break apart), or the decay of the false vacuum. Vacuum decay can occur at any time; a Big Rip could happen in about 10-30 billion years, and the Big Crunch has a timescale of hundreds of billions of years.

2. Another scenario assumes an infinitely long existence of an empty, flat and cold universe, which would experience so-called "heat death", that is, the gradual halting of all processes and then the disappearance of all matter.

The choice between these scenarios depends on the geometry of the universe, which is determined by the equations of general relativity and, above all, by the behavior of a poorly understood parameter: dark energy.

The recent discovery of dark energy has made the Big Rip the most likely scenario, but it is clear that our picture of the end of the universe will change several more times.

You can find more at: http://en.wikipedia.org/wiki/Ultimate_fate_of_the_universe

There are five general approaches to solve the end of the universe problem, each of them includes many subtypes shown in the map:

1.     Surf the Wave: Utilize the nature of the process which is ending the universe. (The best known of these types of solutions is the Omega Point by Tipler, where the energy of the universe's collapse is used to make infinite calculations.)

2.     Go to a parallel world

3.     Prevent the end of the universe

4.     Survive the end of the universe

5.     Dissolve the problem

 Some of the ideas are on the level of the wildest possible speculations and I hope you will enjoy them.

The new feature of this map is that in many cases the ideas mentioned are linked to corresponding wiki pages in the pdf.

Download the pdf of the map here: http://immortality-roadmap.com/unideatheng.pdf

 

 

Effective Altruism vs Missionaries? Advice Requested from a Newly-Built Crowdfunding Platform.

3 lululu 30 June 2015 05:39PM

Hi, I'm developing a next-generation crowdfunding platform for non-profit fundraising. From what we have seen, it is an effective tool; more about it below. I'm working with two other cofounders, both of whom are evangelical Christians. We get along well in general, except that I strongly believe in effective altruism and they do not.

We will launch a second pilot fundraising campaign in 2-3 weeks. The organization my co-founders have arranged for us to fundraise for is a "church planting" missionary organization. This is so opposed to my belief in effective altruism that I feel uncomfortable using our effective tool to funnel donors' dollars in THIS of all directions. This is not the reason I got involved in this project.

My argument with them is that we should charge more to ineffective nonprofits such as colleges and religious or political organizations, and use that extra revenue to subsidize the campaign and money-processing costs of the effective non-profits. I think this is logically consistent with earning to give. But I am being outvoted two-to-one by people who believe saving lives and saving souls are nearly equally important.

So I have two requests:

1. If anyone has advice on how to navigate this (including any especially well-written articles that would appeal to evangelical Christians, or experience negotiating with start-up cofounders), please share.

2. If anyone has personal connections with effective or effective-ish non-profits, I would much prefer to fundraise for them than for my co-founders' church connections. Caveat: the org must have US non-profit legal status.

About the platform: the gist of our concept is that we're using a lot of research on psychology, biases, and altruism to nudge more people towards giving and also nudge them towards a sustained involvement with the nonprofit in question. We're using some of the tricks that made the ice bucket challenge so successful (but with added accountability to ensure that visible involvement actually leads to monetary donations). Users can pledge money contingent on their friends' involvement, which motivates people in the same way that matching donations do. Giving is very visible, and people are more likely to give if they see friends giving. Friends are making the request for funding, which creates a sense of personal connection. Each person's mini-campaign has an involvement goal and a time limit (3 friends in 3 days) to create a sense of urgency. The money your friends donate visibly increases your impact, so it also feels like getting money from nothing - a $20 pledge can become hundreds of dollars. We nudge people towards automated smaller monthly recurring gifts. We try to minimize the number of barriers to making a donation (fewer steps, fewer fields).

 

Selecting vs. grooming

5 DeVliegendeHollander 30 June 2015 10:48AM

Content warning: meta-political, with hopefully low mind-killer factor.

Epistemic status: proposal for brain-storming.

- Representative democracies select political leaders. Monarchies and aristocracies groom political leaders for the job from childhood. (Also, to a certain extent they breed them for the job.)

- Capitalistic competition selects economic elites. Heritable landowning aristocracies groom economic elites from childhood. (Again, they also breed them.)

- A capitalist employer selects an accountant from a pool of 100 applicants. A feudal lord would groom a serf boy who has a knack for horses into the job of the adult stable man.

It seems a lot like selecting is better than grooming. After all, it is the modern way, and hardly anyone would argue that capitalism doesn't have higher economic output than feudalism, and so on.

But since this difference was so hugely important throughout history, perhaps one of the things that really defined the modern world, because it determines the whole social structure of societies past and present, I think it deserves some investigation. There may be something more interesting lurking here than just saying selection/testing won over grooming, period.

1) Can aspects of grooming as opposed to selecting/testing be steelmanned, are there corner cases when it could be better?

2) A pre-modern, medievalish society that nevertheless used a lot of selection/testing was China - I am thinking about the famous mandarin exams. Does this seem to have had any positive effect on China compared to other similar societies? I.e., is it even plausible that this is a big factor in the general difference in outcomes between the 2015 West and the 1515 West? Comparing old China with similar medievalish but non-selectionist (inheritance-based) societies would be useful for isolating this factor, right?

3) Why exactly does selecting and testing work better than grooming (and breeding) ?

4) Is it possible it works better because people do the breeding (intelligent people tend to marry intelligent people etc.) and grooming (a child of doctors will have an entirely different upbringing than a child of manual laborers) on their own, so the social system does not have to do it; it is enough, or better, for the social system to do the selection, i.e. to test the success of the at-home grooming?

5) Any other interesting insight or reference?

Note: this is NOT about meritocracy vs. aristocracy. It is about two different kinds of meritocracy: one where you select and test people for merit (through market competition or elections) but don't care much about how to _build_ people who will have merit, vs. an aristocratic meritocracy where you largely focus on breeding and grooming people into the kind who will have merit, and don't focus so much on selecting and testing.

Note 2: is it possible that this is a false dichotomy? One could argue that Western society is chock full of features for breeding and grooming people: there are dating sites for specific groups of people, there are tons of helping resources parents can draw on, kids spend 15-20 years at school and so on. So the breeding and grooming is done all right, and I am just being misled here by mere names. Such as the name democracy: it is a selection process, but who wins depends on breeding and grooming. Such as market competition: those best bred and groomed have the highest chance. Is it simply that selection is more noticeable than grooming, that it gets more limelight, but we actually do both? If yes, why does selection get more limelight than grooming? Why do we talk about elections more than about how to groom a child into being a politician, or why do we talk about market competition more than about how to groom a child into the entrepreneur who aces competition? If modern society uses both, why is selection in the public spotlight while grooming is just something that happens at home and at school, and is not so noticeable? (To be fair, on LW we talk more about how to test hypotheses than about how to formulate them. Is this potentially related? People are just more interested in testing than building, be that hypotheses or people?)

 

 

Is this evidence for the Simulation hypothesis?

1 Eitan_Zohar 28 June 2015 11:45PM

I haven't come across this particular argument before, so I hope I'm not just rehashing a well-known problem.

"The universe displays some very strong signs that it is a simulation.

As has been mentioned in some other answers, one way to efficiently achieve a high fidelity simulation is to design it in such a way that you only need to compute as much detail as is needed. If someone takes a cursory glance at something you should only compute its rough details and only when someone looks at it closely, with a microscope say, do you need to fill in the details.

This puts a big constraint on the kind of physics you can have in a simulation. You need this property: suppose some physical system starts in state x. The system evolves over time to a new state y which is now observed to accuracy ε. As the simulation only needs to display the system to accuracy ε the implementor doesn't want to have to compute x to arbitrary precision. They'd like only have to compute x to some limited degree of accuracy. In other words, demanding y to some limited degree of accuracy should only require computing x to a limited degree of accuracy.

Let's spell this out. Write y as a function of x, y = f(x). We want that for all ε there is a δ such that for all x' with x-δ < x' < x+δ, |f(x')-f(x)| < ε. This is just a restatement in mathematical notation of what I said in English. But do you recognise it?

It's the standard textbook definition of a Continuous function. We humans invented the notion of continuity because it was a ubiquitous property of functions in the physical world. But it's precisely the property you need to implement a simulation with demand-driven level of detail. All of our fundamental physics is based on equations that evolve continuously over time and so are optimised for demand-driven implementation.

One way of looking at this is that if y=f(x), then if you want to compute n digits of y you only need a finite number of digits of x. This has another amazing advantage: if you only ever display things to a given accuracy you only ever need to compute your real numbers to a finite accuracy. Nature could have chosen to use any number of arbitrarily complicated functions on the reals. But in fact we only find functions with the special property that they need only be computed to finite precision. This is precisely what a smart programmer would have implemented.

(This also helps motivate the use of real numbers. The basic operations on real numbers such as addition and multiplication are continuous and require only finite precision in their arguments to compute their values to finite precision. So real numbers give a really neat way to allow inhabitants to find ever more detail within a simulation without putting an undue burden on its implementation.)

But you can do one step further. As Gregory Benford says in Timescape: "nature seemed to like equations stated in covariant differential forms". Our fundamental physical quantities aren't just continuous, they're differentiable. Differentiability means that if y=f(x) then once you zoom in closely enough, y depends linearly on x. This means that one more digit of y requires precisely one more digit of x. In other words our hypothetical programmer has arranged things so that after some initial finite length segment they can know in advance exactly how much data they are going to need.

After all that, I don't see how we can know we're not in a simulation. Nature seems cleverly designed to make a demand-driven simulation of it as efficient as possible."

http://www.quora.com/How-do-we-know-that-were-not-living-in-a-computer-simulation/answer/Dan-Piponi
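To make the quoted point concrete, here is a minimal sketch (my own illustration, not from the linked answer) of demand-driven precision: because f is continuous, and here also monotone for positive inputs, pinning its output down to within ε never requires more than finitely many digits of the input, and for this differentiable f the required input digits grow roughly one-for-one with the requested output digits.

```python
from fractions import Fraction

def f(x):
    # A continuous (indeed differentiable) function, monotone for x > 0.
    return x * x + 3 * x

def input_digits_needed(x, eps):
    """Shrink an interval around x until its image under f is narrower
    than eps; return how many decimal digits of x that took. Monotonicity
    lets us bound the image by the images of the interval's endpoints."""
    digits = 0
    while True:
        delta = Fraction(1, 10 ** digits)
        if f(x + delta) - f(x - delta) < eps:
            return digits
        digits += 1

x = Fraction(314159, 100000)                      # x ~= 3.14159
for d in range(1, 7):
    eps = Fraction(1, 10 ** d)
    print(d, "->", input_digits_needed(x, eps))   # grows roughly one-for-one with d
```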

The great quote of rationality a la Socrates (or Plato, or Aristotle)

1 Bound_up 23 June 2015 03:55PM

Help a brother out?

 

There's a great quote by one of The Big 3 Greek Philosophers (EDIT: Reference to Cicero removed) which I can paraphrase from memory as:

 

"I consider it rather better for myself to be proven wrong than to prove someone else wrong, just as I'm better off being cured of a disease than curing someone of one."

 

I can't find the quote, or which of the Three it is from.

 

Anybody know? Or know where to look? I've already tried varying Google search techniques and perused the Wikiquote article on each of them.
