Creating an Optimal Future

-2 [deleted] 18 October 2013 01:39PM

Creating an Optimal Future. It sounds very arrogant when I type it out. A more reasonable claim would be that it is possible to create a Less Wrong Future, but for reasons that will shortly become apparent that felt like stepping too hard on other people’s toes. I suppose Working Towards an Optimal Future would be the best title for what I have in mind.

Let me backtrack and start at the beginning. I am not a rationalist. Well, I am not a rationalist as the term applies in this community. Not completely anyway. I have only read some of the Sequences and, although I’ve devoured HPMOR, I do understand and agree with a number of the criticisms that have been leveled toward it.

But I am here because of that Optimal Future I have mentioned. The way I see it, we are not currently on a trajectory that will lead to an optimal future and I am fairly confident that you agree with me on that. From what I have seen and heard from various online communities over the years, quite a few people do agree with me on that.

But the problem is, a few thousand people visit Less Wrong regularly, generating and evolving a unique memescape. And a few miles down the information highway, another few thousand people post to Humanity+ mailing lists, building up a different memescape. There is some overlap, naturally, but not nearly enough. And in another corner of the internet, environmentalist factions sit in their own forums and discuss a different set of problems affecting (trans)humanity’s future. In yet another corner, socialists imagine utopias built on free access to nanofabricators (while anarchists imagine a similar utopia sans the government).

All in all, there may be close to a million people looking at future problems and solutions. But as long as they do so in small fringe groups, the solutions they can think up are limited. Worse, “junk” memes start sweeping into the community, harming recruitment and giving the underlying philosophies a bad name. To push the metaphor about as far as it can go: these communities tend to get a bit inbred over time.

And a million voices fail to affect policies in any way, because for all the hopes and fears they share they fail to coordinate and collaborate. Meanwhile, the world continues to move along a sub-optimal trajectory.

Which, finally, leads us back to Optimal Future. In discussing the problems above with friends, we hit upon an obvious solution: build a place where all futurists and people who care about the future (but do not self identify as futurist) can discuss the relevant topics and hopefully find novel solutions through combining memes that one wouldn’t normally think to combine.

Which is why I am here now. The site has been built, but then that was always going to be the easiest part. The hard part is building a diverse and active community. That’s where you come in. LessWrong is one of the most active future thinking communities on the web, and also a fairly controversial one. Having you as part of the community could make a lot of difference to us. In exchange we can offer you a wider audience and some new perspectives.

So if you are curious as to how a Friendly AGI designed by anarchists would differ from one designed by Greens, feel like scaring communists with what horrors a corporate paperclip maximizer could commit, want to see how wide the spectrum of transhumanists really is, want to learn about cryptography or sousveillance, or feel like debating the pros and cons of open-source AIs, come on down to optimalfuture.org and take a look at the bigger picture.

http://optimalfuture.org/

How to Beat Procrastination (to some degree) (if you're identical to me)

11 [deleted] 16 October 2013 11:17PM

So, you procrastinate. A lot. And it's a really big problem in your life, and so you really want to, y'know, stop procrastinating. But for some reason, it's just not that easy.

So, why do you procrastinate?

It's probably a really hard question to answer. Is it "because I'm lazy"? That's not a useful answer, because "stop being lazy" is just as hard to do as "stop procrastinating". How about "because of akrasia"? That's not a real answer; that's just a restatement of the question in more vague terms. Maybe "because of hyperbolic discounting"? Even if that's true, that's not really a useful answer, because there's no way to turn hyperbolic discounting off. (Or is there? If you know of a way to turn hyperbolic discounting off, please tell us.)
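For the curious, hyperbolic discounting has a simple functional form. Here's a minimal sketch of how a reward's felt value falls off with delay; the discount rate k is invented purely for illustration:

```python
# Hyperbolic discounting: a reward of size A delayed by D time units is
# felt as roughly A / (1 + k*D). The rate k = 0.5 is an invented number
# for illustration, not a measured human parameter.
k, A = 0.5, 100.0
felt_value = [round(A / (1 + k * d), 1) for d in range(4)]  # delays 0..3
print(felt_value)  # [100.0, 66.7, 50.0, 40.0]
```

The steep early drop-off is the point: tonight's small fun beats next week's big payoff, which is one story for why "just work" is so hard to act on.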

Maybe your procrastination has three parts. First, once you start doing something interesting, it's very hard for you to stop; second, having stopped, you don't usually feel like starting to do something useful; and third, having started, you often find yourself losing focus and wanting to do something else.

So let's look at each of these parts in turn.

Once you start doing something interesting, it's hard for you to stop.

For you, this one's a pain in the ass. (I know this because for me, it's a pain in the ass, and since you're reading this article, you must be identical to me.) You've told yourself that once you find yourself doing something interesting, you're just going to stop immediately. But that doesn't work at all. You've tried setting a timer, and telling yourself that you'll definitely, absolutely stop when the timer goes off. But that doesn't work, either; you just ignore the timer. What if you set a timer to repeatedly and annoyingly beep at you until you tell it that you've started working? You repeatedly ignore the timer and quickly become annoyed.

For you, once this problem has started, there just doesn't seem to be a way to stop it. So the solution is to just not start in the first place. The ideal situation is that you're not doing any interesting and fun activities whatsoever until you're done working for the day (unless, of course, one of those activities is part of the work you're supposed to get done).

You should still take breaks, of course; don't expect to work for four hours solid without stopping. Just don't do anything interesting during your breaks. Listen to music, or stare out the window or something.

And, of course, this raises the question: how do I avoid doing these interesting activities? It turns out that, compared to the rest of your procrastination, this one is really easy to deal with. Hopping on Facebook or whatever when you're not sure what to do is a breakable habit. So break it. And how do you do that?

One technique is to neuter the worst culprits. Go into your computer's configuration and tell it that reddit doesn't exist. Then if you accidentally try to access reddit, you'll just get an error message. Stop making status updates on Facebook, don't accept friend requests, and block everyone from showing up in your news feed. Disable your IRC bouncer and only access IRC through the server's crappy web interface. Avoiding temptation is easier when you set yourself up to be disappointed every time.
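A minimal sketch of the "tell your computer reddit doesn't exist" trick: point the domain at 127.0.0.1 in your hosts file (/etc/hosts on Linux/Mac, which needs admin rights to edit). The domains and the scratch filename below are just examples, so the sketch is safe to run as-is:

```python
# Generate hosts-file style entries that send distracting domains to
# localhost. Written to a scratch file "hosts.demo" here; appending the
# same lines to the real hosts file is what actually does the blocking.
blocked = ["reddit.com", "www.reddit.com"]
entries = "\n".join(f"127.0.0.1 {domain}" for domain in blocked) + "\n"
with open("hosts.demo", "w") as f:
    f.write(entries)
print(entries)
```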

Still, there's a bit of residual temptation left over. How do you avoid this? Just use plain force of will. Tell yourself, "I need to avoid doing this right now." That ought to work. Hopefully.

So now that you've got that fixed (kind of), you've got another problem on your hands.

You're not currently doing anything addictive, but you just don't feel like working, either.

The easy answer here: just do it anyway. You may feel kinda crappy, but this doesn't actually have any negative effects.

Or maybe you really, really don't feel like working. All right. Why not? Is it because there's a fuckton of stuff you have to do, and getting it all done is going to suck royally? Well, you can only do one thing at a time, so figure out what the one thing you should do next is, and completely ignore every obligation except for that one. (Figuring out which task is the one you should do next should be easy. If it's not easy, make a to-do list and use it properly.) If that little piece still seems too arduous, figure out the next little piece of that little piece that you need to do, and ignore the rest of it for the time being. Repeat.

All right, so now you're working (hopefully). But it's not going very well.

You're working, but you're not focused on your work at all; you're just thinking about other unrelated stuff, and about how much you'd like to do something other than working.

Part of the problem here is that you have ADD. (Since you're reading this article, I'm assuming you're identical to me, and so you have every disorder I have.) Consider medication and talk to your psychiatrist. Therapy's probably a good idea, too, and it's easier to get seen by a psychologist or therapist than by a psychiatrist.

Remember to eliminate distractions, too. Close unnecessary browser tabs and applications. Set your IM status to "do not disturb". Maybe try writing some of your thoughts down.

And once you've done that... hell, I have no idea. Good luck.

Bookmarklet to Hide Nested Comments

4 witzvo 16 October 2013 05:55AM

When reading comments on Less Wrong, I sometimes find myself reading reply after reply deep into a discussion when I really shouldn't be. If I had stopped and thought, I would have said, let's move on to the next thread. Similarly, I've seen comments that pose an interesting question or raise an interesting point get derailed because the first response nitpicks some issue and reply after reply delves into that. By the time I get to another child of the original comment that addresses its main point, I've exhausted myself.

This is nobody's fault but my own: the nesting mechanism is sound, the default behavior is reasonable, and the show/hide features are right there, but I don't find myself using them as much as I should.

As an experiment to see if I can improve my behavior I created this bookmarklet: Hide Nested Comments

EDIT: The link isn't working right now. It should link to: javascript:var%20cl=$$('div.comment');for(var%20i=0;i<cl.length;i++){a=cl[i];if(a.parentNode.parentNode.id!='comments')hidecomment(a.id.replace(/^[^_]*_/,''),a)};void(0);

but it doesn't. Sorry. Presumably I'm in violation of some security policy by attempting to make a bookmarklet in a post. If you trust me, you can create a bookmark with that as the destination yourself, but the directions below about dragging won't work.

What it does is pretty simple: it hides all comments on a post that aren't top level comments. This way, the default is that I don't see all the followups unless I decide that the subject matter is important enough that I want to wade into it. It makes it more effort to dig into subjects that interest me, but (hopefully, at least) I'll get less distracted where I shouldn't and a few more clicks won't kill me.

The biggest drawback I'm aware of right now is that it means that I'll start missing interesting content that's buried under uninteresting content (arguably I'm missing most of those anyway). A fancier version might, for example, label the hidden posts with the maximum buried karma.

Anyway, it's an experiment. Feel free to try it yourself and report back, or tell me why you think it's a terrible idea.

How do I try it?

One way is to turn on your browser's bookmark toolbar and drag the above link onto that toolbar. Click on it on the toolbar to use it. If you don't like the results, just refresh. Another way, in Firefox, is to right-click (control-click on Mac) on the link and choose "Bookmark This Link."

What's the code?

The javascript that makes the bookmarklet work is:

var cl=$$('div.comment'); // a list of all the comment divs on the page
for (var i=0; i<cl.length; i++) {
var a=cl[i]; // take each comment in turn
if (a.parentNode.parentNode.id!='comments') hidecomment(a.id.replace(/^[^_]*_/,''),a); // unless it's top-level, hide it using the page's own hidecomment()
}
void(0); // keep the bookmarklet from navigating away
// (yeah, I could use .each(...), but I didn't; it's just a hack at the moment anyway)

As an upload, would you join the society of full telepaths/empaths?

5 shminux 15 October 2013 08:59PM

I asked this question on IRC before and got some surprising answers.

Suppose, for the sake of argument, you get cryo-preserved and eventually wake up as an upload. Maybe meat->sim transfer ends up being much easier than sim->meat or meat->meat, or something. Further suppose that you are not particularly averse to a digital-only existence, at least not enough to specifically prohibit reviving you if this is the only option. Yet further suppose that sim-you is identical to meat-you for all purposes that meat-you cared about (including all your hidden desires and character faults). Let's also preemptively assume that any other attempts to fight this hypothetical have been satisfactorily resolved, just to get this out of the way.

Now, in the "real world", or at least in the simulation level we are at, there is no evidence that telepathy of any kind exists or is even possible. However, in the sim-world there is no technological reason it cannot be implemented in some way, for just thoughts, or just feelings, or both. There is a lot to be said for having this kind of connection between people (or sims). It gets rid of or marginalizes deception, status games, mis-communication-based biases and fallacies. On the other hand, your privacy disappears completely and so do any advantages over others the meat-you might want to retain in the digital world. And what you perceive as your faults are out there for everyone to see and feel.

As a new upload, you are informed that many "people" decided to get integrated into the telepathic society and appear to be happy about it, with few, if any, defections. There is also the group of those who opted out, and it looks basically like your "normal" mundane human society. There is only a limited and strictly monitored interaction between the two worlds to prevent exploitation/manipulation. 

Would you choose to get fully integrated or stay as human-like as possible? Feel free to suggest any other alternative (suicide, start a partially integrated society, etc.).

P.S. This topic has been rather extensively covered in science fiction, but I could not find a quality online discussion anywhere.

How to Learn from Experts

32 SatvikBeri 04 October 2013 05:02PM

The key difference between experts and beginners is the quality of their abstractions. Masters of a field mentally organize information in a way that's relevant to the tasks at hand. Amateurs may know as many facts and details as experts but group them in haphazard or irrelevant ways.

For example, experienced Bridge players group cards by suit, then number. They place the most importance on the face cards and work down. Bridge amateurs group solely by number and place equal importance on all numbers. Professional firemen group fires by how the fire was started and how fast it’s spreading: features they use to contain the fire. Novices group fires by brightness and color. Both have the same information, but the firemen home in on the useful details faster.1

Learn abstractions from masters. If you ask a Software Architect which database technology you should use, circumstances will eventually change and you'll need to ask them again and pay them again. But if you ask the Architect to teach you how to choose a database, then you can adapt to changing circumstances. Ideally you should emerge with a clear set of rules, something like a flow-chart for that decision. A good example is this article on whether you should use Hadoop. Clear criteria let you make a high-quality decision by focusing on the relevant details.
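As a sketch of what such a flow-chart might look like once written down, here is a toy decision rule. The questions and the cutoff are invented for illustration, not the linked article's actual advice:

```python
# Hypothetical decision rule extracted from an imagined expert interview.
# The threshold is invented; the point is that the criteria are explicit
# enough to reuse yourself when circumstances change.
def should_use_hadoop(data_size_gb: float, fits_on_one_machine: bool) -> bool:
    if fits_on_one_machine:
        return False  # ordinary tools (SQL, command line, a scripting language) suffice
    return data_size_gb > 5000  # invented cutoff for "too big for simpler distributed tools"

print(should_use_hadoop(50, True))      # False
print(should_use_hadoop(20000, False))  # True
```

Writing the rule out this explicitly is also what makes the next step (sending it back to the expert for corrections) possible.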

After talking to the expert you can write up the flow-chart or criteria and send it to them to get their opinion. This ensures you understood what the expert was trying to say, and lets you get additional details they might add. Most importantly it gives them something valuable to share with people seeking similar advice, so you're able to add value to their lives as a thank you for their advice. 

Caveats to this method:

  • In some domains there are details only professionals know. Academic research has a secret paper-passing network with ideas known to top researchers 1-2 years before they’re published. So you need to be in constant contact with these experts and hear the details from them. However, this typically only matters if you’re aiming to become a top-class expert yourself. 
  • Experts aren’t always conscious of the abstractions they use. They’ll say one thing and do another. So you should ask them to guide you through a specific situation, and ask several questions about how their decision would change if some conditions were different.
  • You may not have a specific question you want answered - you might want to find “unknown unknowns.” In that case ask the expert for stories: things they did that made a big difference. Then analyze those situations to figure out what criteria they used.

Crush Your Uncertainty

16 [deleted] 03 October 2013 05:48AM

Bayesian epistemology and decision theory provide a rigorous foundation for dealing with mixed or ambiguous evidence, uncertainty, and risky decisions. You can't always get the epistemic conditions that classical techniques like logic or maximum likelihood require, so this is seriously valuable. However, having internalized this new set of tools, it is easy to fall into the bad habit of not avoiding situations that force you to use them.

When I first saw the light of an epistemology based on probability theory, I tried to convince my father that the Bayesian answer to problems involving an unknown process (eg. Laplace's rule of succession) was superior to the classical (eg. maximum likelihood) answer. He resisted, with the following argument:

  • The maximum likelihood estimator plus some measure of significance is easier to compute.
  • In the limit of lots of evidence, this agrees with Bayesian methods.
  • When you don't have enough evidence for statistical significance, the correct course of action is to collect more evidence, not to take action based on your current knowledge.

I added conditions (eg. what if there is no more evidence and you have to make a decision now?) until he grudgingly stopped fighting the hypothetical and agreed that the Bayesian framework was superior in some situations (months later, mind you).
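The disagreement is easy to make concrete. Suppose you've watched an unknown process succeed 3 times out of 3 (numbers invented for illustration):

```python
successes, trials = 3, 3

# Classical maximum-likelihood estimate: the raw observed frequency.
mle = successes / trials                  # 1.0 -- certain the next trial succeeds

# Laplace's rule of succession: (s + 1) / (n + 2), the posterior mean
# under a uniform prior over the process's success probability.
laplace = (successes + 1) / (trials + 2)  # 0.8 -- hedged toward 1/2

print(mle, laplace)  # 1.0 0.8
```

With lots of evidence the two estimates converge, which is exactly the second bullet above; they only part ways when data is scarce.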

I now realize that he was right to fight that hypothetical, and he was right that you should prefer classical max likelihood plus significance in most situations. But of course I had to learn this the hard way.

It is not always, or even often, possible to get overwhelming evidence. Sometimes you only have visibility into one part of a system. Sometimes further tests are expensive, and you need to decide now. Sometimes the decision is clear even without further information. The advanced methods can get you through such situations, so it's critical to know them, but that doesn't mean you can laugh in the face of uncertainty in general.

At work, I used to do a lot of what you might call "cowboy epistemology". I quite enjoyed drawing useful conclusions from minimal evidence and careful probability-literate analysis. Juggling multiple hypotheses and visualizing probability flows between them is just fun. This seems harmless, or even helpful, but it meant I didn't take gathering redundant data seriously enough. I now think you should systematically and completely crush your uncertainty at all opportunities. You should not be satisfied until exactly one hypothesis has non-negligible probability.

Why? Suppose I'm investigating a system. We're not completely clear on what's going on, but the current data is enough to suggest a course of action, and value-of-information calculations say the decision is unlikely enough to change that further investigation isn't worth it. Why then should I go and pin down the details anyway?

The first reason is the obvious one: stronger evidence can make up for human mistakes. While a lot can be said for its power, the human brain is not a precise instrument; sometimes you'll feel a little more confident, sometimes a little less. As you gather evidence towards the point where you feel you have enough, that random fluctuation can cause you to stop early. But this only suggests that you should have a small bias towards gathering a bit more evidence.

The second reason is that though you may be able to make the correct immediate decision, going into the future, that residual uncertainty will eventually bite you. Your habits and heuristics derived from the initial investigation will diverge from what's actually going on. You would not expect this in a perfect reasoner, who would always use their full uncertainty in all calculations; but again, the human brain is a blunt instrument, and likes to simplify things. What was once a nuanced probability distribution like 95% X, 5% Y might slip to just X when you're not quite looking, and then, 5% of the time, something comes back from the grave to haunt you.
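A toy calculation (all numbers invented) of what the rounded-off tail costs:

```python
# You remember "95% X, 5% Y" as simply "X". Suppose acting as if X when
# Y turns out true costs 200 units (an invented figure). The expected
# loss per decision from the dropped tail is then:
p_y = 0.05
cost_if_y = 200.0
expected_loss = p_y * cost_if_y
print(expected_loss)  # 10.0
```

Small per decision, but it compounds across every habit and heuristic built on the simplified belief.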

The third reason is computational complexity. Inference with very high certainty is easy; it's often just simple direct math or a clear intuitive visualization. With a lot of uncertainty, on the other hand, you need to do your computation once for each probable world (or a sample of them), or you need to find a shortcut (eg analytic methods), which is only sometimes possible. This is an unavoidable problem for any bounded reasoner.

For example, you simply would not be able to design chips or computer programs if you could not treat transistors as infallible logical gates, and if you really really had to do so, the first thing you would do would be to build an error-correcting base system on top of which you could treat computation as approximately deterministic.

It is possible in small problems to manage uncertainty with advanced methods (eg. Bayes), and this is very much necessary while you decide how to get more certainty, but for unavoidable computational reasons, it is not sustainable in the long term, and must be a temporary condition.

If you make a habit of crushing your uncertainty, your model of situations can be much simpler and you won't have to deal with residual uncertainty from previous related investigations. Instead of many possible worlds and nuanced probability distributions to remember and gum up your thoughts, you can deal with simple, clear, unambiguous facts.

My previous cowboy-epistemologist self might have agreed with everything written here, but failed to really get that uncertainty is bad. Having just been empowered to deal with uncertainty properly, there was a tendency not just to be unafraid of uncertainty, but to think it was OK, or even to glorify it. What I'm trying to convey here is that that aesthetic is mistaken, and as silly as it feels to repeat something so elementary: uncertainty is to be avoided. More viscerally, uncertainty is uncool (unjustified confidence is even less cool, though).

So what's this all got to do with my father's classical methods? I still very much recommend thinking in terms of probability theory when working on a problem; it is, after all, the best basis for epistemology that we know of, and is perfectly adequate as an intuitive framework. It's just that it's expensive, and in the epistemic state you really want to be in, that expense is redundant in the sense that you can just use some simpler method that converges to the Bayesian answer.

I could leave you with an overwhelming pile of examples, but I have no particular incentive to crush your uncertainty, so I'll just remind you to treat hypotheses like zombies; always double tap.

A question about utilitarianism and selfishness.

-2 abcd_z 29 September 2013 01:03AM

Utilitarianism seems to indicate that the greatest good for the most people generally revolves around their feelings.  A person feeling happy and confident is a desired state, a person in pain and misery is undesirable.

But what about taking selfish actions that hurt another person's feelings?  If I'm in a relationship and breaking up with her would hurt her feelings, does that mean I have a moral obligation to stay with her?  If I have an employee who is well-meaning but isn't working out, am I morally allowed to fire him?  Or what about at a club?  A guy is talking to a woman, and she's ready to go home with him.  I could socially tool him and take her home myself, but doing so would cause him greater unhappiness than I would have felt if I'd left them alone.

In a nutshell, does utilitarianism state that I am morally obliged to curb my selfish desires so that other people can be happy?

Making Fun of Things is Easy

32 katydee 27 September 2013 03:10AM

Making fun of things is actually really easy if you try even a little bit. Nearly anything can be made fun of, and in practice nearly anything is made fun of. This is concerning for several reasons.

First, if you are trying to do something, whether or not people make fun of it is not necessarily a good signal of whether it's actually good: a lot of good things get made fun of, and a lot of bad things do too.[1] Optimally, only bad things would get made fun of, making it easy to determine what is good and bad - but this doesn't appear to be the case.

Second, if you want to make something sound bad, it's really easy. If you don't believe this, just take a politician or organization that you like and search for some criticism of it. It should generally be trivial to find people that are making fun of it for reasons that would sound compelling to a casual observer - even if those reasons aren't actually good. But a casual observer doesn't know that and thus can easily be fooled.[2]

Further, because it's so easy to make fun of things, a clever person can find themselves unnecessarily contemptuous of anything and everything. This sort of premature cynicism is a failure mode I've noticed in many otherwise very intelligent people. Finding faults with things is pretty trivial, but you can quickly go from "it's easy to find faults with everything" to "everything is bad." This tends to be an undesirable mode of thinking - even if true, it's not particularly helpful.

[1] Whether or not something gets made fun of by the right people is a better indicator. That said, if you know who the right people are you usually have access to much more reliable methods.

[2] If you're still not convinced, take a politician or organization that you do like and really truly try to write an argument against that politician or organization. Note that this might actually change your opinion, so be warned.

Use Your Identity Carefully

76 Ben_LandauTaylor 22 August 2013 01:14AM

 

In Keep Your Identity Small, Paul Graham argues against associating yourself with labels (e.g. “libertarian,” “feminist,” “gamer,” “American”) because labels constrain what you’ll let yourself believe. It’s a wonderful essay that’s led me to make concrete changes in my life. That said, it’s only about 90% correct. I have two issues with Graham’s argument; one is a semantic quibble, but it leads into the bigger issue, which is a tactic I’ve used to become a better person.

Graham talks about the importance of identity in determining beliefs. This isn’t quite the right framework. I’m a fanatical consequentialist, so I care what actions people take. Beliefs can constrain actions, but identity can also constrain actions directly.

To give a trivial example from the past week in which beliefs didn’t matter: I had a self-image as someone who didn’t wear jeans or t-shirts. As it happens, there are times when wearing jeans is completely fine, and when other people wore jeans in casual settings, I knew it was appropriate. Nevertheless, I wasn’t able to act on this belief because of my identity. (I finally realized this was silly, consciously discarded that useless bit of identity, and made a point of wearing jeans to a social event.)

Why is this distinction important? If we’re looking at identity from an action-centered framework, this recommends a different approach from Graham’s.

Do you want to constrain your beliefs? No; you want to go wherever the evidence pushes you. “If X is true, I desire to believe that X is true. If X is not true, I desire to believe that X is not true.” Identity will only get in the way.

Do you want to constrain your actions? Yes! Ten thousand times yes! Akrasia exists. Commitment devices are useful. Beeminder is successful. Identity is one of the most effective tools for the job, if you wield it deliberately.

I’ve cultivated an identity as a person who makes events happen. It took months to instill, but now, when I think “I wish people were doing X,” I instinctively start putting together a group to do X. This manifests in minor ways, like the tree-climbing expedition I put together at the Effective Altruism Summit, and in big ways, like the megameetup we held in Boston. If I hadn’t used my identity to motivate myself, neither of those things would’ve happened, and my life would be poorer.

Identity is powerful. Powerful things are dangerous, like backhoes and bandsaws. People use them anyway, because sometimes they’re the best tools for the job, and because safety precautions can minimize the danger.

Identity is hard to change. Identity can be difficult to notice. Identity has unintended consequences. Use this tool only after careful deliberation. What would this identity do to your actions? What would it do to your beliefs? What social consequences would it have? Can you do the same thing with a less dangerous tool? Think twice, and then think again, before you add to your identity. Most identities are a hindrance.

But please, don’t discard this tool just because some things might go wrong. If you are willful, and careful, and wise, then you can cultivate the identity of the person you always wanted to be.

What makes us think _any_ of our terminal values aren't based on a misunderstanding of reality?

17 bokov 25 September 2013 11:09PM

Let's say Bob's terminal value is to travel back in time and ride a dinosaur.

It is instrumentally rational for Bob to study physics so he can learn how to build a time machine. As he learns more physics, Bob realizes that his terminal value is not only utterly impossible but meaningless. By definition, someone in Bob's past riding a dinosaur is not a future evolution of the present Bob.

There are a number of ways to create the subjective experience of having gone into the past and ridden a dinosaur. But to Bob, it's not the same because he wanted both the subjective experience and the knowledge that it corresponded to objective fact. Without the latter, he might as well have just watched a movie or played a video game.

So if we took the original, innocent-of-physics Bob and somehow calculated his coherent extrapolated volition, we would end up with a Bob who has given up on time travel. The original Bob would not want to be this Bob.

But, how do we know that _anything_ we value won't similarly dissolve under sufficiently thorough deconstruction? Let's suppose for a minute that all "human values" are dangling units; that everything we want is as possible and makes as much sense as wanting to hear the sound of blue or taste the flavor of a prime number. What is the rational course of action in such a situation?

PS: If your response resembles "keep attempting to XXX anyway", please explain what privileges XXX over any number of other alternatives other than your current preference. Are you using some kind of pre-commitment strategy to a subset of your current goals? Do you now wish you had used the same strategy to precommit to goals you had when you were a toddler?
