
Fighting Biases and Bad Habits like Boggarts

4 palladias 21 August 2014 05:07PM

TL;DR: Building humor into your habits for spotting and correcting errors makes the fix more enjoyable, makes it easier to talk about and to get social support, and limits the danger of a contempt spiral.

 

One of the most reliably bad decisions I've made on a regular basis is the choice to stay awake (well, "awake") and on the internet past the point where I can get work done, or even have much fun.  I went through a spell where I even fell asleep on the couch more nights than not, unable to muster the will or judgement to get up and go downstairs to bed.

I could remember (even sometimes in the moment) that this was a bad pattern, but, the more tired I was, the more tempting it was to think that I should just buckle down and apply more willpower to be more awake and get more out of my computer time.  Going to bed was a solution, but it was hard for it not to feel (to my sleepy brain and my normal one) like a bit of a cop out.

Only two things helped me really keep this failure mode in check.  One was setting a hard bedtime (and beeminding it) as part of my sacrifice for Advent.   But the other key tool (which has lasted me long past Advent) is the gif below.

[gif: a child falling asleep while trying to eat an ice cream cone]

The poor kid struggling to eat his ice cream cone, even in the face of his exhaustion, is hilarious.  And not too far off the portrait of me around 2am scrolling through my Feedly.

Thinking about how stupid or ineffective or insufficiently strong-willed I'm being makes it hard for me to do anything that feels like a retreat from my current course of action.  I want to master the situation and prove I'm stronger.  But catching on to the fact that my current situation (of my own making or not) is ridiculous, makes it easier to laugh, shrug, and move on.

I think the difference is that it's easy for me to feel contemptuous of myself when frustrated, and easy to feel fond when amused.

I've tried to strike the new emotional tone when I'm working on catching and correcting other errors.  (e.g. "Stupid, you should have known to leave more time to make the appointment!  Planning fallacy!" becomes "Heh, I guess you thought that adding two 'trivially short' errands was a closed set, and must remain 'trivially short.'  That's a pretty silly error.")

In the first case, noticing and correcting an error feels punitive, since it's quickly followed by a hefty dose of flagellation, but the second comes with a quick laugh and an easier shift to a growth-mindset framing.  Funny stories about errors are also easier to tell, increasing the chance my friends can help catch me out next time, or that I'll be better at spotting the error just by keeping it fresh in my memory. Not to mention, in order to get the joke, I tend to look for a more specific cause of the error than stupid/lazy/etc.

As far as I can tell, it also helps that amusement is a pretty different feeling than the ones that tend to be active when I'm falling into error (frustration, anger, feeling trapped, impatience, etc).  So, for a couple of seconds at least, I'm out of the rut and now need to actively return to it to stay stuck. 

The heat of the moment of anger/akrasia/etc. is a bad time to figure out what's funny, but, if you're reflecting on your errors after the fact, in a moment of consolation, it's easier to go back armed with a helpful reframing, ready to cast Riddikulus!

 

Crossposted from my personal blog, Unequally Yoked.

Another type of intelligence explosion

5 Stuart_Armstrong 21 August 2014 02:49PM

I've argued that we might have to worry about dangerous non-general intelligences. In a series of back-and-forth exchanges with Wei Dai, we agreed that some level of general intelligence (such as that humans seem to possess) seemed to be a great advantage, though possibly one with diminishing returns. Therefore a dangerous AI could be one with great narrow intelligence in one area, and a little bit of general intelligence in others.

The traditional view of an intelligence explosion is that of an AI that knows how to do X, suddenly getting (much) better at doing X, to a level beyond human capacity. Call this the gain of aptitude intelligence explosion. We can prepare for that, maybe, by tracking the AI's ability level and seeing if it shoots up.

But the example above hints at another kind of potentially dangerous intelligence explosion. That of a very intelligent but narrow AI that suddenly gains intelligence across other domains. Call this the gain of function intelligence explosion. If we're not looking specifically for it, it may not trigger any warnings - the AI might still be dumber than the average human in other domains. But this might be enough, when combined with its narrow superintelligence, to make it deadly. We can't ignore the toaster that starts babbling.

An example of deadly non-general AI

3 Stuart_Armstrong 21 August 2014 02:15PM

In a previous post, I mused that we might be focusing too much on general intelligences, and that the route to powerful and dangerous intelligences might go through much more specialised intelligences instead. Since it's easier to reason with an example, here is a potentially deadly narrow AI (partially due to Toby Ord). Feel free to comment and improve on it, or suggest your own example.

It's the standard "pathological goal AI" but only a narrow intelligence. Imagine a medicine-designing super-AI with the goal of reducing human mortality in 50 years - i.e. massively reducing human population in the next 49 years. It's a narrow intelligence, so it has access only to a huge amount of human biological and epidemiological research. It must get its drugs past FDA approval; this requirement is encoded as certain physical reactions (no death, some health improvements) to people taking the drugs over the course of a few years.

Then it seems trivial for it to design a drug that would have no negative impact for the first few years, and then cause sterility or death. Since it wants to spread this to as many humans as possible, it would probably design something that interacted with common human pathogens - colds, flus - in order to spread the impact, rather than affecting only those who took the drug.

Now, this narrow intelligence is less threatening than if it had general intelligence - where it could also plan for possible human countermeasures and such - but it seems sufficiently dangerous on its own that we can't afford to worry only about general intelligences. Some of the "AI superpowers" that Nick mentions in his book (intelligence amplification, strategizing, social manipulation, hacking, technology research, economic productivity) could be enough to cause devastation on their own, even if the AI never developed other abilities.

We still could be destroyed by a machine that we outmatch in almost every area.

Why we should err in both directions

5 owencb 21 August 2014 11:10AM

Crossposted from the Global Priorities Project

This is an introduction to the principle that when we are making decisions under uncertainty, we should choose so that we may err in either direction. We justify the principle, explore the relation with Umeshisms, and look at applications in priority-setting.

Some trade-offs

How much should you spend on your bike lock? A cheaper lock saves you money at the cost of security.

How long should you spend weighing up which charity to donate to before choosing one? Longer means less time for doing other useful things, but you’re more likely to make a good choice.

How early should you aim to arrive at the station for your train? Earlier means less chance of missing it, but more time hanging around at the station.

Should you be willing to undertake risky projects, or stick only to safe ones? The safer your threshold, the more confident you can be that you won’t waste resources, but some of the best opportunities may have a degree of risk, and you might be able to achieve a lot more with a weaker constraint.

The principle

We face trade-offs and make judgements all the time, and inevitably we sometimes make bad calls. In some cases we should have known better; sometimes we are just unlucky. As well as trying to make fewer mistakes, we should try to minimise the damage from the mistakes that we do make.

Here’s a rule which can be useful in helping you do this:

When making decisions that lie along a spectrum, you should choose so that you think you have some chance of being off from the best choice in each direction.

We could call this principle erring in both directions. It might seem counterintuitive -- isn’t it worse to not even know what direction you’re wrong in? -- but it’s based on some fairly straightforward economics. I give a non-technical sketch of a proof at the end, but the essence is: if you’re not going to be perfect, you want to be close to perfect, and this is best achieved by putting your actual choice near the middle of your error bar.

So the principle suggests that you should aim to arrive at the station with a bit of time to spare, but not with so large a margin that you could never miss the train even if something went wrong.

Refinements

Just saying that you should have some chance of erring in either direction isn’t enough to tell you what you should actually choose. It can be a useful warning sign in the cases where you’re going substantially wrong, though, and as these are the most important cases to fix it has some use in this form.

A more careful analysis would tell you that at the best point on the spectrum, a small change in your decision produces about as much expected benefit as expected cost. In ideal circumstances we can use this to work out exactly where on the spectrum we should be (in some cases more than one point may fit this, so you need to compare them directly). In practice it is often hard to estimate the marginal benefits and costs well enough for this to be a useful approach. So although it is theoretically optimal, you will only sometimes want to try to apply this version.

Say in our train example that you found missing the train as bad as 100 minutes waiting at the station. Then you want to leave time so that an extra minute of safety margin gives you a 1% reduction in the absolute chance of missing the train.

For instance, say your options in the train case look like this:

Safety margin (min):          1   2   3   4   5   6   7   8    9    10   11   12   13   14   15
Chance of missing train (%):  50  30  15  8   5   3   2   1.5  1.1  0.8  0.6  0.4  0.3  0.2  0.1

Then the optimal safety margin to leave is somewhere between 6 and 7 minutes: this is where the marginal minute leads to a 1% reduction in the chance of missing the train.
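
To make the marginal analysis concrete, here is a minimal Python sketch of the example above; the table values and the 100-minutes-per-miss trade-off come from the text, while the expected-cost simplification (counting the whole margin as wasted waiting time) is mine:

```python
# Expected-cost calculation for the train example above.
# Assumption from the text: missing the train is as bad as 100 minutes of waiting.
MISS_COST_MINUTES = 100

# Safety margin (minutes) -> chance of missing the train (%), from the table above.
miss_chance = {1: 50, 2: 30, 3: 15, 4: 8, 5: 5, 6: 3, 7: 2, 8: 1.5,
               9: 1.1, 10: 0.8, 11: 0.6, 12: 0.4, 13: 0.3, 14: 0.2, 15: 0.1}

def expected_cost(margin):
    # Simplified: count the whole margin as waiting time, plus the expected cost of a miss.
    return margin + (miss_chance[margin] / 100) * MISS_COST_MINUTES

for m in range(1, 15):
    marginal_reduction = miss_chance[m] - miss_chance[m + 1]  # %-points gained by one extra minute
    print(f"{m} -> {m + 1} min: marginal reduction {marginal_reduction:.1f} %-points, "
          f"expected cost {expected_cost(m):.1f} -> {expected_cost(m + 1):.1f} min")

print("Lowest expected cost at", min(miss_chance, key=expected_cost), "minutes of margin")
```

Running this confirms the conclusion above: expected cost bottoms out at a margin of 6-7 minutes, exactly where the marginal minute buys about a 1% reduction in the chance of missing the train.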

Predictions and track records

So far, we've phrased the idea in terms of the predicted outcomes of actions. Another, more well-known, perspective on the idea looks at events that have already happened. For example: if you've never missed a flight, you're spending too much time in airports.

These formulations, dubbed 'Umeshisms', only work for decisions that you make multiple times, so that you can gather a track record.

An advantage of applying the principle to track records is that it’s more obvious when you’re going wrong. Introspection can be hard.

You can even apply the principle to track records of decisions which don't look like they are choosing from a spectrum. For example it is given as advice in the game of bridge: if you don't sometimes double the stakes on hands which eventually go against you, you're not doubling enough. Although doubling or not is a binary choice, erring in both directions still works because 'how often to double' is a trait that roughly falls on a spectrum.

Failures

There are some circumstances where the principle may not apply.

First, if you think the correct point is at one extreme of the available spectrum. For instance nobody says ‘if you’re not worried about going to jail, you’re not committing enough armed robberies’, because we think the best number of armed robberies to commit is probably zero.

Second, if the available points in the spectrum are discrete and few in number. Take the example of the bike locks. Perhaps there are only three options available: the Cheap-o lock (£5), the Regular lock (£20), and the Super lock (£50). You might reasonably decide on the Regular lock, thinking that maybe the Super lock is better, but that the Cheap-o one certainly isn’t. When you buy the Regular lock, you’re pretty sure you’re not buying a lock that’s too tough. But since only two of the locks are good candidates, there is no decision you could make which tries to err in both directions.

Third, in the case of evaluating track records, it may be that your record isn’t long enough to expect to have seen errors in both directions, even if they should both come up eventually. If you haven’t flown that many times, you could well be spending the right amount of time -- or even too little -- in airports, even if you’ve never missed a flight.

Finally, a warning about a case where the principle is not supposed to apply. It shouldn’t be applied directly to try to equalise the probability of being wrong in either direction, without taking any account of magnitude of loss. So for example if someone says you should err on the side of caution by getting an early train to your job interview, it might look as though that were in conflict with the idea of erring in both directions. But normally what’s meant is that you should have a higher probability of failing in one direction (wasting time by taking an earlier train than needed), because the consequences of failing in the other direction (missing the interview) are much higher.

Conclusions and applications to prioritisation

Seeking to err in both directions can provide a useful tool in helping to form better judgements in uncertain situations. Many people may already have internalised key points, but it can be useful to have a label to facilitate discussion. Additionally, having a clear principle can help you to apply it in cases where you might not have noticed it was relevant.

How might this principle apply to priority-setting? It suggests that:

  • You should spend enough time and resources on the prioritisation itself that you think some of the time may have been wasted (for example you should spend a while at the end without changing your mind much), but not so much that you are totally confident you have the right answer.
  • If you are unsure what discount rate to use, you should choose one so that you think that it could be either too high or too low.
  • If you don’t know how strongly to weigh fragile cost-effectiveness estimates against more robust evidence, you should choose a level so that you might be over- or under-weighing them.
  • When you are providing a best-guess estimate, you should choose a figure which could plausibly be wrong either way.

And one on track records:

  • Suppose you’ve made lots of grants. Then if you’ve never backed a project which has failed, you’re probably too risk-averse in your grantmaking.

Questions for readers

Do you know any other useful applications of this idea? Do you know anywhere where it seems to break? Can anyone work out easier-to-apply versions, and the circumstances in which they are valid?

Appendix: a sketch proof of the principle

Assume the true graph of value (on the vertical axis) against the decision you make (on the horizontal axis, representing the spectrum) is smooth, looking something like this: [figure: a smooth, single-peaked curve of value against the decision variable, with its maximum at d]

The highest value is achieved at d, so this is where you’d like to be. But assume you don’t know quite where d is. Say your best guess is that d=g. But you think it’s quite possible that d>g, and quite unlikely that d<g. Should you choose g?

Suppose we compare g to g’, which is just a little bit bigger than g. If d>g, then switching from g to g’ would be moving up the slope on the left of the diagram, which is an improvement. If d=g then it would be better to stick with g, but it doesn’t make so much difference because the curve is fairly flat at the top. And if g were bigger than d, we’d be moving down the slope on the right of the diagram, which is worse for g’ -- but this scenario was deemed unlikely.

Aggregating the three possibilities, we found that two of them were better for sticking with g, but in one of these (d=g) it didn’t matter very much, and the other (d<g) just wasn’t very likely. In contrast, the third case (d>g) was reasonably likely, and noticeably better for g’ than g. So overall we should prefer g’ to g.

In fact we’d want to continue moving until the marginal upside from going slightly higher was equal to the marginal downside; this would have to involve a non-trivial chance that we are going too high. So our choice should have a chance of failure in either direction. This completes the (sketch) proof.

Note: There was an assumption of smoothness in this argument. I suspect it may be possible to get slightly stronger conclusions or work from slightly weaker assumptions, but I’m not certain what the most general form of this argument is. It is often easier to build a careful argument in specific cases.

Acknowledgements: thanks to Ryan Carey, Max Dalton, and Toby Ord for useful comments and suggestions.

Productivity thoughts from Matt Fallshaw

6 John_Maxwell_IV 21 August 2014 05:05AM

At the 2014 Effective Altruism Summit in Berkeley a few weeks ago, I had the pleasure of talking to Matt Fallshaw about the things he does to be more effective.  Matt is a founder of Trike Apps (the consultancy that built Less Wrong), a founder of Bellroy, and a polyphasic sleeper.  Notes on our conversation follow.

Matt recommends having a system for acquiring habits.  He recommends separating collection from processing; that is, if you have an idea for a new habit you want to acquire, you should record the idea at the time you have it and then think about actually implementing it at some future time.  Matt recommends doing this through a weekly review.  He recommends vetting your collection to see what habits seem actually worth acquiring, then for those habits you actually want to acquire, coming up with a compassionate, reasonable plan for how you're going to acquire the habit.

(Previously on LW: How habits work and how you may control them; Common failure modes in habit formation.)

The most difficult kind of habit for me to acquire is that of random-access situation-response habits, e.g. "if I'm having a hard time focusing, read my notebook entry that lists techniques for improving focus".  So I asked Matt if he had any habit formation advice for this particular situation.  Matt recommended trying to actually execute the habit I wanted as many times as possible, even in an artificial context.  Steve Pavlina describes the technique here.  Matt recommends making your habit execution as emotionally salient as possible.  His example: Let's say you're trying to become less of a prick.  Someone starts a conversation with you and you notice yourself experiencing the kind of emotions you experience before you start acting like a prick.  So you spend several minutes explaining to them the episode of disagreeableness you felt coming on and how you're trying to become less of a prick before proceeding with the conversation.  If all else fails, Matt recommends setting a recurring alarm on your phone that reminds you of the habit you're trying to acquire, although he acknowledges that this can be expensive.

Part of your plan should include a check to make sure you actually stick with your new habit.  But you don't want a check that's overly intrusive.  Matt recommends keeping an Anki deck with a card for each of your habits.  Then during your weekly review session, you can review the cards Anki recommends for you.  For each card, you can rate the degree to which you've been sticking with the habit it refers to and do something to revitalize the habit if you haven't been executing it.  Matt recommends writing the cards in a form of a concrete question, e.g. for a speed reading habit, a question could be "Did you speed read the last 5 things you read?"  If you haven't been executing a particular habit, check to see if it has a clear, identifiable trigger.

Ideally your weekly review will come at a time you feel particularly "agenty" (see also: Reflective Control).  So you may wish to schedule it at a time during the week when you tend to feel especially effective and energetic.  Consuming caffeine before your weekly review is another idea.

When running into seemingly intractable problems related to your personal effectiveness, habits, etc., Matt recommends taking a step back to brainstorm and try to think of creative solutions.  He says that oftentimes people will write off a task as "impossible" if they aren't able to come up with a solution in 30 seconds.  He recommends setting a 5-minute timer.

In terms of habits worth acquiring, Matt is a fan of speed reading, Getting Things Done, and the Theory of Constraints (especially useful for larger projects).

Matt has found that through aggressive habit acquisition, he's been able to experience a sort of compound return on the habits he's acquired: by acquiring habits that give him additional time and mental energy, he's been able to reinvest some of that additional time and mental energy into the acquisition of even more useful habits.  Matt doesn't think he's especially smart or high-willpower relative to the average person in the Less Wrong community, and credits this compounding for the reputation he's acquired for being a badass.

Anthropics doesn't explain why the Cold War stayed Cold

3 KnaveOfAllTrades 20 August 2014 07:23PM

(Epistemic status: There are some lines of argument that I haven’t even started here, which potentially defeat the thesis advocated here. I don’t go into them because this is already too long or I can’t explain them adequately without derailing the main thesis. Similarly some continuations of chains of argument and counterargument begun here are terminated in the interest of focussing on the lower-order counterarguments. Overall this piece probably overstates my confidence in its thesis. It is quite possible this post will be torn to pieces in the comments—possibly by my own aforementioned elided considerations. That’s good too.)

I

George VI, King of the United Kingdom, had five siblings. That is, the father of current Queen Elizabeth II had as many siblings as on a typical human hand. (This paragraph is true, and is not a trick; in particular, the second sentence of this paragraph really is trying to disambiguate and help convey the fact in question and relate it to prior knowledge, rather than introduce an opening for some sleight of hand so I can laugh at you later, or whatever fear such a suspiciously simple proposition might engender.)

Let it be known.

II

Exactly one of the following stories is true:

Story One

Recently I hopped on Facebook and saw the following post:

“I notice that I am confused about why a nuclear war never occurred. Like, I think (knowing only the very little I know now) that if you had asked me, at the start of the Cold War or something, the probability that it would eventually lead to a nuclear war, I would've said it was moderately likely. So what's up with that?”


The post had 14 likes. In the comments, the most-Liked explanation was:

“anthropically you are considerably more likely to live in a world where there never was a fullscale nuclear war”

That comment had 17 Likes. The second-most-liked comment that offered an explanation had 4 Likes.

Story Two

continue reading »

Thought experiments on simplicity in logical probability

1 Manfred 20 August 2014 05:25PM

A common feature of many proposed logical priors is a preference for simple sentences over complex ones. This is sort of like an extension of Occam's razor into math. Simple things are more likely to be true. So, as it is said, "why not?"

 

Well, the analogy has some wrinkles - unlike hypothetical rules for the world, logical sentences do not form a mutually exclusive set. Instead, for every sentence A there is a sentence not-A with pretty much the same complexity, and probability 1-P(A). So you can't make the probability smaller for all complex sentences, because their negations are also complex sentences! If you don't have any information that discriminates between them, A and not-A will both get probability 1/2 no matter how complex they get.

But if our agent knows something that breaks the symmetry between A and not-A, like that A belongs to a mutually exclusive and exhaustive set of sentences with differing complexities, then it can assign higher probabilities to simpler sentences in this set without breaking the rules of probability. Except, perhaps, the rule about not making up information.
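
To illustrate that last point concretely, here is a minimal sketch; the candidate sentences, the use of raw string length as a stand-in for apparent complexity, and the exponential weighting are all chosen purely for illustration. It assigns more probability to simpler members of a mutually exclusive and exhaustive set while still summing to 1:

```python
# A simplicity-weighted prior over a mutually exclusive and exhaustive set of answers.
# Crude stand-in for "apparent complexity": the length of the sentence as written.
answers = ["x = 0", "x = 2**10 - 24", "x = 3*(7**8) + 5*(2**20) - 13"]

def apparent_complexity(sentence):
    return len(sentence)  # a brief search for short programs printing the sentence would be better

# Exponential simplicity weighting, normalised so the probabilities sum to 1.
weights = {s: 2.0 ** (-apparent_complexity(s)) for s in answers}
total = sum(weights.values())
prior = {s: w / total for s, w in weights.items()}

for sentence, p in prior.items():
    print(f"P({sentence!r}) = {p:.6f}")
print("sum:", sum(prior.values()))  # 1 up to floating point, so no probability rules are broken
```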

The question: is the simpler answer really more likely to be true than the more complicated answer, or is this just a delusion? If it is more likely, is that for some ontologically basic reason, or for a contingent and explainable reason?

 

There are two complications to draw your attention to. The first is in what we mean by complexity. Although it would be nice to use the Kolmogorov complexity of any sentence, which is the length of the shortest program that prints the sentence, such a thing is uncomputable by the kind of agent we want to build in the real world. The only thing our real-world agent is assured of seeing is the length of the sentence as-is. We can also find something in between Kolmogorov complexity and length by doing a brief search for short programs that print the sentence - this is the meaning usually intended in this article, and I'll call it "apparent complexity."

The second complication is in what exactly a simplicity prior is supposed to look like. In the case of Solomonoff induction the shape is exponential - more complicated hypotheses are exponentially less likely. But why not a power law? Why not even a Poisson distribution? Does the difficulty of answering this question mean that thinking that simpler sentences are more likely is a delusion after all?

 

Thought experiments:

1: Suppose our agent knew from a trusted source that some extremely complicated sum could only be equal to A, or to B, or to C, which are three expressions of differing complexity. What are the probabilities?

 

Commentary: This is the most sparse form of the question. Not very helpful regarding the "why," but handy to stake out the "what." Do the probabilities follow a nice exponential curve? A power law? Or, since there are just the three known options, do they get equal consideration?

This is all based off intuition, of course. What does intuition say when various knobs of this situation are tweaked - if the sum is of unknown complexity, or of complexity about that of C? If there are a hundred options, or countably many? Intuitively speaking, does it seem like favoring simpler sentences is an ontologically basic part of your logical prior?

 

2: Consider subsequences of the digits of pi. If I give you a pair (n,m), you can tell me the m digits following the nth digit of pi. So if I start a sentence like "the subsequence of digits of pi (10^100, 10^2) = ", do you expect to see simpler strings of digits on the right side? Is this a testable prediction about the properties of pi?

 

Commentary: We know that there is always a short-ish program to produce the sequences, which is just to compute the relevant digits of pi. This sets a hard upper bound on the possible Kolmogorov complexity of sequences of pi (that grows logarithmically as you increase m and n), and past a certain m this will genuinely start restricting complicated sequences, and thus favoring "all zeros" - or does it?

After all, this is weak tea compared to an exponential simplicity prior, for which the all-zero sequence would be hojillions of times more likely than a messy one. On the other hand, an exponential curve allows sequences with higher Kolmogorov complexity than the computation of the digits of pi.

Does the low-level view outlined in the first paragraph above demonstrate that the exponential prior is bunk? Or can you derive one from the other with appropriate simplifications (keeping in mind Kolmogorov complexity vs. apparent complexity)? Does pi really contain more long simple strings than expected, and if not what's going on with our prior?

 

3: Suppose I am writing an expression that I want to equal some number you know - that is, the sentence "my expression = your number" should be true. If I tell you the complexity of my expression, what can you infer about the likelihood of the above sentence?

 

Commentary: If we had access to Kolmogorov complexity of your number, then we could completely rule out answers that were too K-simple to work. With only an approximation, it seems like we can still say that simple answers are less likely up to a point. Then as my expression gets more and more complicated, there are more and more available wrong answers (and, outside of the system a bit, it becomes less and less likely that I know what I'm doing), and so probability goes down.

In the limit that my expression is much more complex than your number, does an elegant exponential distribution emerge from underlying considerations?

Polling Thread

5 Gunnar_Zarncke 20 August 2014 02:36PM

The next installment of the Polling Thread.

This is your chance to ask the multiple choice question you always wanted to throw in. Get qualified numeric feedback to your comments. Post fun polls.

These are the rules:

  1. Each poll goes into its own top level comment and may be commented there.
  2. You must at least vote in all polls that were posted earlier than your own. This ensures participation in all polls and also limits the total number of polls. You may of course vote without posting a poll.
  3. Your poll should include a 'don't know' option (to avoid conflict with 2). I don't know whether we need to add a troll catch option here but we will see.

If you don't know how to make a poll in a comment look at the Poll Markup Help.


This is a somewhat regular thread. If it is successful I may post again. Or you may. In that case do the following:

  • Use "Polling Thread" in the title.
  • Copy the rules.
  • Add the tag "poll".
  • Link to this Thread or a previous Thread.
  • Create a top-level comment saying 'Discussion of this thread goes here; all other top-level comments should be polls or similar'
  • Add a second top-level comment with an initial poll to start participation.

"Follow your dreams" as a case study in incorrect thinking

16 cousin_it 20 August 2014 01:18PM

This post doesn't contain any new ideas that LWers don't already know. It's more of an attempt to organize my thoughts and have a writeup for future reference.

Here's a great quote from Sam Hughes, giving some examples of good and bad advice:

"You and your gaggle of girlfriends had a saying at university," he tells her. "'Drink through it'. Breakups, hangovers, finals. I have never encountered a shorter, worse, more densely bad piece of advice." Next he goes into their bedroom for a moment. He returns with four running shoes. "You did the right thing by waiting for me. Probably the first right thing you've done in the last twenty-four hours. I subscribe, as you know, to a different mantra. So we're going to run."

The typical advice given to young people who want to succeed in highly competitive areas, like sports, writing, music, or making video games, is to "follow your dreams". I think that advice is up there with "drink through it" in terms of sheer destructive potential. If it was replaced with "don't bother following your dreams" every time it was uttered, the world might become a happier place.

The amazing thing about "follow your dreams" is that thinking about it uncovers a sort of perfect storm of biases. It's fractally wrong, like PHP, where the big picture is wrong and every small piece is also wrong in its own unique way.

The big culprit is, of course, optimism bias due to perceived control. I will succeed because I'm me, the special person at the center of my experience. That's the same bias that leads us to overestimate our chances of finishing the thesis on time, or having a successful marriage, or any number of other things. Thankfully, we have a really good debiasing technique for this particular bias, known as reference class forecasting, or inside vs outside view. What if your friend Bob was a slightly better guitar player than you? Would you bet a lot of money on Bob making it big like Jimi Hendrix? The question is laughable, but then so is betting the years of your own life, with a smaller chance of success than Bob.

That still leaves many questions unanswered, though. Why do people offer such advice in the first place, why do other people follow it, and what can be done about it?

Survivorship bias is one big reason we constantly hear successful people telling us to "follow our dreams". Successful people don't really know why they are successful, so they attribute it to their hard work and not giving up. The media amplifies that message, while millions of failures go unreported because they're not celebrities, even though they try just as hard. So we hear about successes disproportionately, in comparison to how often they actually happen, and that colors our expectations of our own future success. Sadly, I don't know of any good debiasing techniques for this error, other than just reminding yourself that it's an error.

When someone has invested a lot of time and effort into following their dream, it feels harder to give up due to the sunk cost fallacy. That happens even with very stupid dreams, like the dream of winning at the casino, that were obviously installed by someone else for their own profit. So when you feel convinced that you'll eventually make it big in writing or music, you can remind yourself that compulsive gamblers feel the same way, and that feeling something doesn't make it true.

Of course there are good dreams and bad dreams. Some people have dreams that don't tease them for years with empty promises, but actually start paying off in a predictable time frame. The main difference between the two kinds of dream is the difference between positive-sum games, a.k.a. productive occupations, and zero-sum games, a.k.a. popularity contests. Sebastian Marshall's post Positive Sum Games Don't Require Natural Talent makes the same point, and advises you to choose a game where you can be successful without outcompeting 99% of other players.

The really interesting question to me right now is, what sets someone on the path of investing everything in a hopeless dream? Maybe it's a small success at an early age, followed by some random encouragement from others, and then you're locked in. Is there any hope for thinking back to that moment, or set of moments, and making a little twist to put yourself on a happier path? I usually don't advise people to change their desires, but in this case it seems to be the right thing to do.

Steelmanning MIRI critics

4 fowlertm 19 August 2014 03:14AM

I'm giving a talk to the Boulder Future Salon in Boulder, Colorado in a few weeks on the Intelligence Explosion hypothesis. I've given it once before in Korea but I think the crowd I'm addressing will be more savvy than the last one (many of them have met Eliezer personally). It could end up being important, so I was wondering if anyone considers themselves especially capable of playing Devil's Advocate so I could shape up a bit before my talk? I'd like there to be no real surprises. 

I'd be up for just messaging back and forth or skyping, whatever is convenient.

Quantified Risks of Gay Male Sex

21 pianoforte611 18 August 2014 11:55PM

If you are a gay male then you’ve probably worried at one point about sexually transmitted diseases. Indeed men who have sex with men have some of the highest prevalence of many of these diseases. And if you’re not a gay male, you’ve probably still thought about STDs at one point. But how much should you worry? There are many organizations and resources that will tell you to wear a condom, but very few will tell you the relative risks of wearing a condom vs not. I’d like to provide a concise summary of the risks associated with gay male sex and the extent to which these risks can be reduced. (See Mark Manson’s guide for a similar resource for heterosexual sex.) I will do so by first giving some information about each disease, including its prevalence among gay men. Most of this data will come from the US, but the US actually has an unusually high prevalence for many diseases. Certainly HIV is much less common in many parts of Europe. I will end with a case study of HIV, which will include an analysis of the probabilities of transmission broken down by the nature of the sex act and a discussion of risk reduction techniques.

When dealing with risks associated with sex, there are a few relevant parameters. The most common is the prevalence – the proportion of people in the population that have the disease. Since you can only get a disease from someone who has it, the prevalence is arguably the most important statistic. There are two more relevant statistics – the per-act infectivity (the chance of contracting the disease after having sex once) and the per-partner infectivity (the chance of contracting the disease after having sex with one partner for the duration of the relationship). As it turns out the latter two probabilities are very difficult to calculate. I only obtained those values for HIV. It is especially difficult to determine per-act risks for specific types of sex acts since many MSM engage in a variety of acts with multiple partners. Nevertheless estimates do exist and will be explored in detail in the HIV case study section.
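
As a rough illustration of how per-act and per-partner figures relate, here is a minimal sketch that treats each act as an independent event with a fixed per-act probability. This is a naive model (one reason per-partner infectivity is so hard to pin down is that real transmission does not follow it closely), and the 1% input is a made-up number rather than a figure from the sources below:

```python
# Naive compounding model: each act treated as an independent trial with a fixed
# per-act transmission probability. Real per-partner risk does not follow this model
# closely; it is only meant to show why repeated exposure matters.
def cumulative_risk(per_act_probability, number_of_acts):
    return 1.0 - (1.0 - per_act_probability) ** number_of_acts

per_act = 0.01  # hypothetical 1% per-act risk, for illustration only
for acts in (1, 10, 100):
    print(f"{acts:3d} acts at {per_act:.0%} per act -> {cumulative_risk(per_act, acts):.1%} cumulative risk")
```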

HIV

Prevalence: Between 13% and 28%. My guess is about 13%.

The most infamous of the STDs. There is no cure but it can be managed with anti-retroviral therapy. A commonly reported statistic is that 19% of MSM (men who have sex with men) in the US are HIV positive (1). For black MSM, this number was 28% and for white MSM this number was 16%. This is likely an overestimate, however, since the sample used was gay men who frequent bars and clubs. My estimate of 13% comes from CDC's total HIV prevalence in gay men of 590,000 (2) and their data suggesting that MSM comprise 2.9% of men in the US (3).

 

Gonorrhea

Prevalence: Between 9% and 15% in the US

This disease affects the throat and the genitals but it is treatable with antibiotics. The CDC estimates 15.5% prevalence (4). However, this is likely an overestimate since the sample used was gay men in health clinics. Another sample (in San Francisco health clinics) had a pharyngeal gonorrhea prevalence of 9% (5).

 

Syphilis

Prevalence: 0.825% in the US

My estimate was calculated in the same manner as my estimate for HIV. I used the CDC's data (6). Syphilis is transmittable by oral and anal sex (7) and causes genital sores that may look harmless at first (8). Syphilis is curable with penicillin; however, the presence of sores increases the infectivity of HIV.

 

Herpes (HSV-1 and HSV-2)

Prevalence: HSV-2 - 18.4% (9); HSV-1 - ~75% based on Australian data  (10)

This disease is mostly asymptomatic and can be transmitted through oral or anal sex. Sometimes sores will appear and they will usually go away with time. For the same reason as syphilis, herpes can increase the chance of transmitting HIV. The estimate for HSV-1 is probably too high. Snowball sampling was used and most of the men recruited were heavily involved in organizations for gay men and were sexually active in the past 6 months. Also half of them reported unprotected anal sex in the past six months. The HSV-2 sample came from a random sample of US households (11).

 

Chlamydia

Prevalence: Rectal - 0.5% - 2.3%; Pharyngeal - 3.0% - 10.5% (12)

 Like herpes, it is often asymptomatic - perhaps as low as 10% of infected men report symptoms. It is curable with antibiotics.

 

HPV

Prevalence: 47.2% (13)

This disease is incurable (though a vaccine exists for men and women) but usually asymptomatic. It is capable of causing cancers of the penis, throat and anus. Oddly there are no common tests for HPV, in part because there are many strains (over 100), most of which are relatively harmless. Sometimes it goes away on its own (14). The prevalence rate was oddly difficult to find; the number I cited came from a sample of men from Brazil, Mexico and the US.

 

Case Study of HIV transmission; risks and strategies for reducing risk

 IMPORTANT: None of the following figures should be generalized to other diseases. Many of these numbers are not even the same order of magnitude as the numbers for other diseases. For example, HIV is especially difficult to transmit via oral sex, but Herpes can very easily be transmitted.

Unprotected oral sex per-act risk (with a positive partner or partner of unknown serostatus): non-zero but very small. Best guess 0.03% without condom (15)

Unprotected anal sex per-act risk (with positive partner):

Receptive: 0.82% - 1.4% (16) (17)

Insertive, circumcised: 0.11% (18)

Insertive, uncircumcised: 0.62% (18)

Protected anal sex per-act risk (with positive partner): estimates range from 2 times lower to 20 times lower (16) (19), and the risk is highly dependent on the slippage and breakage rate.


Contracting HIV from oral sex is very rare. In one study, 67 men reported performing oral sex on at least one HIV positive partner and none were infected (20). However, transmission is possible (15). Because instances of oral transmission of HIV are so rare, the risk is hard to calculate and should be taken with a grain of salt. The number cited was obtained from a group of individuals that were either HIV positive or at high risk for HIV. The per-act risk with a positive partner is therefore probably somewhat higher.

Note that different HIV positive men have different levels of infectivity, hence the wide range of values for per-act probability of transmission. Some men with high viral loads (the amount of HIV in the blood) may have an infectivity of greater than 10% per unprotected anal sex act (17).

 

Risk reducing strategies

 Choosing sex acts that have a lower transmission rate (oral sex, protected insertive anal sex, non-insertive) is one way to reduce risk. Monogamy, testing, antiretroviral therapy, PEP and PrEP are five other ways.

 

Testing Your partner/ Monogamy

 If your partner tests negative then they are very unlikely to have HIV. There is a 0.047% chance of being HIV positive if they tested negative using a blood test and a 0.29% chance of being HIV positive if they tested negative using an oral test. If they did further tests then the chance is even lower. (See the section after the next paragraph for how these numbers were calculated).

 So if your partner tests negative, the real danger is not the test giving an incorrect result. The danger is that your partner was exposed to HIV before the test, but his body had not started to make antibodies yet. Since this can take weeks or months, it is possible for your partner who tested negative to still have HIV even if you are both completely monogamous.

 ____

For tests, the sensitivity - the probability that an HIV positive person will test positive - is 99.68% for blood tests (21), 98.03% with oral tests. The specificity - the probability that an HIV negative person will test negative - is 99.74% for oral tests and 99.91% for blood tests. Hence the probability that a person who tested negative will actually be positive is:

 P(Positive | tested negative) = P(Positive)*(1-sensitivity)/(P(Negative)*specificity + P(Positive)*(1-sensitivity)) = 0.047% for blood test, 0.29% for oral test

 Where P(Positive) = Prevalence of HIV, I estimated this to be 13%.
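
As a check on those figures, here is a minimal sketch of the same calculation, using the sensitivities and specificities quoted above and the 13% prevalence estimate; small differences from the numbers in the text come down to rounding:

```python
# P(actually positive | tested negative), following the formula above.
def prob_positive_given_negative_test(prevalence, sensitivity, specificity):
    false_negative = prevalence * (1 - sensitivity)   # positive person, negative result
    true_negative = (1 - prevalence) * specificity    # negative person, negative result
    return false_negative / (true_negative + false_negative)

PREVALENCE = 0.13  # estimated HIV prevalence among MSM, as above

print(prob_positive_given_negative_test(PREVALENCE, sensitivity=0.9968, specificity=0.9991))  # blood: ~0.00048
print(prob_positive_given_negative_test(PREVALENCE, sensitivity=0.9803, specificity=0.9974))  # oral:  ~0.0029
```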

 However, according to a writer for About.com (22) - a doctor who works with HIV - there are often multiple tests which drive the sensitivity up to 99.997%.

 

Home Testing

Oraquick is an HIV test that you can purchase online and do yourself at home. It costs $39.99 for one kit. The sensitivity is 93.64%, the specificity is 99.87% (23). The probability that someone who tested negative will actually be HIV positive is 0.94%, assuming a 13% prevalence of HIV. The same danger mentioned above applies - if the infection occurred recently the test would not detect it.

 

 Anti-Retroviral therapy

Highly active anti-retroviral therapy (HAART), when successful, can reduce the viral load – the amount of HIV in the blood – to low or undetectable levels. Baggaley et al. (17) reports that in heterosexual couples, there have been some models relating viral load to infectivity. She applies these models to MSM and reports that the per-act risk for unprotected anal sex with a positive partner should be 0.061%. However, she notes that different models produce very different results, so this number should be taken with a grain of salt.

 

 Post-Exposure Prophylaxis (PEP)

A last resort if you think you were exposed to HIV is to undergo post-exposure prophylaxis within 72 hours. Antiretroviral drugs are taken for about a month in the hopes of preventing the HIV from infecting any cells. In one case-control study, some health care workers who were exposed to HIV were given PEP and some were not (this was not under the control of the experimenters). Workers that contracted HIV were less likely to have been given PEP, with an odds ratio of 0.19 (24). I don’t know whether PEP is equally effective at mitigating risk from other sources of exposure.

 

 Pre-Exposure Prophylaxis (PrEP)

 This is a relatively new risk reduction strategy. Instead of taking anti-retroviral drugs after exposure, you take anti-retroviral drugs every day in order to prevent HIV infection. I could not find a per-act risk, but in a randomized controlled trial, MSM who took PrEP were less likely to become infected with HIV than men who did not (relative reduction  - 41%). The average number of sex partners was 18. For men who were more consistent and had a 90% adherence rate, the relative reduction was better - 73%. (25) (26).

1: http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5937a2.htm?s_cid=mm5937a2_w

2: http://www.cdc.gov/hiv/statistics/basics/ataglance.html

3: http://www.cdc.gov/nchs/data/ad/ad362.pdf

4: http://www.cdc.gov/std/stats10/msm.htm

5: http://cid.oxfordjournals.org/content/41/1/67.short

6: http://www.cdc.gov/std/syphilis/STDFact-MSM-Syphilis.htm

7: http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5341a2.htm

8: http://www.cdc.gov/std/syphilis/stdfact-syphilis.htm

9: http://journals.lww.com/stdjournal/Abstract/2010/06000/Men_Who_Have_Sex_With_Men_in_the_United_States_.13.aspx

10: http://jid.oxfordjournals.org/content/194/5/561.full

11: http://www.nber.org/nhanes/nhanes-III/docs/nchs/manuals/planop.pdf

12: http://www.cdc.gov/std/chlamydia/STDFact-Chlamydia-detailed.htm

13: http://jid.oxfordjournals.org/content/203/1/49.short

14: http://www.cdc.gov/std/hpv/stdfact-hpv-and-men.htm

15: http://journals.lww.com/aidsonline/pages/articleviewer.aspx?year=1998&issue=16000&article=00004&type=fulltext#P80

16: http://aje.oxfordjournals.org/content/150/3/306.short

17: http://ije.oxfordjournals.org/content/early/2010/04/20/ije.dyq057.full

18: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2852627/

19: http://journals.lww.com/stdjournal/Fulltext/2002/01000/Reducing_the_Risk_of_Sexual_HIV_Transmission_.7.aspx

20: http://journals.lww.com/aidsonline/Fulltext/2002/11220/Risk_of_HIV_infection_attributable_to_oral_sex.22.aspx

21: http://www.thelancet.com/journals/laninf/article/PIIS1473-3099%2811%2970368-1/abstract

22: http://aids.about.com/od/hivpreventionquestions/f/How-Often-Do-False-Positive-And-False-Negative-Hiv-Test-Results-Occur.htm

23: http://www.ncbi.nlm.nih.gov/pubmed/18824617

24: http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD002835.pub3/abstract

25: http://www.nejm.org/doi/full/10.1056/Nejmoa1011205#t=articleResults

26: http://www.cmaj.ca/content/184/10/1153.short

Open thread, 18-24 August 2014

3 David_Gerard 18 August 2014 04:55PM

Previous open thread

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

The metaphor/myth of general intelligence

9 Stuart_Armstrong 18 August 2014 04:04PM

Thanks to Kaj for making me think along these lines.

It's agreed on this list that general intelligences - those that are capable of displaying high cognitive performance across a whole range of domains - are those that we need to be worrying about. This is rational: the most worrying AIs are those with truly general intelligences, and so those should be the focus of our worries and work.

But I'm wondering if we're overestimating the probability of general intelligences, and whether we shouldn't adjust against this.

First of all, the concept of general intelligence is a simple one - perhaps too simple. It's an intelligence that is generally "good" at everything, so we can collapse its various abilities across many domains into "it's intelligent", and leave it at that. It's significant to note that since the very beginning of the field, AI people have been thinking in terms of general intelligences.

And their expectations have been constantly frustrated. We've made great progress in narrow areas, very little in general intelligences. Chess was solved without "understanding"; Jeopardy! was defeated without general intelligence; cars can navigate our cluttered roads while being able to do little else. If we started with a prior in 1956 about the feasibility of general intelligence, then we should be adjusting that prior downwards.

But what do I mean by "feasibility of general intelligence"? There are several things this could mean, not least the ease with which such an intelligence could be constructed. But I'd prefer to look at another assumption: the idea that a general intelligence will really be formidable in multiple domains, and that one of the best ways of accomplishing a goal in a particular domain is to construct a general intelligence and let it specialise.

First of all, humans are very far from being general intelligences. We can solve a lot of problems when the problems are presented in particular, easy to understand formats that allow good human-style learning. But if we picked a random complicated Turing machine from the space of such machines, we'd probably be pretty hopeless at predicting its behaviour. We would probably score very low on the scale of intelligence used to construct the AIXI. The general intelligence, "g", is a misnomer - it designates the fact that the various human intelligences are correlated, not that humans are generally intelligent across all domains.

Humans with computers, and humans in societies and organisations, are certainly closer to general intelligences than individual humans. But institutions have their own blind spots and weaknesses, as does the human-computer combination. Now, there are various reasons advanced for why this is the case - game theory and incentives for institutions, human-computer interfaces and misunderstandings for the second example. But what if these reasons, and other ones we can come up with, were mere symptoms of a more universal problem: that generalising intelligence is actually very hard?

There are no-free-lunch theorems showing that no computable intelligence can perform well in all environments. As far as they go, these theorems are uninteresting, as we don't need intelligences that perform well in all environments, just in almost all/most. But what if a more general restrictive theorem were true? What if it was very hard to produce an intelligence that was of high performance across many domains? What if the performance of a generalist was pitifully inadequate compared with that of a specialist? What if every computable version of AIXI was actually doomed to poor performance?

There are a few strong counters to this - for instance, you could construct good generalists by networking together specialists (this is my standard mental image/argument for AI risk), you could construct an entity that was very good at programming specific sub-programs, or you could approximate AIXI. But we are making some assumptions here - namely, that we can network together very different intelligences (the human-computer interface issues hint at some of the problems), and that a general programming ability can even exist in the first place (for a start, it might require a general understanding of problems that is akin to general intelligence in the first place). And we haven't had great success building effective AIXI approximations so far (which should reduce, possibly slightly, our belief that effective general intelligences are possible).

Now, I remain convinced that general intelligence is possible, and that it's worthy of the most worry. But I think it's worth inspecting the concept more closely, and at least be open to the possibility that general intelligence might be a lot harder than we imagine.

EDIT: Model/example of what a lack of general intelligence could look like.

Imagine there are three types of intelligence - social, spacial and scientific, all on a 0-100 scale. For any combinations of the three intelligences - eg (0,42,98) - there is an effort level E (how hard is that intelligence to build, in terms of time, resources, man-hours, etc...) and a power level P (how powerful is that intelligence compared to others, on a single convenient scale of comparison).

Wei Dai's evolutionary comment implies that any being of very low intelligence on one of the scales would be overpowered by a being of more general intelligence. So let's set power as simply the product of all three intelligences.

This seems to imply that general intelligences are more powerful, as it basically bakes in diminishing returns - but we haven't included effort yet. Imagine that the following three intelligences require equal effort: (10,10,10), (20,20,5), (100,5,5). Then the specialised intelligence is definitely the one you need to build.

But is it plausible that those could be of equal difficulty? It could be, if we assume that high social intelligence isn't so difficult, but is specialised. ie you can increase the spacial intelligence of a social intelligence, but that messes up the delicate balance in its social brain. Or maybe recursive self-improvement happens more easily in narrow domains. Further assume that intelligences of different types cannot be easily networked together (eg combining (100,5,5) and (5,100,5) in the same brain gives an overall performance of (21,21,5)). This doesn't seem impossible.
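
To make the toy model concrete, here is a minimal sketch using the three example profiles above and the product rule for power; the profile names are mine:

```python
# Toy model: power = product of the three intelligence scores (social, spatial, scientific),
# and the three profiles below are assumed to take equal effort to build.
from math import prod

profiles = {"generalist": (10, 10, 10),
            "mixed":      (20, 20, 5),
            "specialist": (100, 5, 5)}

for name, scores in profiles.items():
    print(f"{name:10s}: power = {prod(scores)}")
# generalist: 1000, mixed: 2000, specialist: 2500 -> at equal effort, the specialist wins
```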

So let's caveat the proposition above: the most effective and dangerous type of AI might be one with a bare minimum amount of general intelligence, but an overwhelming advantage in one type of narrow intelligence.

A thought on AI unemployment and its consequences

5 Stuart_Armstrong 18 August 2014 12:10PM

I haven't given much thought to the concept of automation and computer-induced unemployment. Others at the FHI have been looking into it in more detail - see Carl Frey's "The Future of Employment", which did estimates for 70 chosen professions as to their degree of automatability, and extended the results of this using O∗NET, an online service developed for the US Department of Labor, which gave the key features of an occupation as a standardised and measurable set of variables.

The reason that I haven't been looking at it too much is that AI-unemployment has considerably less impact than AI-superintelligence, and thus is a less important use of time. However, if automation does cause mass unemployment, then advocating for AI safety will happen in a very different context to currently. Much will depend on how that mass unemployment problem is dealt with, what lessons are learnt, and the views of whoever is the most powerful in society. Just off the top of my head, I could think of four scenarios on whether risk goes up or down, depending on whether the unemployment problem was satisfactorily "solved" or not:

  • Unemployment problem solved, AI risk reduced: With good practice in dealing with AI problems, people and organisations are willing and able to address the big issues.
  • Unemployment problem unsolved, AI risk reduced: The world is very conscious of the misery that unrestricted AI research can cause, and very wary of future disruptions. Those at the top want to hang on to their gains, and they are the ones with the most control over AIs and automation research.
  • Unemployment problem solved, AI risk increased: Having dealt with the easier automation problems in a particular way (eg taxation), people underestimate the risk and expect the same solutions to work.
  • Unemployment problem unsolved, AI risk increased: Society is locked into a bitter conflict between those benefiting from automation and those losing out, and superintelligence is seen through the same prism. Those who profited from automation are the most powerful, and decide to push ahead.

But of course the situation is far more complicated, with many different possible permutations, and no guarantee that the same approach will be used across the planet. And we shouldn't let the division into four boxes fool us into thinking that the scenarios are all of comparable probability - more research is (really) needed.

A "Holy Grail" Humor Theory in One Page.

3 EGarrett 18 August 2014 10:26AM

Alrighty, with the mass downvoters gone, I can make the leap to posting some ideas. Here, in one page, is the Humor Theory I've been developing over the last few months, which I've discussed at meet-ups and written two SSRN papers about. I've taken the document I posted on the Facebook group and retyped and formatted it here.

I strongly suspect that it's the correct solution to this unsolved problem. There was even a new neurology study released in the last few days that confirms one of the predictions I drew from this theory about the evolution of human intelligence.

Note that I tried to fit as much info as I could on the page, but obviously it's not enough space to cover everything, and the other papers are devoted to that. Any constructive questions, discussion etc are welcome.



 

A "Holy Grail" Humor Theory in One Page.


Plato, Aristotle, Kant, Freud, and hundreds of other philosophers have tried to understand humor. No one has ever found a single idea that explains it in all its forms, or shows what's sufficient to create it. Thus, it's been called a "Holy Grail" of social science. Consider this...


In small groups without language, where we evolved, social orders were needed for efficiency. But fighting for leadership would hurt the group. So a peaceful, nonverbal method of settling status was extremely beneficial. Thus, the "gasp" we make when seeing someone fall evolved into a rapid-fire version at seeing certain failures, which allowed us to signal others to see what happened, and to know who not to follow. The reaction, naturally, would feel good and make us smile, to lower our aggression and show no threat. This reaction is called laughter. The instinct that controls it is called humor. It's triggered by the brain weighing things it observes in the proportion:


Humor = ((Quality_expected - Quality_displayed) * Noticeability * Validity) / Anxiety

 

Or H=((Qe-Qd)NV)/A. When the result of this ratio is greater than 0, we find the thing funny and will laugh - in the smallest amounts, with slight smiles, small feelings of pleasure or small diaphragm spasms. The numerator terms simply state that something has to be significantly lower in quality than what we assumed, and that we must notice it and feel it's real; the denominator states that anxiety lowers the reaction. This is because laughter is a noisy reflex that threatens someone else's status, so if there is a chance of violence from the person, a danger in threatening a loved one's status, or a predator or other threat from making noise, the reflex will be mitigated. The common feeling amongst those situations, anxiety, has come to cause this.
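A minimal sketch of the formula in Python, with invented numbers purely to show how the terms interact (the scales and values are arbitrary, not part of the theory):

```python
# Illustrative only: plugging made-up numbers into H = ((Qe - Qd) * N * V) / A.
def humor(q_expected, q_displayed, noticeability, validity, anxiety):
    return ((q_expected - q_displayed) * noticeability * validity) / anxiety

# A noticed, believable failure by someone we expected more of:
print(humor(q_expected=8, q_displayed=2, noticeability=1.0, validity=1.0, anxiety=1.0))
# 6.0 -> positive, so the theory predicts laughter

# The same failure observed under high anxiety is strongly dampened:
print(humor(8, 2, 1.0, 1.0, anxiety=20.0))
# 0.3 -> a much weaker reaction
```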

This may appear to be an ad hoc hypothesis, but unlike those, this can clearly unite and explain everything we've observed about humor, including our cultural sayings and the scientific observations of the previous incomplete theories. Some noticed that it involves surprise, some noticed that it involves things being incorrect, all noticed the pleasure without seeing the reason. This covers all of it, naturally, and with a core concept simple enough to explain to a child. Our sayings, like "it's too soon" for a joke after a tragedy, can all be covered as well ("too soon" indicates that we still have anxiety associated with the event).

The previous confusion about humor came from a few things. For one, there are at least 4 types of laughter: at ourselves, at others we know, at others we don't know (who have an average expectation), and directly at the person with whom we're speaking. We often laugh for one reason rather than another, as when "bad jokes" make us laugh at the teller. In addition, besides physical failure, like slipping, we also have a basic laugh instinct for mental failure, through misplacement. We sense attempts to order things that have gone wrong. Puns and similar references trigger this. Furthermore, we laugh loudest when we notice multiple errors (quality-gaps) at once, like a person dressed foolishly (such as a court jester) exposing errors by others.

We call this the "Status Loss Theory," and we've written two papers on it. The first is 6 pages, offers a chart of old theories and explains this more, with 7 examples. The second is 27 pages and goes through 40 more examples, applying this concept to sayings, comedians, shows, memes, and other comedy types, and even drawing predictions from the theory that have been verified by very recent neurology studies, to hopefully exhaustively demonstrate the idea's explanatory power. If it's not complete, it should still make enough progress to greatly advance humor study. If it is, it should redefine the field. Thanks for your time.

Group Rationality Diary, August 16-31

1 therufs 18 August 2014 02:33AM

This is the public group instrumental rationality diary for August 16-31. 

It's a place to record and chat about it if you have done, or are actively doing, things like: 

  • Established a useful new habit
  • Obtained new evidence that made you change your mind about some belief
  • Decided to behave in a different way in some set of situations
  • Optimized some part of a common routine or cached behavior
  • Consciously changed your emotions or affect with respect to something
  • Consciously pursued new valuable information about something that could make a big difference in your life
  • Learned something new about your beliefs, behavior, or life that surprised you
  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Thanks to cata for starting the Group Rationality Diary posts, and to commenters for participating.

Previous diary: August 1-15

Rationality diaries archive

[meta] Future moderation and investigation of downvote abuse cases, or, I don't want to deal with this stuff

41 Kaj_Sotala 17 August 2014 02:40PM

Since the episode with Eugine_Nier, I have received three private messages from different people asking me to investigate various cases of suspected mass downvoting. And to be quite honest, I don't want to deal with this. Eugine's case was relatively clear-cut, since he had engaged in systematic downvoting of a massive scale, but the new situations are a lot fuzzier and I'm not sure of what exactly the rules should be (what counts as a permitted use of the downvote system and what doesn't?).

At least one person has also privately contacted me and offered to carry out moderator duties if I don't want them, but even if I told them yes (on what basis? why them and not someone else?), I don't know what kind of policy I should tell them to enforce. I only happened to be appointed a moderator because I was in the list of top 10 posters at a particular time, and I don't feel like I should have any particular authority to make the rules. Nor do I feel like I have any good idea of what the rules should be, or who would be the right person to enforce them.

In any case, I don't want to be doing this job, nor do I particularly feel like being responsible for figuring out who should, or how, or what the heck. I've already started visiting LW less often because I dread having new investigation requests to deal with. So if you folks could be so kind as to figure it out without my involvement? If there's a clear consensus that someone in particular should deal with this, I can give them mod powers, or something.

Thoughts on becoming more organized

-3 Will_BC 17 August 2014 03:46AM

It seems to me that the rationality movement is doing a sub-optimal job at proliferating. I have seen on multiple occasions posts which suggest that LessWrong is in decline. I think that this has a lot to do with organization, and by organization I mean the effectiveness with which a group of people obtains its goals. I believe that rationality has a more populist message, and I would like to see it refined and spread. I have a collection of my thoughts here: https://drive.google.com/folderview?id=0B9BZfCmYSqm-TTlfRW1hMVJ5VnM&usp=sharing, with a more concise and up-to-date summary of my suggestions here: https://docs.google.com/document/d/1I-T-jiuhHr951FUHZ6q-KUW4oK-GHCjF9bVGr2dwR1M/edit?usp=sharing. I have not developed these ideas to the point where I am strongly attached to them. What I would like to do for now is to create three monthly discussion groups.

 

The first is based on instrumental rationality, and I'd like to call it Success Club. For this group I would like to use [Alex Vermeer's 8760 hours guide](http://alexvermeer.com/8760hours/) as a basis. If you want to join this group, I would suggest you be open minded and able to deal with other people's sensitive issues. This will work as a support group, and if you can't keep confidentiality you won't be able to be a member. The second group is based on more general or epistemic rationality. I would use the sequences as a basis, but if any CFAR alums have better suggestions I would welcome them. The third group is a meta group, discussing the movement as a whole and how to make it more effective. I would like to start by discussing the ideas in the google drive folder that I shared and move from there.

 

If anyone is interested in any of these groups, please send me a PM with a little about yourself and which group(s) you'd be interested in joining. 

 

Edit: Could someone explain why I've been downvoted? Judging by the way the karma is proportioned I'm getting a good number of positive and negative reactions but not a whole lot in the way of feedback.

Astray with the Truth: Logic and Math

2 StephenR 16 August 2014 03:40PM

LessWrong has one of the strongest and most compelling presentations of a correspondence theory of truth on the internet, but as I said in A Pragmatic Epistemology,  it has some deficiencies. This post delves into one example: its treatment of math and logic. First, though, I'll summarise the epistemology of the sequences (especially as presented in High Advanced Epistemology 101 for Beginners). 

Truth is the correspondence between beliefs and reality, between the map and the territory.[1] Reality is a causal fabric, a collection of variables ("stuff") that interact with each other.[2] True beliefs mirror reality in some way. If I believe that most maps skew the relative size of Ellesmere Island, it's true when I compare accurate measurements of Ellesmere Island to accurate measurements of other places, and find that the differences aren't preserved in the scaling of most maps. That is an example of a truth-condition, which is a reality that the belief can correspond to. My belief about world maps is true when that scaling doesn't match up in reality. All meaningful beliefs have truth-conditions; they trace out paths in a causal fabric.[3] Another way to define truth, then, is that a belief is true when it traces a path which is found in the causal fabric the believer inhabits.

Beliefs come in many forms. You can have beliefs about your experiences past, present and future; about what you ought to do;  and, relevant to our purposes, about abstractions like mathematical objects. Mathematical statements are true when they are truth-preserving, or valid. They're also conditional: they're about all possible causal fabrics rather than any one in particular.[4] That is, when you take a true mathematical statement and plug in any acceptable inputs,[5] you will end up with a true conditional statement about the inputs. Let's illustrate this with the disjunctive syllogism:

((A∨B) ∧ ¬A) ⇒ B

Letting A be "All penguins ski in December" and B be "Martians have been decimated," this reads "If all penguins ski in December or Martians have been decimated, and some penguins don't ski in December, then Martians have been decimated." And if the hypothesis obtains (if it's true that (A∨B) ∧ ¬A), then the conclusion (B) is claimed to follow.[6] 
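As a quick concreteness check (mine, not part of the original review), here is a small Python sketch that verifies the disjunctive syllogism holds under every classical truth assignment:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q."""
    return (not p) or q

# Check that ((A or B) and not A) -> B holds for every truth assignment,
# i.e. that the disjunctive syllogism is a classical tautology.
print(all(
    implies((a or b) and (not a), b)
    for a, b in product([False, True], repeat=2)
))  # True
```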

That's it for review, now for the substance.

Summary. First, from examining the truth-conditions of beliefs about validity, we see that our sense of what is obvious plays a suspicious role in which statements we consider valid. Second, a major failure mode in following obviousness is that we sacrifice other goals by separating the pursuit of truth from other pursuits. This elevation of the truth via the epistemic/instrumental rationality distinction prevents us from seeing it as one instrumental goal among many which may sometimes be irrelevant.


What are the truth-conditions of a belief that a certain logical form is valid or not? 

A property of valid statements is being able to plug any proposition you like into the propositional variables of the statement without disturbing the outcome (the conditional statement will still be true). Literally any proposition; valid forms are about everything that can be articulated by means of propositions. So part of the truth-conditions of a belief about validity is that if a sentence is valid, everything is a model of it. In that case, causal fabrics, which we investigate by means of propositions,[7] can't help but be constrained by what is logically valid. We would never expect to see some universe where inputting propositions into the disjunctive syllogism can output false without being in error. Call this the logical law view. This suggests that we could check a bunch of inputs and universe constructions until we feel satisfied that the sentence will not fail to output true.

It happens that sentences which people agree are valid are usually sentences that people agree are obviously true. There is something about the structure of our thought that makes us very willing to accept their validity. Perhaps you might say that because reality is constrained by valid sentences, sapient chunks of reality are going to be predisposed to recognising validity ...

But what separates that hypothesis from this alternative: "valid sentences are rules that have been applied successfully in many cases so far"? That is, after all, the very process that we use to check the truth-conditions of our beliefs about validity. We consider hypothetical universes and we apply the rules in reasoning. Why should we go further and claim that all possible realities are constrained by these rules? In the end we are very dependent on our intuitions about what is obvious, which might just as well be due to flaws in our thought as to logical laws. And our insistence on correctness is no excuse. In that regard we may be no different than certain ants that mistake living members of the colony for dead when their bodies are covered in a certain pheromone:[8] prone to a reaction that is just as obviously astray to other minds as it is obviously right to us.

In light of that, I see no reason to be confident that we can distinguish between success in our limited applications and necessary constraint on all possible causal fabrics. 

And despite what I said about "success so far," there are clear cases where sticking to our strong intuition to take the logical law view leads us astray on goals apart from truth-seeking. I give two examples where obsessive focus on truth-seeking consumes valuable resources that could be used toward a host of other worthy goals. 

The Law of Non-Contradiction. This law is probably the most obvious thing in the world. A proposition can't be both true and false, or ¬(P ∧ ¬P). If it were both, then you would have a model of any proposition you could dream of. This is an extremely scary prospect if you hold the logical law view; it means that if you have a true contradiction, reality doesn't have to make sense. Causality and your expectations are meaningless. That is the principle of explosion: (P ∧ ¬P) ⇒ Q, for arbitrary Q. Suppose that pink is my favourite colour, and that it isn't. Then pink is my favourite colour or causality is meaningless. Except pink isn't my favourite colour, so causality is meaningless. Except it is, because either pink is my favourite colour or causality is meaningful, but pink isn't. Therefore pixies, by a similar argument.
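For reference, here is a sketch of the standard derivation behind that argument, using only ∧-elimination, ∨-introduction and the disjunctive syllogism from earlier:

```latex
\begin{align*}
1.\quad & P \land \lnot P && \text{assumption} \\
2.\quad & P               && \text{from 1, $\land$-elimination} \\
3.\quad & \lnot P         && \text{from 1, $\land$-elimination} \\
4.\quad & P \lor Q        && \text{from 2, $\lor$-introduction (for arbitrary } Q\text{)} \\
5.\quad & Q               && \text{from 3 and 4, disjunctive syllogism}
\end{align*}
```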

Is (P ∧ ¬P) ⇒ Q valid? Most people think it is. If you hypnotised me into forgetting that I find that sort of question suspect, I would agree. I can *feel* the pull toward assenting to its validity. If ¬(P ∧ ¬P) is true it would be hard to say why not. But there are nonetheless very good reasons for ditching the law of non-contradiction and the principle of explosion. Despite its intuitive truth and general obviousness, it's extremely inconvenient. Establishing the consistency of systems like PA and ZFC, which are central to mathematics, has proved very difficult. Part of the motivation, of course, is that if there were an inconsistency, the principle of explosion would render the entire system useless. This undesirable effect has led some to develop paraconsistent logics which do not explode with the discovery of a contradiction.

Setting aside whether the law of non-contradiction is really truly true and the principle of explosion really truly valid, wouldn't we be better off with foundational systems that don't buckle over and die at the merest whiff of a contradiction? In any case, it would be nice to alter the debate so that the truth of these statements didn't eclipse their utility toward other goals.

The Law of Excluded Middle. P∨¬P: if a proposition isn't true, then it's false; if it isn't false, then it's true. In terms of the LessWrong epistemology, this means that a proposition either obtains in the causal fabric you're embedded in, or it doesn't. Like the previous example this has a strong intuitive pull. If that pull is correct, all sentences Q ⇒ (P∨¬P) must be valid since everything models true sentences. And yet, though doubting it can seem ridiculous, and though I would not doubt it on its own terms[9], there are very good reasons for using systems where it doesn't hold.

The use of the law of excluded middle in proofs severely inhibits the construction of programmes based on proofs. The barrier is that the law is used in existence proofs, which show that some mathematical object must exist but give no method of constructing it.[10] 

Removing the law, on the other hand, gives us intuitionistic logic. Via a mapping called the Curry-Howard isomorphism all proofs in intuitionistic logic are translatable into programmes in the lambda calculus, and vice versa. The lambda calculus itself, assuming the Church-Turing thesis, gives us all effectively computable functions. This creates a deep connection between proof theory in constructive mathematics and computability theory, facilitating automatic theorem proving and proof verification and rendering everything we do more computationally tractable.
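A toy illustration of the proofs-as-programs idea, written in Python rather than the lambda calculus (the function names and examples are mine, purely for illustration):

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

# Under Curry-Howard, a constructive proof is a program: proving
# "A and B implies B and A" amounts to writing a function that swaps a pair.
def and_commutes(proof: Tuple[A, B]) -> Tuple[B, A]:
    a, b = proof
    return (b, a)

# Proving "A implies (B implies A)" amounts to the constant function.
def weaken(a: A) -> Callable[[B], A]:
    return lambda _b: a

# There is no analogous total program for the excluded middle "A or not A",
# which is one way to see why dropping it ties proofs to computable constructions.
```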

Even if the above weren't tempting and we decided not to restrict ourselves to constructive proofs, we would still be stuck with intuitionistic logic. Just as classical logic is associated with Boolean algebras, intuitionistic logic is associated with Heyting algebras. And it happens that the open set lattice of a topological space is a complete Heyting algebra even in classical topology.[11] This is closely related to topos theory; the internal logic of a topos is at least[12] intuitionistic. As I understand it, many topoi can be considered as foundations for mathematics,[13] and so again we see a classical theory pointing at constructivism suggestively. The moral of the story: in classical mathematics, where the law of excluded middle holds, objects in which it fails arise naturally.

Work in the foundations of mathematics suggests that constructive mathematics is at least worth looking into, setting aside whether the law of excluded middle is too obvious to doubt. Letting its truth hold us back from investigating the merits of living without it cripples the capabilities of our mathematical projects. 


Unfortunately, not all constructivists or dialetheists (as proponents of paraconsistent logic are called) would agree with how I framed the situation. I have blamed the tendency to stick to discussions of truth for our inability to move forward in both cases, but they might blame the inability of their opponents to see that the laws in question are false. They might urge that if we take the success of these laws as evidence of their truth, then failures or shortcomings should be evidence against them, and we should simply revise our views accordingly.

That is how the problem looks when we wear our epistemic rationality cap and focus on the truth of sentences: we consider which experiences could tip us off about which rules govern causal fabrics, and we organise our beliefs about causal fabrics around them. 

This framing of the problem is counterproductive. So long as we are discussing these abstract principles under the constraints of our own minds,[14] I will find any discussion of their truth or falsity highly suspect for the reasons highlighted above. And beyond that, the psychological pull toward the respective positions is too forceful for this mode of debate to make progress on reasonable timescales. In the interests of actually achieving some of our goals I favour dropping that debate entirely.

Instead, we should put on our instrumental rationality cap and consider whether these concepts are working for us. We should think hard about what we want to achieve with our mathematical systems and tailor them to perform better in that regard. We should recognise when a path is moot and trace a different one.

When we wear our instrumental rationality cap, mathematical systems are not attempts at creating images of reality that we can use for other things if we like. They are tools that we use to achieve potentially any goal, and potentially none. If after careful consideration we decide that creating images of reality is a fruitful goal relative to the other goals we can think of for our systems, fine. But that should by no means be the default, and if it weren't mathematics would be headed elsewhere. 


ADDENDUM

[Added due to expressions of confusion in the comments. I have also altered the original conclusion above.]

I gave two broad weaknesses in the LessWrong epistemology with respect to math.

The first concerned its ontological commitments. Thinking of validity as a property of logical laws constraining causal fabrics is indistinguishable in practical purposes from thinking of validity as a property of sentences relative to some axioms or according to strong intuition. Since our formulation and use of these sentences have been in familiar conditions, and since it is very difficult (perhaps impossible) to determine whether their psychological weight is a bias, inferring any of them as logical laws above and beyond their usefulness as tools is spurious. 

The second concerned cases where the logical law view can hold us back from achieving goals other than discovering true things.  The law of non-contradiction and the law of excluded middle are as old as they are obvious, yet they prevent us from strengthening our mathematical systems and making their use considerably easier. 

One diagnosis of this problem might be that sometimes it's best to set our epistemology aside in the interests of practical pursuits, that sometimes our epistemology isn't relevant to our goals. Under this diagnosis, we can take the LessWrong epistemology literally and believe it is true, but temporarily ignore it in order to solve certain problems. This is a step forward, but I would make a stronger diagnosis: we should have a background epistemology guided by instrumental reason, in which the epistemology of LessWrong and epistemic reason are tools that we can use if we find them convenient, but which we are not committed to taking literally.

I prescribe an epistemology that a) sees theories as no different from hammers, b) doesn't take the content of theories literally, and c) lets instrumental reason guide the decision of which theory to adopt when. I claim that this is the best framework to use for achieving our goals, and I call this a pragmatic epistemology.  

---

[1] See The Useful Idea of Truth.

[2] See The Fabric of Real Things and Stuff that Makes Stuff Happen.

[3] See The Useful Idea of Truth and The Fabric of Real Things. 

[4] See Proofs, Implications, and Models and Logical Pinpointing.

[5] Acceptable inputs being given by the universe of discourse (also known as the universe or the domain of discourse), which is discussed on any text covering the semantics of classical logic, or classical model theory in general.

[6] A visual example using modus ponens and cute cuddly kittens is found in Proofs, Implications, and Models.

[7] See The Useful Idea of Truth.

[8] See this paper by biologist E O Wilson.

[9] What I mean is that I would not claim that it "isn't true," which usually makes the debate stagnate. 

[10] For concreteness, read these examples of non-constructive proofs. 

[11] See here, paragraph two. 

[12] Given certain further restrictions, a topos is Boolean and its internal logic is classical. 

[13] This is an amusing and vague-as-advertised summary by John Baez.

[14] Communication with very different agents might be a way to circumvent this. Receiving advice from an AI, for instance. Still, I have reasons to find this fishy as well, which I will explore in later posts. 

Three methods of attaining change

6 Stefan_Schubert 16 August 2014 03:38PM

Say that you want to change some social or political institution: the educational system, the monetary system, research on AGI safety, or what not. When trying to reach this goal, you may use one of the following broad strategies (or some combination of them):

1) You may directly try to lobby (i.e. influence) politicians to implement this change, or try to influence voters to vote for parties that promise to implement these changes. 

2) You may try to build an alternative system and hope that it eventually becomes so popular that it replaces the existing system.

3) You may try to develop tools that a) appeal to users of existing systems and b) whose widespread use is bound to change those existing systems.

Let me give some examples of what I mean. Trying to persuade politicians that we should replace conventional currencies by a private currency or, for that matter, starting a pro-Bitcoin party, fall under 1), whereas starting a private currency and hope that it spreads falls under 2). (This post was inspired by a great comment by Gunnar Zarncke on precisely this topic. I take it that he was there talking of strategy 2.) Similarly, trying to lobby politicians to reform the academia falls under 1) whereas starting new research institutions which use new and hopefully more effective methods falls under 2). I take it that this is what, e.g. Leverage Research is trying to do, in part. Similarly, libertarians who vote for Ron Paul are taking the first course, while at least one possible motivation for the Seasteading Institute is to construct an alternative system that proves to be more efficient than existing governments.

Efficient Voting Advice Applications (VAAs), which advise you how to vote on the basis of your views on different policy matters, can be an example of 3) (they are discussed here). Suppose that voters started to use them on a grand scale. This could potentially force politicians to adhere very closely to the views of the voters on each particular issue, since any politician who failed to do this would stand little chance of winning. This may or may not be a good thing, but the point is that it would be a change caused not by lobbying of politicians or by building an alternative system, but simply by constructing a tool whose widespread use could change the existing system.

Another similar tool is reputation or user review systems. Suppose that you're dissatisfied with the general standards of some institution: say university education, medical care, or what not. You may address this by lobbying politicians to implement new regulations intended to ensure quality (1), or by starting your own, superior, universities or hospitals (2), hoping that others will follow. Another method, however, is to create a reliable reputation/review system which, if it became widely used, would guide students and patients to the best universities and hospitals, thereby incentivizing institutions to improve.

Now of course, when you're trying to get people to use such review systems, you are, in effect, building an evaluation system that competes with existing systems (e.g. the Guardian university ranking), so on one level you are using the second strategy. Your ultimate goal is, however, to create better universities, to which a better evaluation system is just a means (a tool). Hence you're following the third strategy here, in my terms.

Strategy 1) is of course a "statist" one, since what you're doing here is that you're trying to get the government to change the institution in question for you. Strategies 2) and 3) are, in contrast, both "non-statist", since when you use them you're not directly trying to implement the change through the political system. Hence libertarians and other anti-statists should prefer them.

My hunch is that when people are trying to change things, many of them unthinkingly go for 1), even regarding issues where it is unlikely that they are going to succeed that way. (For instance, it seems to me that advocates for direct democracy who try to persuade voters to vote for direct democratic parties are unlikely to succeed, but that widespread use of VAAs might get us considerably closer to their ideal, and that they therefore should opt for the third strategy.) A plausible explanation of this is availability bias: our tendency to focus on what we most often see around us. Attempts to change social institutions through politics get a lot of attention, which makes people think of this strategy first. Even though this strategy is often efficient, I'd guess it is, for this reason, generally overused and that people sometimes instead should go for 2) or 3). (Possibly, Europeans have an even stronger bias in favour of this strategy than Americans.)

I also suspect, though, that people go for 2) a bit too often relative to 3). I think that people find it appealing, for its own sake, to create an entirely alternative structure. If you're a perfectionist, it might be satisfying to see what you consider "the perfect institution", even if it is very small and has little impact on society. Also, sometimes small groups of devotees flock to these alternatives, and a strong group identity is therefore created. Moreover, I think that availability bias may play a role here, too. Even though this sort of strategy gets less attention than lobbying, most people know what it is. It is quite clear what it means to do something like this, and being part of a project like this therefore gives you a clear identity. For these reasons, I think that we might sometimes fool ourselves into believing that these alternative structures are more likely to be successful than they actually are.

Conversely, people might be biased against the third strategy because it's less obvious. Also, it has perhaps something vaguely manipulative about it, which might bias idealistic people against it. What you're typically trying to do is to get people to use a tool (say VAAs) a side-effect of which is the change you wish to attain (in this case, correspondence between voters' views and actual policies). I don't think that this kind of manipulation is necessarily vicious (but it would need to be discussed on a case-by-case basis), but the point is that people tend to think that it is. Also, even those who don't think that it is manipulative in an unethical sense would still think that it is somehow "unheroic". Starting your own environmental party or creating your own artificial libertarian island clearly has something heroic about it, but developing efficient VAAs, which as a side-effect change the political landscape, does not.

I'd thus argue that people should start looking more closely at the third strategy. A group that does use a strategy similar to this is of course for-profit companies. They try to analyze what products would appeal to people, and in so doing, carefully consider how existing institutions shape people's preferences. For instance, companies like Uber, AirBnB and LinkedIn have been succesful because they realized that given the structure of the taxi, the hotel and the recruitment businesses, their products would be appealing.

Of course, these companies' primary goal, profit, is very different from the political goals I'm talking about here. At the same time, I think it is useful to compare the two cases. I think that generally, when we're trying to attain political change, we're not "actually trying" (in CFAR's terminology) as hard as we do when we're trying to maximize profit. It is very easy to fall into a mode where you're focusing on making symbolic gestures (which express your identity) rather than on trying to change things in politics. (This is, in effect, what many traditional charities are doing, if the EA movement is right.)

Instead, we should think as hard as profit-maximizing companies do about what new tools are likely to catch on. Any kind of tool could in principle be used, but the ones that seem most obvious are various kinds of social media and other internet-based tools (such as those mentioned in this post). Technical progress gives us enormous opportunities to construct new tools that could re-shape people's behaviour in a way that would impact existing social and political institutions on a large scale.

Developing such tools is not easy. Even very successful companies again and again fail to predict what new products will appeal to people. Not least, you need a profound understanding of human psychology in order to succeed. That said, political organizations have certain advantages vis-à-vis for-profit companies. More often than not, they can develop ideas publicly, whereas for-profit companies often have to keep them secret until the product is launched. This facilitates wisdom-of-the-crowd reasoning, where many different kinds of people come up with solutions together. Such methods can, in my opinion, be very powerful.

 

Any input regarding, e.g. the taxonomy of methods, my speculations about biases, and, in particular, examples of institution changing tools are welcome. I'm also interested in comments on efficient methods for coming up with useful tools (e.g. tests of them). Finally, if anything's unclear I'd be happy to provide clarifications (it's a very complex topic).

FAI PR tracking well [link]

7 Dr_Manhattan 15 August 2014 09:23PM

This time, it's by "The Editors" of Bloomberg View (which is very significant in the news world). The content is a very reasonable explanation of AI concerns, though not novel to this audience.

http://www.bloombergview.com/articles/2014-08-10/intelligent-machines-scare-smart-people

Directionally this is definitely positive, though I'm not sure quite how to act on it. Does anyone have ideas? Perhaps one of the orgs (MIRI, FHI, CSER, FLI) could reach out and say hello to the editors?

Weekly LW Meetups

1 FrankAdamek 15 August 2014 08:21PM

[LINK] Speed superintelligence?

32 Stuart_Armstrong 14 August 2014 03:57PM

From Toby Ord:

Tool assisted speedruns (TAS) are when people take a game and play it frame by frame, effectively providing super reflexes and forethought, where they can spend a day deciding what to do in the next 1/60th of a second if they wish. There are some very extreme examples of this, showing what can be done if you really play a game perfectly. For example, this video shows how to win Super Mario Bros 3 in 11 minutes. It shows how different optimal play can be from normal play. In particular, on level 8-1, it gains 90 extra lives by a sequence of amazing jumps.

Other TAS runs get more involved and start exploiting subtle glitches in the game. For example, this page talks about speed running NetHack, using a lot of normal tricks, as well as luck manipulation (exploiting the RNG) and exploiting a dangling pointer bug to rewrite parts of memory.

Though there are limits to what AIs could do with sheer speed, it's interesting that great performance can be achieved with speed alone, that this allows different strategies from usual ones, and that it allows the exploitation of otherwise unexploitable glitches and bugs in the setup.

Public thread for researchers seeking existential risk consultation

0 snarles 14 August 2014 01:01PM

LW is one of the few informal places which take existential risk seriously.  Researchers can post here to describe proposed or ongoing research projects, seeking consultation on possible X-risk consequences of their work.  Commenters should write their posts with the understanding that many researchers prioritize interest first and existential risk/social benefit of their work second, but that discussions of X-risk may steer researchers to projects with less X-risk/more social benefit.

[LINK] AI risk summary published in "The Conversation"

6 Stuart_Armstrong 14 August 2014 11:12AM

A slightly edited version of "AI risk - executive summary" has been published in "The Conversation", titled "Your essential guide to the rise of the intelligent machines":

The risks posed to human beings by artificial intelligence in no way resemble the popular image of the Terminator. That fictional mechanical monster is distinguished by many features – strength, armour, implacability, indestructability – but Arnie’s character lacks the one characteristic that we in the real world actually need to worry about – extreme intelligence.

Thanks again for those who helped forge the original article. You can use this link, or the Less Wrong one, depending on the audience.

Ethical frameworks are isomorphic

5 lavalamp 13 August 2014 10:39PM

I have previously been saying things like "consequentialism is obviously correct". But this morning it occurred to me that this was gibberish.

I maintain that, for any consequentialist goal, you can construct a set of deontological rules which will achieve approximately the same outcome. The more fidelity you require, the more rules you'll have to make (so of course it's only isomorphic in the limit).

Similarly, for any given deontological system, one can construct a set of virtues which will cause the same behavior (e.g., "don't murder" becomes "it is virtuous to be the sort of person who doesn't murder")

The opposite is also true. Given a virtue ethics system, one can construct deontological rules which will cause the same things to happen. And given deontological rules, it's easy to get a consequentialist system by predicting what the rules will cause to happen and then calling that your desired outcome.

Given that you can phrase your desired (outcome, virtues, rules) in any system, it's really silly to argue about which system is the "correct" one.

Instead, recognize that some ethical systems are better for some tasks. Want to compute actions given limited computation? Better use deontological rules or maybe virtue ethics. Want to plan a society that makes everyone "happy" for some value of "happy"? Better use consequentialist reasoning.

Last thought: none of the three frameworks actually gives any insight into morality. Deontology leaves the question of "what rules?", virtue ethics leaves the question of "what virtues?", and consequentialism leaves the question of "what outcome?". The hard part of ethics is answering those questions.

(ducks before accusations of misusing "isomorphic")

What is the difference between rationality and intelligence?

10 Wei_Dai 13 August 2014 11:19AM

Or to ask the question another way, is there such a thing as a theory of bounded rationality, and if so, is it the same thing as a theory of general intelligence?

The LW Wiki defines general intelligence as "ability to efficiently achieve goals in a wide range of domains", while instrumental rationality is defined as "the art of choosing and implementing actions that steer the future toward outcomes ranked higher in one's preferences". These definitions seem to suggest that rationality and intelligence are fundamentally the same concept.

However, rationality and AI have separate research communities. This seems to be mainly for historical reasons, because people studying rationality started with theories of unbounded rationality (i.e., with logical omniscience or access to unlimited computing resources), whereas AI researchers started off trying to achieve modest goals in narrow domains with very limited computing resources. However rationality researchers are trying to find theories of bounded rationality, while people working on AI are trying to achieve more general goals with access to greater amounts of computing power, so the distinction may disappear if the two sides end up meeting in the middle.

We also distinguish between rationality and intelligence when talking about humans. I understand the former as the ability of someone to overcome various biases, which seems to consist of a set of skills that can be learned, while the latter is a kind of mental firepower measured by IQ tests. This seems to suggest another possibility. Maybe (as Robin Hanson recently argued on his blog) there is no such thing as a simple theory of how to optimally achieve arbitrary goals using limited computing power. In this view, general intelligence requires cooperation between many specialized modules containing domain specific knowledge, so "rationality" would just be one module amongst many, which tries to find and correct systematic deviations from ideal (unbounded) rationality caused by the other modules.

I was more confused when I started writing this post, but now I seem to have largely answered my own question (modulo the uncertainty about the nature of intelligence mentioned above). However I'm still interested to know how others would answer it. Do we have the same understanding of what "rationality" and "intelligence" mean, and know what distinction someone is trying to draw when they use one of these words instead of the other?

ETA: To clarify, I'm asking about the difference between general intelligence and rationality as theoretical concepts that apply to all agents. Human rationality vs intelligence may give us a clue to that answer, but isn't the main thing that I'm interested in here.

[LINK] 2014 Fields Medals and Nevanlinna Prize announced

1 Sarunas 13 August 2014 10:58AM

http://www.mathunion.org/general/prizes/2014

On August 13, 2014, at the opening ceremony of the [International Congress of Mathematicians](http://www.icm2014.org), the Fields Medals, the Nevanlinna Prize and several other prizes were announced.
A full list of awardees with short citations:

Fields medals:
Artur Avila

is awarded a Fields Medal for his profound contributions to dynamical systems theory, which have changed the face of the field, using the powerful idea of renormalization as a unifying principle.

Quanta Magazine on Artur Avila

Manjul Bhargava

is awarded a Fields Medal for developing powerful new methods in the geometry of numbers, which he applied to count rings of small rank and to bound the average rank of elliptic curves.

Quanta Magazine on Manjul Bhargava

Martin Hairer

is awarded a Fields Medal for his outstanding contributions to the theory of stochastic partial differential equations, and in particular for the creation of a theory of regularity structures for such equations.

Quanta Magazine on Martin Hairer

Maryam Mirzakhani

is awarded the Fields Medal for her outstanding contributions to the dynamics and geometry of Riemann surfaces and their moduli spaces.

Quanta Magazine on Maryam Mirzakhani

Nevanlinna Prize:
Subhash Khot

is awarded the Nevanlinna Prize for his prescient definition of the “Unique Games” problem, and leading the effort to understand its complexity and its pivotal role in the study of efficient approximation of optimization problems; his work has led to breakthroughs in algorithmic design and approximation hardness, and to new exciting interactions between computational complexity, analysis and geometry.

Quanta Magazine on Subhash Khot

Gauss Prize:
Stanley Osher

is awarded the Gauss Prize for his influential contributions to several fields in applied mathematics, and for his far-ranging inventions that have changed our conception of physical, perceptual, and mathematical concepts, giving us new tools to apprehend the world.

Chern Medal Award:
Phillip Griffiths

is awarded the 2014 Chern Medal for his groundbreaking and transformative development of transcendental methods in complex geometry, particularly his seminal work in Hodge theory and periods of algebraic varieties.

Leelavati Prize:
Adrián Paenza

is awarded the Leelavati Prize for his decisive contributions to changing the mind of a whole country about the way it perceives mathematics in daily life, and in particular for his books, his TV programs, and his unique gift of enthusiasm and passion in communicating the beauty and joy of mathematics.

In addition to that, Georgia Benkart was announced as the  2014 ICM Emmy Noether lecturer.
It might be interesting to note a curious fact about the new group of Fields medalists:

each of them [is] a notable first for the Fields Medal: the first woman and the first Iranian, Maryam Mirzakhani; the first Canadian, Manjul Bhargava; Artur Avila, the first Brazilian; and Martin Hairer, the first Austrian to win a Fields Medal.

However, this unusual diversity of nationalities does not necessarily translate into a corresponding diversity of institutions, since (according to wikipedia) three out of four winners work in (or at least are affiliated with) universities that have already had awardees in the past.

Some notes on the works by Fields medalists can be found on Terence Tao's blog.

A related discussion on Hacker News.

Truth vs Utility

1 Qwake 13 August 2014 05:45AM

According to Eliezer, there are two types of rationality. There is epistemic rationality, the process of updating your beliefs based on evidence to correspond to the truth (or reality) as closely as possible. And there is instrumental rationality, the process of making choices in order to maximize your future utility yield. These two slightly conflicting definitions work together most of the time as obtaining the truth is the rationalists' ultimate goal and thus yields the maximum utility. Are there ever times when the truth is not in a rationalist's best interest? Are there scenarios in which a rationalist should actively try to avoid the truth to maximize their possible utility? I have been mentally struggling with these questions for a while. Let me propose a scenario to illustrate the conundrum.

 

Suppose Omega, a supercomputer, comes down to Earth to offer you a choice. Option 1 is to live in a simulated world where you have infinite utility (in this world there is no pain, suffering, or death; it's basically a perfect world) and you are unaware that you are living in a simulation. Option 2 is that Omega will truthfully answer one question about absolutely any subject pertaining to our universe, with no strings attached. You can ask about the laws governing the universe, the meaning of life, the origin of time and space, whatever, and Omega will give you an absolutely truthful, knowledgeable answer. Now, assuming all of these hypotheticals are true, which option would you pick? Which option should a perfect rationalist pick? Does the potential of asking a question whose answer could greatly improve humanity's knowledge of our universe outweigh the benefits of living in a perfect simulated world with unlimited utility? There are probably a lot of people who would object outright to living in a simulation because it's not reality or the truth. Well, let's consider the simulation in my hypothetical conundrum for a second. It's a perfect reality with unlimited utility potential, and you are completely unaware that you are in a simulation in that world. Aside from the unlimited utility part, that sounds a lot like our reality. There are no signs of our reality being a simulation, and all (most) of humanity is convinced that our reality is not a simulation. Therefore, the only difference that really matters between the simulation in Option 1 and our reality is the unlimited utility potential that Option 1 offers. If there is no evidence that a simulation is not reality, then the simulation is reality for the people inside it. That is what I believe, and that is why I would choose Option 1. The infinite utility of living in a perfect reality outweighs almost any increase in utility I could contribute to humanity.

I am very interested in which option the Less Wrong community would choose (I know Option 2 is kind of arbitrary; I just needed an option for people who wouldn't want to live in a simulation). As this is my first post, any feedback or criticism is appreciated. Also, any more information on the topic of truth vs utility would be very helpful. Feel free to downvote me to oblivion if this post was stupid, didn't make sense, etc. It was simply an idea that I found interesting that I wanted to put into writing. Thank you for reading.

If interventions changing population size are cheap, they may be the best option independent of your population ethics

5 ericyu3 13 August 2014 03:03AM

In this post I'll explain why you might want to assist altruistic interventions that change the size of the world population regardless of how valuable you think additional lives are. The argument relies on a combination of 2 population-changing interventions that combine to produce the effect of a non-population-changing intervention, but at a lower cost.

Suppose you can donate to the following 3 interventions:

  • "Growth": increase one future person's income from $500/yr to $5,000/yr for $10,000
  • "Plus": cause one more person to be born in a middle-income country (income ~$5,000/yr) for $6,000
  • "Minus": cause one less person to be born in a poor country (income ~$500/yr) for $1,000
Assume that the interventions are independent, and that donating multiples of the cost produces multiples of the effect without diminishing returns.

The cost estimates are completely made up; the point of this post is to explain what happens if the total cost of Plus and Minus is less than the cost of Growth. The cost of Plus is probably the least well-known, since it's the least popular of the 3. Also, in the real world, you would probably want to spread the impact of $10,000 across at least several people instead of increasing one person's income by 10x, but I think the post makes more sense this way. If you know of more reasonable estimates for the costs, please post them!

If you donate to Plus and Minus, the total effect is the same as the effect of Growth in many ways - in the future, there is one more person with income $5,000, one less person with income $500, and the size of the world population remains the same. In my last post, I asked about whether consequentialists actually view the two outcomes as equivalent, and people seemed to think yes, so it's reasonable to say that Plus+Minus is just as beneficial as Growth. But Plus+Minus only costs $7,000 while Growth costs $10,000, so regardless of your population ethics, you should prefer donating to Plus+Minus.

But unless your population ethics are "fine-tuned" to make Plus and Minus equally cost-effective, one of them will be clearly better (more cost-effective) than the other. If you think Minus is better than Plus, then Minus is better than Plus+Minus, which is better than Growth, so you should donate exclusively to Minus. The same argument applies if you think Plus is better than Minus. If you donate to only one of Plus and Minus, you will change the size of the world population. So this seems to show that if population-changing interventions are cheap, you should act to change population size regardless of what you think about population ethics. Even if you are very uncertain what the value of a new life is, you can still use your best guess to decide between Plus and Minus as long as you are risk-neutral about how much good you do. 

Numerical example: suppose that Growth yields 100 "points" of benefit, where "point" is an arbitrary unit. Then regardless of population ethics, Plus+Minus yields 100 points as well. How these points are distributed between Plus and Minus depends on your population ethics, however. If you are a total utilitarian, you might say that Minus is worth -20 points and Plus is worth 120 points, and if you're a negative utilitarian, you might say that Minus is worth 150 points and Plus -50 points. If you're an average utilitarian, you might say that Minus is worth 70 and Plus is worth 30. But these all sum up to 100, and they would all choose Plus or Minus over Growth: Plus for the total utilitarian and Minus for the others.
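A small sketch that simply recomputes this numerical example with the post's made-up costs and points, showing that every view ends up preferring Plus or Minus over Growth on points per dollar:

```python
# The post's made-up costs and "points" under three example ethical views.
costs = {"Growth": 10_000, "Plus": 6_000, "Minus": 1_000}

points = {
    "total utilitarian":    {"Growth": 100, "Plus": 120, "Minus": -20},
    "negative utilitarian": {"Growth": 100, "Plus": -50, "Minus": 150},
    "average utilitarian":  {"Growth": 100, "Plus": 30,  "Minus": 70},
}

for view, pts in points.items():
    best = max(pts, key=lambda name: pts[name] / costs[name])
    print(f"{view}: best points-per-dollar is {best}")
# total utilitarian -> Plus; negative and average utilitarians -> Minus.
# In every case a population-changing intervention beats Growth.
```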

What might be wrong with this reasoning? I can think of a few things:
  1. Plus+Minus is more costly than Growth in reality (quite likely)
  2. Growth and Plus+Minus are actually not equivalent, since Growth actually helps a particular person (again, see my last post)
I'm really curious about what the costs of economic-growth and population interventions are. I'd guess that population interventions would be competitive with unconditional cash transfer programs like GiveDirectly, but I don't know that much about their effectiveness, and I don't know whether there are economic interventions that are more cost-effective than cash transfers. Here are some population interventions that can be done or funded by individuals:
  • Education about contraception
  • Having children yourself (cost varies from person to person)
  • Paying others to have children
  • Subsidizing contraception
  • Subsidizing surrogacy (there are replaceability issues here, but I couldn't find any estimates of supply/demand elasticity)
  • Being a surrogate yourself (doesn't cost you any money, but can be unpleasant, so the cost varies from person to person)
Have people made estimates of how cost-effective these are? The Plus+Minus vs. Growth hypothetical doesn't work if Growth is actually cheaper, so I want to know if I'm thinking too much about something irrelevant!

 

Questions on the human path and transhumanism.

-1 HopefullyCreative 12 August 2014 08:34PM

I had a waking nightmare. I know some of you reading this just went "Oh great, here we go..." but bear with me. I am a man who loves to create and build; it is what I have dedicated my life to. One day, because of the Less Wrong community, I was prompted to ask: "What if they are successful in creating an artificial general intelligence whose intellect dwarfs our own?"

My mind raced and imagined the creation of an artificial mind designed to be creative, subservient to man, but also to anticipate our needs and desires. In other words, I imagined what would happen if current AGI engineers accomplished the creation of the greatest thing ever. Of course this machine would see how we loathe tiresome repetitive work, and would design and build for us a host of machines to do it for us. But then the horror of the implications set in. The AGI will become smarter and smarter through its own engineering, and soon it will anticipate human needs and produce things no human being could dream of. Suddenly man has no work to do; there is no back-breaking labor to be done, nor even the creative, glorious work of engineering, exploring and experimentation. Our army of AGIs has robbed us of that.

At this point I must be clear that this is not a statement amounting to "Let's not make AGI," for we all know AGI is coming. Then what is my point in expressing this? To lay out a train of thought that ends in questions that have yet to be answered, in the hope that in-depth discussion may shed some light on them.

I realized that the only meaning left for man in a world run by AGI would be to order the AGI to make man himself better. Instead of focusing on having the AGI design a world for us, we should use that intellect, which we cannot match before modification, to design a means of putting us on its own level. In other words, the goal of creating an AGI should not be to create an AGI, but to make a tool so powerful that we can use it to command man to be better. Now, I'm quite certain the audience here is well aware of transhumanism. However, there are some important questions to be answered on the subject:

Mechanical or biological modification? I know many would think, "Are you stupid?! Of course cybernetics would be better than genetic alteration!" Yet the balance of advantages is not as clear as one would think. Let's consider cybernetics for a moment: many implants would require maintenance, and they would need to be designed, manufactured, and installed, making them quite expensive. Initially, and possibly for decades, only the rich could afford such things, creating a titanic rift in power. That power gap would of course widen the already substantial resentment between ordinary people and the rich, creating political and social instability we can ill afford in a world with the destructive power of nuclear arms.

Genetic alteration comes with a whole new set of problems: a titanic realm of genetic variables in which tweaking one thing may unexpectedly alter and damage another. Research in this area could take much longer because of the experimentation required. The advantage, however, is that genetic alteration can be accomplished with the help of viruses in controlled environments. There would be no mechanic required to maintain the new being we have created, and if designed properly the modifications could be passed down to the next generation. So instead of paying to upgrade each successive generation, we would only have to pay to upgrade a single generation. The rich would obviously still be the first to afford the procedure, but it could spread quickly across the globe thanks to its potentially lower cost once development costs have been covered. The problem is that we would be fundamentally, and possibly irreversibly, altering our genetic code. It's possible to keep a gene bank as a memory of what we were, in the hope that we could undo the changes and revert if the worst happened, yet that is not the greatest problem with this path. We cannot even get the public to accept the concept of genetically altered crops; how can we get the world to accept its genes being altered? The instability created by trying to push such a thing too hard, or the power gap between those who have upgraded and those who have not, could again be globally dangerous.

So now I ask you, the audience: genetic or cybernetic? What are the problems with each path, and how would we solve the political problems associated with them?

Distinction between "creating/preventing future lives" and "improving future lives that are already expected to exist"?

5 ericyu3 12 August 2014 06:29AM

I'm writing something (mostly for myself right now) about how, if you're somewhat of a utilitarian, a very wide range of population ethics principles (total utilitarianism, average utilitarianism, and critical-level utilitarianism with any critical level) makes the population size of some countries strongly non-neutral, in the sense that changing the number of people in those countries is worth a surprisingly large reduction in average income (a >2% income reduction for a 1% population increase or decrease).

Part of what I wrote used an assumption that is shared by all the utilitarian population ethics principles I know of: if you prevent the birth of someone with utility X and cause the birth of someone else with utility Y (with Y > X), that's just as good as causing a not-yet-born person to have utility Y instead of X. In fact, population ethics is not needed to make this comparison, since neither outcome changes the population size. But it's not too far-fetched to think that the two situations are different: in the first one, the Y-utility person is a different person from the X-utility person, while in the second one they could be argued to be the same person. Good arguments have been made that the second outcome actually produces a different person, because very small things, like which egg/sperm you came from, can change your identity (Parfit's Nonidentity Problem). So I think my assumption is reasonable, but I'm concerned that I don't know what the best arguments against it are.
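As a quick sanity check of that assumption, here is a minimal sketch with made-up numbers (a background population with utilities 50 and 55, X = 40, Y = 60). Because total, average, and critical-level utilitarianism evaluate only the resulting profile of utilities, and not which person holds them, they assign exactly the same gain to both situations.

    # Both situations end with the same utility profile (background plus one
    # person at utility Y); the counterfactual in both is background plus one
    # person at utility X. All numbers are made up for illustration.
    background = [50, 55]
    X, Y = 40, 60

    situation_1 = background + [Y]   # a *different* person is born with utility Y
    situation_2 = background + [Y]   # the *same* person gets utility Y instead of X
    counterfactual = background + [X]

    theories = {
        "total":                 lambda u: sum(u),
        "average":               lambda u: sum(u) / len(u),
        "critical-level (c=30)": lambda u: sum(x - 30 for x in u),
    }

    for name, value in theories.items():
        gain_1 = value(situation_1) - value(counterfactual)
        gain_2 = value(situation_2) - value(counterfactual)
        assert gain_1 == gain_2  # identity is invisible to these theories
        print(f"{name}: gain over the counterfactual is {gain_1:.2f} in both situations")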

What are the most well-known utilitarian or non-utilitarian consequentialist theories that make a distinction between "different future people" and "the same future person"? Is there a consistent way to make this distinction "fuzzy", so that an event like being conceived by a different sperm is less "identity-changing" than being born on the other side of the world to completely different parents?

[LINK] Engineering General Intelligence (the OpenCog/CogPrime book)

9 Mark_Friedenbach 11 August 2014 07:35PM

Ben Goertzel has made available a pre-print copy of his book Engineering General Intelligence (Vol1, Vol2). The first volume is basically the OpenCog organization's roadmap to AGI, and the second volume is a 700-page overview of the design.

Open thread, 11-17 August 2014

4 David_Gerard 11 August 2014 10:12AM

Previous open thread

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Ethical Choice under Uncertainty

2 Anders_H 10 August 2014 10:13PM


Most discussions about utilitarian ethics are attempts to determine the goodness of an outcome. For instance, discussions may focus on whether it would be ethical to increase total utility by increasing the total number of individuals but reducing their average utility. Or, one could argue about whether we should give more weight to those who are worst off when we aggregate utility over individuals.

These are all important questions. However, even if they were answered to everyone's satisfaction, the answers would not be sufficient to guide the choices of agents acting under uncertainty. To elaborate,  I believe textbook versions of utilitarianism are unsatisfactory for the following reasons: 

  1. Ethical theories that don't account for the agent's beliefs will have absurd consequences such as claiming that it is unethical to rescue a drowning child if the child goes on to become Hitler.  Clearly, if we are interested in judging whether the agent is acting ethically, the only relevant consideration is his beliefs about the consequences at the time the choice is made. If we define "ethics" to require him to act on information from the future, it becomes impossible in principle to act ethically.
  2. In real life, there will be many situations where the agent makes a bad choice because he has incorrect beliefs about the consequences of his actions. Most people, if asked to judge the morality of a person who has pushed a fat man to his death, would find it important to know whether the man believed he could save the lives of five children by doing so. Whether that belief is correct is not ethically relevant: there is a difference between stupidity and immorality.
  3. The real choices are never of the type "If you choose A, the fat man dies with probability 1, whereas if you choose B, the five children die with probability 1". Rather, they are of the type "If you choose A, the fat man dies with probability 0.5, the children die with probability 0.25, and they all die with probability 0.25". Choosing between such options will require a formalization of the concept of risk aversion as an integral component of the ethical theory.

I will attempt to fix this by providing the following definition of ethical choice, which is based on the same setup as von Neumann-Morgenstern expected utility theory:

An agent is making a decision and can choose from a choice set A, with elements (a1, a2, ..., an). The possible outcome states of the world are contained in the set W, with elements (w1, w2, ..., wm). The agent is uncertain about the consequences of his choice; he is not able to perfectly predict whether choosing a1 will lead to state w1, w2, or wm. In other words, for every element of the choice set, he has a separate subjective probability distribution ("prior") on W.

He also has a cardinal social welfare function f over possible states of the world.    The social welfare function may have properties such as risk aversion or risk neutrality over attributes of W.   Since the choice made by the agent is one aspect of the state of the world, the social welfare function may include terms for A. 

We define that the agent is acting "ethically" if he chooses the element of the choice set that maximizes the expected value of the social welfare function, under the agent's beliefs about the probability of each possible state of the world that could arise under that action:

Max over a in A of: Σw Pr(w | a) * f(w, a)

Note here that "risk aversion" corresponds to the curvature (the second derivative) of the social welfare function. For details, I will unfortunately have to refer the reader to a textbook on decision theory, such as Notes on the Theory of Choice.
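Here is a minimal sketch of the definition in Python. The states, priors, and welfare numbers are illustrative inventions of mine (loosely based on the fat-man example above), not part of the proposal itself.

    # Ethical choice as the argmax of expected social welfare under the
    # agent's own priors. All numbers below are made up for illustration.

    # One subjective probability distribution ("prior") over states per action.
    priors = {
        "push":       {"fat man dies": 0.9, "children die": 0.05, "everyone lives": 0.05},
        "don't push": {"fat man dies": 0.0, "children die": 0.8,  "everyone lives": 0.2},
    }

    # A cardinal social welfare function f(w, a). Here it depends only on the
    # state, but it could also include terms for the action itself.
    def welfare(state, action):
        return {"fat man dies": -1.0, "children die": -5.0, "everyone lives": 0.0}[state]

    def expected_welfare(action):
        return sum(p * welfare(w, action) for w, p in priors[action].items())

    ethical_choice = max(priors, key=expected_welfare)
    for a in priors:
        print(f"{a}: expected welfare = {expected_welfare(a):.2f}")
    print("Ethical choice under these beliefs:", ethical_choice)

On these made-up beliefs the definition picks "push"; change the prior or the welfare function and the recommendation changes with it.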

The advantage of this setup is that it allows us to define the ethical choice precisely, in terms of the intentions and beliefs of the agent. For example, if an individual makes a bad choice because he honestly has a bad prior about the consequences of his choice, we interpret him as acting stupidly, but not unethically. However, ignorance is not a complete "get out of jail free" card: one element of the choice set is always "seek more information / update your prior". If your true prior says that you can maximize the expected social welfare function by first gathering information and updating your prior, then the ethical choice is to seek more information (this is analogous to the decision-theoretic concept of "value of information").
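And here is a minimal sketch (again with made-up numbers) of how "seek more information" earns its place in the choice set: gathering information is the ethical choice whenever its expected gain, the value of information, exceeds its cost.

    # Value of (perfect) information with made-up numbers: two equally likely
    # states, and two actions that each do well in exactly one state.
    prior = {"s1": 0.5, "s2": 0.5}
    welfare = {("A", "s1"): 10, ("A", "s2"): 0,
               ("B", "s1"): 0,  ("B", "s2"): 10}
    actions = ["A", "B"]

    def expected_welfare(action, beliefs):
        return sum(p * welfare[(action, s)] for s, p in beliefs.items())

    # Best the agent can do acting on the prior alone.
    best_without_info = max(expected_welfare(a, prior) for a in actions)

    # Best the agent can do if a free, perfectly informative signal reveals
    # the state first: in each state, pick the best action for that state.
    best_with_info = sum(p * max(welfare[(a, s)] for a in actions)
                         for s, p in prior.items())

    value_of_information = best_with_info - best_without_info
    print(f"Acting on the prior: {best_without_info}")        # 5.0
    print(f"After observing the state: {best_with_info}")     # 10.0
    print(f"Value of information: {value_of_information}")    # 5.0
    # "Seek more information" is the ethical choice whenever this value
    # exceeds the cost of obtaining the information.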

At this stage, the "social welfare function" is completely unspecified. Therefore, this definition places only minor constraints on what we mean by the word "ethics". Some ethical theories are special cases of this definition of ethical choice. For example, deontology is the special case where the social welfare function f(w, a) is independent of the state of the world and can be simplified to f(a). (If the social welfare function is constant over W, the summation over the prior cancels out.)

One thing that is ruled out by the definition is outcome-based consequentialism, where an agent is defined to act ethically if his actions lead to good realized outcomes. Note that under this type of consequentialism, at the time a decision is made it is impossible for an agent to know what the correct choice is, because the ethical choice will depend on random events that have not yet taken place. This definition of ethics excludes strategies that cannot be followed by a rational agent acting solely on information from the past. This is a feature, not a bug.

We now have a definition of acting ethically. However, it is not yet very useful: We have no way of knowing what the social welfare function looks like. The model simply rules out some pathological ethical theories that are not usable as decision theories, and gives us an appealing definition of ethical choice that allows us to distinguish "ignorance/stupidity" from "immorality". 

If nobody points out any errors that invalidate my reasoning, I will write another installment with some more speculative ideas about how we can attempt to determine what the social welfare function f(w, a) looks like.

 

--

I have no expertise in ethics, and most of my ideas will be obvious to anyone who has spent time thinking about decision theory. From my understanding of Cake or Death, it looks like similar ideas have been explored here previously, but with additional complications that are not necessary for my argument. I am puzzled by the fact that this line of thinking is not a central component of most ethical discussions, because I don't believe it is possible for a non-Omega agent to follow an ethical theory that does not explicitly account for uncertainty. My intuition is that, unless there is a flaw in my reasoning, this is a neglected point worth drawing people's attention to, in a simple form with as few complications as possible. Hence this post.

This is a work in progress; I would very much appreciate feedback on where it needs more work.

Some thoughts on where this idea needs more work:

  • While agents who have bad priors about the consequences of their actions are defined to act stupidly and not unethically, I am currently unclear about how to interpret the actions of agents who have incorrect beliefs about the social welfare function.  
  • I am also unsure whether this setup excludes some reasonable forms of ethics, such as a scenario where we model the agent as simultaneously trying to optimize the social welfare function and his own utility function. In such a setup, we may want a definition of ethics that involves the rate of substitution between the two things he is optimizing. However, it is possible that this can be handled within my model by finding the right social welfare function.

 

Meditations on Löb's theorem and probabilistic logic [LINK]

8 Quinn 10 August 2014 09:41PM

A post on my own blog following a MIRIx workshop from two weekends ago.

http://qmaurmann.wordpress.com/2014/08/10/meditations-on-l-and-probabilistic-logic/

Reproducing the intro:

This post is a second look at The Definability of Truth in Probabilistic Logic, a preprint by Paul Christiano and other Machine Intelligence Research Institute associates, which I first read and took notes on a little over one year ago.

In particular, I explore relationships between Christiano et al’s probabilistic logic and stumbling blocks for self-reference in classical logic, like the liar’s paradox (“This sentence is false”) and in particular Löb’s theorem.

The original motivation for the ideas in this post was an attempt to prove a probabilistic version of Löb’s theorem to analyze the truth-teller sentences (“This sentence is [probably] true”) of probabilistic logic, an idea that came out of some discussions at a MIRIx workshop that I hosted in Seattle.

Every Paul needs a Jesus

9 PhilGoetz 10 August 2014 07:13PM

My take on some historical religious/social/political movements:

  • Jesus taught a radical and highly impractical doctrine of love and disregard for one's own welfare. Paul took control of much of the church that Jesus' charisma had built, and reworked this into something that could function in a real community, re-emphasizing the social mores and connections that Jesus had spent so much effort denigrating, and converting Jesus' emphasis on radical social action into an emphasis on theology and salvation.
  • Marx taught a radical and highly impractical theory of how workers could take over the means of production and create a state-free Utopia. Lenin and Stalin took control of the organizations built around those theories, and reworked them into a strong, centrally-controlled state.
  • Che Guevara (I'm ignorant here and relying on Wikipedia; forgive me) joined Castro's rebel group early on, rose to the position of second in command, was largely responsible for the military success of the revolution, and had great motivating influence due to his charisma and his unyielding, idealistic, impractical ideas. It turned out his idealism prevented him from effectively running government institutions, so he had to go looking for other revolutions to fight in while Castro ran Cuba.
  • Lauren Faust envisioned a society built on friendship, toleration, and very large round eyes, and then Hasbro... naw, just kidding. (Mostly.)

The best strategy for complex social movements is not honest rationality, because rational, practical approaches don't generate enthusiasm. A radical social movement needs one charismatic radical who enunciates appealing, impractical ideas, and another figure who can appropriate all of the energy and devotion generated by the first figure's idealism, yet not be held to their impractical ideals. It's a two-step process that is almost necessary, to protect the pretty ideals that generate popular enthusiasm from the grit and grease of institution and government. Someone needs to do a bait-and-switch. Either the original vision must be appropriated and bent to a different purpose by someone practical, or the original visionary must be dishonest or self-deceiving.

continue reading »

Tarski's truth sentences and MIRI's AI

1 halcyon 09 August 2014 07:28PM

(Disclaimer: I have no training in or detailed understanding of these subjects. I first heard of Tarski from the Litany of Tarski, and then I Googled him.)

In his paper The Semantic Conception of Truth, Tarski says that he analyzes the claim, '"Snow is white" is true if and only if snow is white' as being expressed in two different languages. The whole claim in single quotes is expressed in a metalanguage, while "snow is white" is in another language.

For Tarski's proof to succeed, it is (if I understood him correctly) both necessary and sufficient for the metalanguage to be logically richer than the other language in certain ways. What these ways are is, according to Tarski, difficult to make general statements about without actually following his very involved technical proof.

If I remember correctly, this implies that the two languages cannot be identical. Tarski seems to be of the opinion that for a given language satisfying specific conditions, concepts of truth, synonymy, meaning, etc. can be defined for it in a metalanguage that is richer than it in logical devices, establishing a hierarchy of truth defining languages.

My main question is, since MIRI aims to mathematically prove Friendliness in recursively self-improving AI, is "essential richness" in language handling ability something we should expect to see increasing in the class of AIs MIRI is interested in, or is that unnecessary for MIRI's purposes? I understand that semantically defining truth and meaning may not be important either way. My principal motive is curiosity.

Why humans suck: Ratings of personality conditioned on looks, profile, and reported match

9 PhilGoetz 09 August 2014 06:48PM

The recent OKCupid blog post, which gwern mentioned in the Media Open Thread, investigated the impact of three different factors on users' perceptions of each other: authority (reported match %), profile text (present or absent), and looks.

continue reading »

The greatest good for the greatest number - starting soonest, or ending last, or lasting longest?

-3 Trevor_Blake 09 August 2014 02:15AM

The first greatest good for the greatest number will start "first" (by whatever measurement is applied), but it ends before the second greatest good ends and doesn't last as long (in total) as the third greatest good.

The second greatest good for the greatest number will end "last" (by whatever measurement is applied), but it does not last as long (in total) as the third greatest good and doesn't start as soon as the first greatest good.

The third greatest good for the greatest number lasts the longest (in total), but ends before the second greatest good ends and starts after the first greatest good starts.
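One made-up set of timelines satisfying all three descriptions may make the comparison easier to see (here "total" is the total time the good is in effect, which can be less than the span from start to end if the good is intermittent):

    # Hypothetical timelines, in years, matching the three descriptions above.
    goods = {
        "first":  {"start": 0, "end": 10, "total": 10},
        "second": {"start": 8, "end": 40, "total": 30},
        "third":  {"start": 2, "end": 38, "total": 35},
    }
    g1, g2, g3 = goods["first"], goods["second"], goods["third"]

    assert g1["start"] < g2["start"] and g1["start"] < g3["start"]  # starts first
    assert g1["end"] < g2["end"] and g1["total"] < g3["total"]      # but ends before the second and is shorter than the third
    assert g2["end"] > g1["end"] and g2["end"] > g3["end"]          # ends last
    assert g2["total"] < g3["total"] and g2["start"] > g1["start"]  # but is shorter than the third and starts after the first
    assert g3["total"] > g1["total"] and g3["total"] > g2["total"]  # lasts longest in total
    assert g3["end"] < g2["end"] and g3["start"] > g1["start"]      # but ends before the second and starts after the first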

What within utilitarianism allows for selecting between these three greatest goods for the greatest number?
