
Why people want to die

50 PhilGoetz 24 August 2015 08:13PM

Over and over again, someone says that living for a very long time would be a bad thing, and then some futurist tries to persuade them that their reasoning is faulty.  The futurist tells them that they think that way now, but that they'll change their minds when they're older.

The thing is, I don't see that happening.  I live in a small town full of retirees, and those few I've asked about it are waiting for death peacefully.  When I ask them about their ambitions, or things they still want to accomplish, they have none.

Suppose that people mean what they say.  Why do they want to die?


I have just donated $10,000 to the Immortality Bus, which was the most rational decision of my life

0 turchin 18 July 2015 01:13PM

I have a non-zero probability of dying next year. At my age of 42 it is not less than 1 per cent, and probably more. There are many investments I could make that would slightly lower my chance of dying – from a healthier lifestyle to a cryonics contract – and I have made many of them.

From an economic point of view, death means, at a minimum, losing all your capital.

If my net worth is something like one million dollars (mostly real estate and art), and I have a 1 per cent chance of dying, that is equivalent to losing 10k a year. In fact it is more, because death itself is so unpleasant that it has a large negative monetary value, and I should also include the cost of lost opportunities.
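
As a quick sanity check of that arithmetic, here is a minimal sketch in Python. The net worth and death probability come from the paragraph above; the extra "unpleasantness" and opportunity-cost terms are hypothetical placeholders of my own, included only to show how they would inflate the expected loss:

```python
net_worth = 1_000_000   # from the post: ~$1M, mostly real estate and art
p_death = 0.01          # from the post: at least a 1% annual chance of dying at 42

expected_capital_loss = p_death * net_worth
print(expected_capital_loss)   # 10000.0 -- the "10k a year" figure above

# The post argues the true cost is higher. These two terms are purely
# hypothetical placeholders for "death itself is unpleasant" and "lost
# opportunities", chosen only to illustrate the direction of the effect.
unpleasantness_cost = 500_000
lost_opportunities = 200_000
expected_total_loss = p_death * (net_worth + unpleasantness_cost + lost_opportunities)
print(expected_total_loss)     # 17000.0
```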

Once I had a discussion with Vladimir Nesov about which is better: to fight for immortality, or to create a Friendly AI which will explain what is really good. My position was that immortality is better because it is measurable, knowable, and has instrumental value for most other goals, and also includes preventing the worst thing on earth, which is death. Nesov said (as I remember it) that personal immortality does not matter as much as the total value of humanity's existence, and moreover that his own existence has little value at all; all we need to do is create a Friendly AI. I find his words contradictory, because if his existence does not matter, then any human's existence also doesn't matter, since there is nothing special about him.

But later I concluded that the best approach is to make bets that raise the probability of my personal immortality, existential risk prevention, and the creation of Friendly AI simultaneously. It is easy to imagine situations where research into personal immortality – such as creating technology for delivering longevity genes – would conflict with the goal of existential risk reduction, because the same technology could be used to create dangerous viruses.

The best way here is to invest in creating a regulating authority which will be able to balance these needs, and it can't be a Friendly AI, because such regulation is needed before the AI is created.

That is why I think the US needs a Transhumanist president: a real person whose value system I can understand and support. And that is why I support Zoltan Istvan's 2016 campaign.

The Exponential Technologies Institute and I donated 10,000 USD to the Immortality Bus project. This bus will be the start of the presidential campaign of the author of "The Transhumanist Wager". Seven film crews have agreed to cover the event. It will generate a great deal of publicity and cover the topics of immortality, aging research, Friendly AI and x-risk prevention, and it will help to raise more funds for this type of research.


LW Women Entries- Creepiness

7 [deleted] 28 April 2013 03:43PM

Standard Intro

The following section will be at the top of all posts in the LW Women series.

Several months ago, I put out a call for anonymous submissions by the women on LW, with the idea that I would compile them into some kind of post.  There is a LOT of material, so I am breaking them down into more manageable-sized themed posts.

Seven women replied, totaling about 18 pages. 

Standard Disclaimer- Women have many different viewpoints, and just because I am acting as an intermediary to allow for anonymous communication does NOT mean that I agree with everything that will be posted in this series. (It would be rather impossible to, since there are some posts arguing opposite sides!)

To the submitters- If you would like to respond anonymously to a comment (for example if there is a comment questioning something in your post, and you want to clarify), you can PM your message and I will post it for you. If this happens a lot, I might create a LW_Women sockpuppet account for the submitters to share.

Please do NOT break anonymity, because it lowers the anonymity of the rest of the submitters.

Submitter D

The class that a lot of creepiness falls into for me is not respecting my no.  Someone who doesn't respect a small no can't be trusted to respect a big one, when we're alone and I have fewer options to enforce it besides physical strength.  Sometimes not respecting a no can be a matter of omission or carelessness, but I can't tell which.

While I'm in doubt, I'm not assuming the worst of you, but I'm on edge and alertly looking for new data in a way that's stressful for me and makes it hard for either of us to enjoy the encounter.  And I'm sure as heck not going anywhere alone with you.

I've written up some short anecdotes that involved someone not respecting or constraining a no.  They're at a range of intensities.

Joining someone for the first time and sitting down in a spot that blocks their exit from the conversation.  Sometimes unavoidable (imagine joining people at a booth) but limits my options to exit and enforce a no.

Blocking an exit less literally by coming across as the kind of person who can't end a conversation (follows you between circles at a party, limits your ability to talk to other people, etc).

Asking for a number instead of offering yours.  If I want to call you, I will, but when you ask for my number, I can't stop you calling or harassing me in the future.

Asking for a number while blocking my exit.  This has happened to me in cabs when I take them late at night.  It's bad to start with because I can't exit a moving car and I can't control the direction it's going in.  One driver waited until the end of the ride, asked for my number, and then handed my receipt back and demanded it when I didn't comply.  I had to write down a fake one to get out without escalating.  This is why I'm torn between walking through a deserted part of town or taking a cab alone at night.

Talking about other girls who gave you "invalid" nos.  Anything on the order of "She was flirting with me all night and then she wouldn't put out/call me back/meet for coffee."  Responding positively to you is not a promise to do anything else, and it's not leading you on.  This kind of assumption is why I'm a little hesitant to be warm to a strange guy if I'm in a place where it would be hard to enforce a no.

Withholding information to constrain my no.  The culprit here was a girl and the target was a friend of mine.  The two of them had gone on a date and set a time to meet again and possibly have sex.  The girl had a boyfriend, but was in some kind of open relationship and had informed my friend of this fact.  What she didn't disclose was that the boyfriend was back in town the night of their second date.  She waited to reveal that until my friend had turned up.  My friend still had the power to say no, and did, but there was nothing preventing the girl from disclosing that data earlier, when my friend could have postponed or demurred by text.  Waiting til she'd already shlepped to the apartment put more pressure on her.  It suggested the girl would rather rig the game than respect a no.

Overstepping physical boundaries and then assigning the blame to me.  You might go for a kiss in error or touch me in a way I'm not comfortable with.  Say sorry and move on.  Don't say, "You looked like you wanted to be kissed."  That implies my no is less valid if you're confused.  

Can You Give Support or Feedback for My Program to Alleviate Poverty?

10 Brendon_Wong 25 June 2015 11:18PM

Hi LessWrong,

Two years ago, when I travelled to Belize, I came up with an idea for a self-sufficient, scalable program to address poverty. I saw how many people in Belize were unemployed or getting paid very low wages, but I also saw how skilled they were, a result of English being the national language and a mandatory education system. Many Belizeans have a secondary/high school education, and the vast majority have at least a primary school education and can speak English. I thought to myself, "it's too bad I can't teleport Belizeans to the United States, because in the U.S., they would automatically be able to earn many times the Belizean minimum wage with their existing skills."

But I knew there was a way to do it: "virtual teleportation." My solution involves using computer and internet access in conjunction with training and support to connect the poor with high paying international work opportunities. My tests of virtual employment using Upwork and Amazon Mechanical Turk show that it is possible to earn at least twice the minimum wage in Belize, around $3 an hour, working with flexible hours. This solution is scalable because there is a consistent international demand for very low wage work (relatively speaking) from competent English speakers, and in other countries around the world like South Africa, many people matching that description can be found and lifted out of poverty. The solution could become self-sufficient because running a virtual employment enterprise or taking a cut of the earnings of members using virtual employment services (as bad as that sounds) can generate enough income to pay for the relatively low costs of monthly internet and the one-time costs of technology upgrades.
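
To make the self-sufficiency claim a bit more concrete, here is a rough break-even sketch. Only the roughly $3-an-hour earnings figure comes from the post; every other number (hours worked, fee percentage, members per site, internet cost) is a hypothetical placeholder of mine, not actual program data:

```python
# Rough break-even sketch. Only the ~$3/hour earnings figure comes from the
# post; the other numbers are hypothetical placeholders, not program data.
hourly_earnings = 3.0          # roughly 2x the Belize minimum wage, per the post
hours_per_month = 80           # assumed part-time workload per member
program_cut = 0.10             # assumed 10% fee on member earnings
members_per_site = 10          # assumed members sharing one internet connection
internet_cost_monthly = 50.0   # assumed monthly cost of that connection

monthly_fee_income = hourly_earnings * hours_per_month * program_cut * members_per_site
surplus = monthly_fee_income - internet_cost_monthly
print(f"fees per site: ${monthly_fee_income:.0f}/month, "
      f"internet: ${internet_cost_monthly:.0f}/month, surplus: ${surplus:.0f}/month")
# With these placeholders the fees ($240/month) cover the connection and leave
# a surplus that can go toward one-time technology upgrade costs.
```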

If you have any feedback, comments, suggestions, I would love to hear about it in the comments section. Feedback on my fundraising campaign at igg.me/at/bvep is also greatly appreciated.

If you are thinking about supporting the idea, my team and I need your help to make this possible. It may be difficult for us to reach our goal, but every contribution greatly increases the chances our fundraiser and our program will be successful, especially in the early stages. All donations are tax-deductible, and if you’d like, you can also opt-in for perks like flash drives and t-shirts. It only takes a moment to make a great difference: igg.me/at/bvep.

Thank you for reading!

Giving What We Can needs your help!

23 RobertWiblin 29 May 2015 04:30PM

As you probably know, Giving What We Can exists to move donations to the charities that can most effectively help others. Our members take a pledge to give 10% of their incomes for the rest of their lives to the most impactful charities. Along with other extensive resources for donors such as GiveWell and OpenPhil, we produce and communicate, in an accessible way, research to help members determine where their money will do the most good. We also impress upon members and the general public the vast differences between the best charities and the rest.

Many LessWrongers are members or supporters, including of course the author of Slate Star Codex. We also recently changed our pledge so that people could give to whichever cause they felt best helped others, such as existential risk reduction or life extension, depending on their views. Many new members now choose to do this.

What you might not know is that 2014 was a fantastic year for us - our rate of membership growth more than tripled! Amazingly, our 1066 members have now pledged over $422 million, and already given over $2 million to our top rated charities. We've accomplished this on a total budget of just $400,000 since we were founded. This new rapid growth is thanks to the many lessons we have learned by trial and error, and the hard work of our team of staff and volunteers.

To make it to the end of the year we need to raise just another £110,000. Most charities have a budget in the millions or tens of millions of pounds and we do what we do with a fraction of that.

We want to raise the money as quickly as possible, so that our staff can stop focusing on fundraising (which takes up a considerable amount of energy), and get back to the job of growing our membership.

Some of our supporters are willing to sweeten the deal as well: if you haven't given us more than £1,000 before, then they'll match 1:1 a gift between £1,000 and £5,000.

You can give now or email me (robert dot wiblin at centreforeffectivealtruism dot org) for our bank details. Info on tax-deductible giving from the USA and non-UK Europe is also available on our website.

What we are doing this year

The second half of this year is looking like it will be a very exciting one for us. Four books about effective altruism are being released this year, including one by our own trustee William MacAskill, which will be heavily promoted in the US and UK. The Effective Altruism Summit is also turning into 'EA Global' with events at Google Headquarters in San Francisco, Oxford University and Melbourne, headlined by Elon Musk.

Tens, if not hundreds of thousands of people will be finding out about our philosophy of effective giving for the first time.

To do these opportunities justice Giving What We Can needs to expand its staff to support its rapidly growing membership and local chapters, and ensure we properly follow up with all prospective members. We want to take people who are starting to think about how they can best make the world a better place, and encourage them to make a serious long-term commitment to effective giving, and help them discover where their money can do the most good.

Looking back at our experience over the last five years, we estimate that each $1 given to Giving What We Can has already moved $6, and will likely end up moving between $60 and $100, to the most effective charities in the world. (These are time-discounted, counterfactual donations, only to charities we regard very highly. Check out this report for more details.)

This represents a great return on investment, and I would be very sad if we couldn't take these opportunities just because we lacked the necessary funding.

Our marginal hire

If we don't raise this money we will not have the resources to keep on our current Director of Communications. He has invaluable experience as a Communications Director for several high-profile Australian politicians, which has given him skills in web-development, public relations, graphic design, public speaking and social media. Amongst the things he has already achieved in his three months here are: automating the book-keeping on our Trust (saving huge amounts of time and minimising errors), greatly improving our published materials, including our fundraising prospectus, and writing a press release and planning a media push to capitalise on our reaching 1,000 members and Peter Singer’s book release in the UK.

His wide variety of skills means that there are a large number of projects he would be capable of doing which would increase our member growth, and we are keen for him to test a number of these. His first project would be to optimise our website to make the most of the increased attention effective altruism will be generating over the summer, and to turn that into people actually donating 10% of their incomes to the most effective causes. In the past we have had trouble finding someone with such a broad set of crucial skills. Combined with how swiftly and well he has integrated into our team, it would be a massive loss to have to let him go and then need to recruit a replacement later down the line.

As I wrote earlier you can give now or email me (robert dot wiblin at centreforeffectivealtruism dot org) for bank details or personalised advice on how to give best. If you need tax deductibility in another country check these pages on the USA and non-UK Europe.

I'm happy to take questions here or by email!

Dangers of steelmanning / principle of charity

88 gothgirl420666 16 January 2014 06:35AM

As far as I can tell, most people around these parts consider the principle of charity and its super saiyan form, steelmanning, to be Very Good Rationalist Virtues. I basically agree and I in fact operate under these principles more or less automatically now. HOWEVER, no matter how good the rule is, there are always exceptions, which I have found myself increasingly concerned about.

This blog post that I found in the responses to Yvain's anti-reactionary FAQ argues that even though the ancient Romans had welfare, this policy was motivated not by concern for the poor or by a desire for equality, as our modern welfare policies are, but instead "the Roman dole was wrapped up in discourses about a) the might and wealth of Rome and b) goddess worship... The dole was there because it made the emperor more popular and demonstrated the wealth of Rome to the people. What’s more, the dole was personified as Annona, a goddess to be worshiped and thanked."

So let's assume this guy is right, and imagine that an ancient Roman travels through time to the present day. He reads an article by some progressive arguing (using the rationale one would typically use) that Obama should increase unemployment benefits. "This makes no sense," the Roman thinks to himself. "Why would you give money to someone who doesn't work for it? Why would you reward lack of virtue? Also, what's this about equality? Isn't it right that an upper class exists to rule over a lower class?" Etc. 

But fortunately, between when he hopped out of the time machine and when he found this article, a rationalist found him and explained to him steelmanning and the principle of charity. "Ah, yes," he thinks. "Now I remember what the rationalist said. I was not being so charitable. I now realize that this position kind of makes sense, if you read between the lines. Giving more unemployment benefits would, now that I think about it, demonstrate the power of America to the people, and certainly Annona would approve. I don't know why whoever wrote this article didn't just come out and say that, though. Maybe they were confused". 

Hopefully you can see what I'm getting at. When you regularly use the principle of charity and steelmanning, you run the risk of:

1. Sticking rigidly to a certain worldview/paradigm/established belief set, even as you find yourself willing to consider more and more concrete propositions. The Roman would have done better to really read what the modern progressive's logic was, think about it, and try to see where he was coming from than to automatically filter it through his own worldview. If he consistently does this he will never find himself considering alternative ways of seeing the world that might be better.  

2. Falsely developing the sense that your worldview/paradigm/established belief set is more popular than it is. Pretty much no one today holds the same values that an ancient Roman does, but if the Roman goes around being charitable all the time then he will probably see his own beliefs reflected back at him a fair amount.

3. Taking arguments more seriously than you possibly should. I feel like I see all the time on rationalist communities people say stuff like "this argument by A sort of makes sense, you just need to frame it in objective, consequentialist terms like blah blah blah blah blah" and then follow with what looks to me like a completely original thought that I've never seen before. But why didn't A just frame her argument in objective, consequentialist terms? Do we assume that what she wrote was sort of a telephone-game approximation of what was originally a highly logical consequentialist argument? If so where can I find that argument? And if not, why are we assuming that A is a crypto-consequentialist when she probably isn't? And if we're sure that objective, consequentialist logic is The Way To Go, then shouldn't we be very skeptical of arguments that seem like their basis is in some other reasoning system entirely? 

4. Just having a poor model of people's beliefs in general, which could lead to problems.

Hopefully this made sense, and I'm sorry if this is something that's been pointed out before.

Rational discussion of politics

13 cleonid 25 April 2015 09:58PM

In a recent poll, many LW members expressed interest in a separate website for rational discussion of political topics. The website has been created, but we need a group of volunteers to help us test it and calibrate its recommendation system (see below).

If you would like to help (by participating in one or two discussions and giving us your feedback) please sign up here.



About individual recommendation system

All internet forums face a choice between freedom of speech and quality of debate. In the absence of censorship, constructive discussions can easily be disrupted by an inflow of the mind-killed, which causes the more intelligent participants to leave or to descend to the same level.

Preserving quality thus usually requires at least one of the following methods:

  1.  Appointing censors (a.k.a. moderators).
  2.  Limiting membership.
  3.  Declaring certain topics (e.g., politics) off limits.

On the new website, we are going to experiment with a different method. In brief, the idea is to use an automated recommendation system which sorts content, raising the best comments to the top and (optionally) hiding the worst. The sorting is done based on individual preferences, allowing each user to avoid what he or she (rather than moderators or anyone else) defines as low-quality content. In this way we should be able to enhance quality without imposing limits on free speech.
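
For concreteness, here is one minimal way a per-user sorting scheme like this could work. This is a sketch of my own with made-up names (PersonalizedRanker, rate, rank_for), not a description of the site's actual implementation; in particular it aggregates each user's ratings per comment author purely for brevity, whereas a real system would presumably score individual comments:

```python
from collections import defaultdict

class PersonalizedRanker:
    """Sketch: each user's own ratings define what counts as low quality
    for them, and sorting/hiding is done per user, not globally."""

    def __init__(self, hide_below=-0.5):
        # ratings[user][author] -> list of scores that `user` gave `author`
        self.ratings = defaultdict(lambda: defaultdict(list))
        self.hide_below = hide_below

    def rate(self, user, author, score):
        """Record that `user` rated a comment by `author` (-1, 0 or +1)."""
        self.ratings[user][author].append(score)

    def predicted_score(self, user, author):
        """Predict how `user` will like `author`: their mean past rating,
        defaulting to neutral (0) for authors they have never rated."""
        scores = self.ratings[user].get(author)
        return sum(scores) / len(scores) if scores else 0.0

    def rank_for(self, user, comments, hide_worst=False):
        """Sort comments for one user; optionally drop predicted low-quality ones."""
        scored = [(self.predicted_score(user, c["author"]), c) for c in comments]
        if hide_worst:
            scored = [(s, c) for s, c in scored if s >= self.hide_below]
        return [c for s, c in sorted(scored, key=lambda x: x[0], reverse=True)]

ranker = PersonalizedRanker()
ranker.rate("alice", "bob", +1)
ranker.rate("alice", "mallory", -1)
comments = [{"author": "mallory", "text": "..."}, {"author": "bob", "text": "..."}]
print([c["author"] for c in ranker.rank_for("alice", comments)])  # ['bob', 'mallory']
```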


UPDATE. The discussions are scheduled to start on May 1. 

Limited agents need approximate induction

8 Manfred 24 April 2015 07:42AM

[This post borders on some well-trodden ground in information theory and machine learning, so ideas in this post have an above-average chance of having already been stated elsewhere, by professionals, better. EDIT: As it turns out, this is largely the case, under the subjects of the justifications for MML prediction and Kolmogorov-simple PAC-learning.]

I: Introduction

I am fascinated by methods of thinking that work for well-understood reasons - that follow the steps of a mathematically elegant dance. If one has infinite computing power the method of choice is something like Solomonoff induction, which is provably ideal in a certain way at predicting the world. But if you have limited computing power, the choreography is harder to find.

To do Solomonoff induction, you search through all Turing machine hypotheses to find the ones that exactly output your data so far, then use the weighted average of those perfect retrodictors to predict the next time step. So the naivest way to build an ideal limited agent is to merely search through lots of hypotheses (chosen from some simple set) rather than all of them, and only run each Turing machine for time less than some limit. At least it's guaranteed to work in the limit of large computing power, which ain't nothing.
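
To make the "search through lots of hypotheses, run each for a bounded time" idea concrete, here is a minimal sketch in Python. It is an illustration under toy assumptions rather than anything from the post: the hypothesis class is shrunk from all Turing machines down to programs that repeat a fixed bit pattern, the time limit becomes a bound on how much output each one produces, and the universal prior is approximated by weighting each surviving hypothesis by two to the minus its description length:

```python
import itertools

def hypotheses(max_pattern_len):
    """Enumerate toy 'programs': repeat a bit pattern forever.
    Description length is taken to be the pattern length in bits."""
    for length in range(1, max_pattern_len + 1):
        for pattern in itertools.product("01", repeat=length):
            yield "".join(pattern), length

def run(pattern, n_bits):
    """'Run' a hypothesis for a bounded number of steps: emit n_bits of output."""
    reps = -(-n_bits // len(pattern))          # ceiling division
    return (pattern * reps)[:n_bits]

def predict_next(data, max_pattern_len=8):
    """Weighted vote of all hypotheses that exactly retrodict `data`,
    each weighted by 2**(-description length). Returns P(next bit = 1)."""
    p_one = 0.0
    total = 0.0
    for pattern, desc_len in hypotheses(max_pattern_len):
        if run(pattern, len(data)) == data:        # perfect retrodiction only
            weight = 2.0 ** (-desc_len)
            total += weight
            if run(pattern, len(data) + 1)[-1] == "1":
                p_one += weight
    return p_one / total if total else 0.5         # no surviving hypotheses

print(predict_next("010101"))   # ~0.04: the short pattern "01" dominates, so "0" is favored
```

Even in this toy version, the failure mode described next shows up: if the data has no short pattern that reproduces it exactly, every hypothesis in the enumerated class is rejected and the predictor learns nothing.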

Suppose then that we take this nice elegant algorithm for a general predictor, and we implement it on today's largest supercomputer, and we show it the stock market prices from the last 50 years to try to predict stocks and get very rich. What happens?

Bupkis happens, that's what. Our Solomonoff predictor tries a whole lot of Turing machines and then runs out of time before finding any useful hypotheses that can perfectly replicate 50 years of stock prices. This is because such useful hypotheses are very, very, very rare.

We might then turn to the burgeoning field of logical uncertainty, which has a major goal of handling intractable math problems in an elegant and timely manner. We are logically uncertain about what distribution Solomonoff induction will output, so can we just average over that logical uncertainty to get some expected stock prices?

The trouble with this is that current logical uncertainty methods rely on proofs that certain outputs are impossible or contradictory. For simple questions this can narrow down the answers, but for complicated problems it becomes intractable, replacing the hard problem of evaluating lots of Turing machines with the hard problem of searching through lots and lots of proofs about lots of Turing machines - and so again our predictor runs out of time before becoming useful.

In practice, the methods we've found to work don't look very much like Solomonoff induction. Successful methods don't take the data as-is, but instead throw some of it away: curve fitting and smoothing data, filtering out hard-to-understand signals as noise, and using predictive models that approximate reality imperfectly. The sorts of things that people trying to predict stocks are already doing. These methods are vital to improve computational tractability, but are difficult (to my knowledge) to fit into a framework as general as Solomonoff induction.

II: Rambling

Suppose that our AI builds a lot of models of the world, including lossy models. How should it decide which models are best to use for predicting the world? Ideally we'd like to make a tradeoff between the accuracy of the model, measured in the expected utility of how accurate you expect the model's predictions to be, and the cost of the time and energy used to make the prediction.

Once we know how to tell good models, the last piece would be for our agent to make the explore/exploit tradeoff between searching for better models and using its current best.

There are various techniques to estimate resource usage, but how does one estimate accuracy?

Here was my first thought: if you know how much information you're losing (e.g. by binning data), then for discrete distributions this constrains the Shannon information of the ideal value (the one given by Solomonoff prediction) conditional on the predicted value. This uses the relationship between the number of bits of data thrown away and how sharp your probability distribution is allowed to be.

But with no guarantees about the normality (or similar niceness properties) of the ideal value given the prediction, this isn't very helpful. The problem is highlighted by hurricane prediction. If hurricanes behaved nicely as we threw away information, weather models would just be small, high-entropy deviations from reality. Instead, hurricanes can change route greatly even with small differences in initial conditions.

The failure of the above approach can be explained in a very general way: it uses too little information about the model and the data, only the amount of information thrown away. To do better, our agent has to learn a lot from its training data - a subject that workers in AI have already been hard at work on. On the one hand, it's a great sign if we can eventually connect ideal agents to current successful algorithms. On the other, doing so elegantly seems like a hard problem.

To sum up in the blandest possible way: If we want to build successful predictors of the future with limited resources, they should use their experience to learn approximate models of the world.

The real trick, though, is going to be to set this on a solid foundation. What makes a successful method of picking models? As we lack access to the future (yet! Growth mindset!), we can't grade models based on their future predictions unless we descend to solipsism and grade models against models. Thus we're left with grading models based on how well they retrodict the data so far. Sound familiar? The foundation we want seems like an analogue to Solomonoff induction, one that works for known reasons but doesn't require perfection.

III:  An Example

Here's a paradigm that might or might not be a step in the right direction, but at least gestures at what I mean.

The first piece of the puzzle is that a model that gets proportion P of training bits wrong can be converted to a Solomonoff-accepted perfectly-precise model just by specifying the bits it gets wrong. Suppose we break the model output (with total length N) into chunks of size L, and prefix each chunk with the locations of the wrong bits in that chunk. Then the extra data required to rectify an approximate model is at most N/L·log(P·L)+N·P·log(L). So the hypothesis where the model is right about the next bit is simpler than the hypothesis where it's wrong, because when the model is right you don't have to spend ~log(L) bits correcting it.

In this way, Solomonoff induction natively cares about some approximate models' predictions. There are some interesting details here that are outside the focus of this particular post. Does using the optimal chunk length lead to Solomonoff induction reflecting model accuracy correctly? What are some better schemes for rectifying models that handle things like models that output probabilities? The point is just that even if your model is wrong on fraction P of the training data, Solomonoff induction will still promote it as long as it's simpler than N-N/L·log(P·L)-N·P·log(L).
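
As a back-of-the-envelope illustration of that bound (my own sketch, with made-up example values for N and P), the snippet below evaluates N/L·log(P·L) + N·P·log(L) over a range of chunk lengths, reports the best one, and prints the threshold below which an approximate model is still promoted:

```python
import math

def correction_overhead(n_bits, error_rate, chunk_len):
    """The post's upper bound on the extra bits needed to patch a model that
    gets a fraction `error_rate` of its n_bits of output wrong, when the
    output is split into chunks of length `chunk_len` (logs base 2)."""
    n_chunks = n_bits / chunk_len
    wrong_bits = n_bits * error_rate
    # bits to describe the errors in each chunk + bits to locate each error
    return (n_chunks * math.log2(error_rate * chunk_len)
            + wrong_bits * math.log2(chunk_len))

def best_chunk_len(n_bits, error_rate, max_len=4096):
    """Search over chunk lengths for the one minimizing the overhead bound.
    Require at least one expected error per chunk so the per-chunk term
    stays a meaningful (non-negative) code length."""
    candidates = [l for l in range(2, max_len) if error_rate * l >= 1]
    return min(candidates, key=lambda l: correction_overhead(n_bits, error_rate, l))

n, p = 100_000, 0.01          # hypothetical: 100k training bits, 1% error rate
l = best_chunk_len(n, p)
overhead = correction_overhead(n, p, l)
print(f"best chunk length ~{l}, overhead ~{overhead:.0f} bits")
print(f"model is promoted if its length is below ~{n - overhead:.0f} bits")
```

With these example numbers the overhead is a few thousand bits out of 100,000, so a 1%-wrong model only needs to be modestly simpler than the raw data for the argument to go through.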

The second piece of the puzzle is that induction can be done over processed functions of observations, like smoothing the data or filtering difficult-to-predict parts (noise) out. If this processing increases the accuracy of models, we can use this to make high-accuracy models of functions of the training data, and then use those models to predict the processed future observations as above.

These two pieces allow an agent to use approximate models, and to throw away some of its information, and still have its predictions work for the same reason as Solomonoff induction. We can use this paradigm to interpret what an algorithm like curve fitting is doing - the fitted curve is a high-accuracy retrodiction of some smoothed function of the data, which therefore does a good job of predicting what that smoothed function will be in the future.

There are some issues here. If a model that you are using is not the simplest, it might have overfitting problems (though perhaps you can fix this just by throwing away more information than naively appears necessary) or systematic bias. More generally, we haven't explored how models get chosen; we've made the problem easier to brute force but we need to understand non-brute force search methods and what their foundations are. It's a useful habit to keep in mind what actually works for humans - as someone put it to me recently, "humans can make models they understand that work for reasons they understand."

Furthermore, this doesn't seem to capture reductionism well. If our agent learns some laws of physics and then is faced with a big complicated situation it needs to use a simplified model to make a prediction about, it should still in some sense "believe in the laws of physics," and not believe that this complicated situation violates the laws of physics even if its current best model is independent of physics.

IV: Logical Uncertainty

It may be possible to relate this back to logical uncertainty - where by "this" I mean the general thesis of predicting the future by building models that are allowed to be imperfect, not the specific example in part III. Soares and Fallenstein use the example of a complex Rube Goldberg machine that deposits a ball into one of several chutes. Given the design of the machine and the laws of physics, suppose that one can in principle predict the output of this machine, but that the problem is much too hard for our computer to do. So rather than having a deterministic method that outputs the right answer, a "logical uncertainty method" in this problem is one that, with a reasonable amount of resources spent, takes in the description of the machine and the laws of physics, and gives a probability distribution over the machine's outputs.

Meanwhile, suppose that we take an approximately inductive predictor and somehow teach it the laws of physics, then ask it to predict the machine. We'd like it to make predictions via some appropriately simplified folk model of physics. If this model gives a probability distribution over outcomes - like in the simple case of "if you flip this coin in this exact way, it has a 50% shot at landing heads" - doesn't that make it a logical uncertainty method? But note that the probability distribution returned by a single model is not actually the uncertainty introduced by replacing an ideal predictor with a resource-limited predictor. So any measurement of logical uncertainty has to factor in the uncertainty between models, not just the uncertainty within models.

Again, we're back to looking for some prediction method that weights models with some goodness metric more forgiving than just using perfectly-retrodicting Turing machines, and which outputs a probability distribution that includes model uncertainty. But can we apply this to mathematical questions, and not just Rube Goldberg machines? Is there some way to subtract away the machine and leave the math?

Suppose that our approximate predictor was fed math problems and solutions, and built simple, tractable programs to explain its observations. For easy math problems a successful model can just be a Turing machine that finds the right answer. As the math problems get more intractable, successful models will start to become logical uncertainty methods - like how we can't predict a large prime number exactly, but we can predict that its last digit is 1, 3, 7, or 9. Within this realm we have something like low-level reductionism, where even though we can't find a proof of the right answer, we still want to act as if mathematical proofs work and all else is ignorance, and this will help us make successful predictions.
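
As a tiny concrete version of the prime example (my own illustration, not from the post): among all primes up to a large bound, the last digits 1, 3, 7 and 9 each show up about a quarter of the time, which is exactly the kind of partial prediction a resource-limited model can make without knowing the prime itself.

```python
from collections import Counter

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

# Distribution of last digits among primes > 10 up to a million.
last_digits = Counter(p % 10 for p in primes_up_to(1_000_000) if p > 10)
total = sum(last_digits.values())
print({d: round(c / total, 3) for d, c in sorted(last_digits.items())})
# roughly {1: 0.25, 3: 0.25, 7: 0.25, 9: 0.25}
```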

Then we have complicated problems that seem to be beyond this realm, like P=NP. Humans certainly seem to have generated some strong opinions about P=NP without dependence on mathematical proofs narrowing down the options. It seems to such humans that the genuinely right procedure to follow is that, since we've searched long and hard for a fast algorithm for NP-complete problems without success, we should update in the direction that no such algorithm exists. In approximate-Solomonoff-speak, it's that P!=NP is consistent with a simple, tractable explanation for (a recognizable subset of) our observations, while P=NP is only consistent with more complicated tractable explanations. We could absolutely make a predictor that reasons this way - it just sets a few degrees of freedom. But is it the right way to reason?

For one thing, this seems like it's following Gaifman's proposed property of logical uncertainty, that seeing enough examples of something should convince you of it with probability 1 - which has been shown to be "too strong" in some sense (it assigns probability 0 to some true statements - though even this could be okay if those statements are infinitely dilute). Does the most straightforward implementation actually have the Gaifman condition, or not? (I'm sorry, ma'am. Your daughter has... the Gaifman condition.)

This inductive view of logical uncertainty lacks the consistent nature of many other approaches - if it works, it does so by changing approaches to suit the problem at hand. This is bad if you want your logical uncertainty methods to be based on a simple prior followed by some kind of updating procedure. But logical uncertainty is supposed to be practical, after all, and at least this is a simple meta-procedure.

V: Questions

Thanks for reading this post. In conclusion, here are some of my questions:

What's the role of Solomonoff induction in approximate induction? Is Solomonoff induction doing all of the work, or is it possible to make useful predictions using tractable hypotheses Solomonoff induction would exclude, or excluding intractable hypotheses Solomonoff induction would have to include?

Somehow we have to pick out models to promote to attention in the first place. What properties make a process for this good or bad? What methods for picking models can be shown to still lead to making useful predictions - and not merely in the limit of lots of computing time?

Are humans doing the right thing by making models they understand that work for reasons they understand? What's up with that reductionism problem anyhow?

Is it possible to formalize the predictor discussed in the context of logical uncertainty? Does it have to fulfill Gaifman's condition if it finds patterns in things like P!=NP?

If You Like This Orange...

-27 [deleted] 01 April 2015 02:42AM

If you like this orange you must like that orange.  Well, maybe.  Tastes change, and maybe I already had an orange a little while ago, and maybe I'm not in the mood while someone else would be glad to have it, so it doesn't follow that because I liked this orange I must like that orange.

Comparing oranges and oranges seems like a set of two objects, but it's really four.  There's you, there's the orange, there's the other orange, and there's the perceived relation between you and the two oranges.  When it's just you and the oranges, things usually find a simple way to work themselves out.

But when someone else comes into the room it's seldom oranges and oranges.   Other people are ever ready to tell you what you like.  If you like this orange you must like that apple, because they're both fruit.  Nah, can't stand apples unless they are baked.  It doesn't matter that they are both fruit, I don't care for apples.  Then the helping helpers will infer the inverse.  If you like this orange you can't like that apple.  Watch me - I'll like an apple just to spite you, or choke it down because there aren't any oranges to be had.

The nonsense comparisons just get more nonsensical.  If you like this orange you must like that color orange, you must!  That's the way it's always gone!  Well, I say if you like this orange you must like that porcupine.  See how silly it sounds?  As long as someone sees that fourth object in the set, a connection between the two things and you, they will hard-sell you that the orange and the very-not-orange are fully fungible.

That fourth object in the set, the perceived relation between the other three, gets its power from being invisible and assumed.  The assumption of relations in the set overpowers all the other objects in the set.  If you like this orange you are an orange-ist, because there's (a) you (b) the orange (c) your liking of the orange and (d) anybody that likes that orange is an orange-ist, that's the relation between you and the orange caused by your liking it.  The invisible fourth object in the set, the assumption of a relation, is now a stand-in for you.  You are no longer a person who in one place, in one time, in one way, liked an orange.  You are an orange-ist.

If you are friends with that guy / read that book, and that guy / book exposed that idea, and that whole other guy with that idea did that thing, then you did that thing!  The four step process of replacing the man with a mannequin is the start of superstition.  Religion is realized in the replacement of the representation for the real.  Hard to believe that belief is so beleaguered but right here on this very planet in this very year there are nations where if you draw the wrong cartoon, read the wrong poem, or question the wrong answer, you go to prison.  Or worse.

Here's how they make the rotten trolley run.  If you said this one thing this one time then you believe - no, you are - this other thing.  A clergyman is not only a clergyman, they are a Good Person.  Good People do Good Deeds, and if the clergyman doesn't do good deeds, or if he does bad deeds, well, he's still a Good Person.  All four stations of Goodnessity are there: the clergyman, the Good Deeds clergymen are associated with, Good Deeds associated with Good People, and hallelujah! clergymen are Good People.  And oh my but the four stations of Badnessism are there as well.  If you tell that one joke then you're a Bad Person.  That joke has the Bad Word in it, Bad People use that Bad Word, Bad People do Bad Deeds, so you did a Bad Deed!

It's four things. You, that thing you like, another thing and the proposed connection between the things. That connection is presented as more important than you.  The evidence shows that nothing is more to me than myself.  I'd not be here to tell you if this was not the case.  What other people think and do about me has its influences, but I don't confuse that with right or wrong or especially not Rights and Sins.  Egoism is the school of thought closest to my own, and that association draws from my own luster.

The pressure to be packed in a package deal comes in many forms.  Don't like too many kinds of art or music, be part of a scene.  Don't hold political or philosophical views, be a member of a party or a school.  Don't be online, be in a social network.  And most of all don't have a yen for truth, beauty and strength - be spiritual.

When the crowd crowns you with a trait, you're trapped.  To be identified as a whole by one of your parts is cutting.  Oh you're a massage therapist?  I have this pinch in my back.  You're a car mechanic?  You know, my car is just outside.  You do stand-up?  Tell me a joke, funny guy.  I heard you're a porn star, is that right?  Let's see those tits.  So you're a professional wrestler, eh?  I like that other wrestler better, the nice guy.  In every variation we are made out to be not ourselves but the thing other people think you are.  Man, that dude's a racist.  Heil hitler, you cartoon-drawer!  Her over there, she has a suicidal level of self-hatred and is an active enemy of all women.  She quit her job to be a mom when she was in her 20s.  There's something just creepy about that family down the hall, they're always happy.  Yeah, they're Mormons.  Fake vegan meat supports the aesthetic of carnivore culture.  No one more intolerant than the loud champions of toleration, no one more ready to divide than the unifiers of diversity.

In the United States, a slave knew he had a place: that of a slave.  In India, an Untouchable knew he had a place: that of an Untouchable.  The modern moral minders, starting with Stalin onward, developed a different delineator.  If you are seen to stray too far from the approved set of beliefs, you have no place.  You are to be stripped of your job, your career, your credentials, your home and your money.  The Good Guys in the White Hats are ever vigilant for any infraction.  Call them the improperatzzi.  What a remarkable coincidence that the virtue they advocate is the same as the group they are a member of.

I can't say I judge all men in all moments anew.  I've also decided to not ask you to do so.  That sounds too much like work.  I don't have the time or energy, much less the inclination, to always cast aside generalities, stereotypes, and biases.  In this very essay I may lump a whole spectrum of people I disagree with into the base categories of liars and fools.  But you and I both know some people are just jerks, and some people are solid citizens.  I'm a member of some groups, a friend of others.  Everyone I don't like has me in common.  If it suits me I'll give you a chance, but maybe I'm busy or angry that day and you're just going to be hidden behind what I think of you based on some other thing at some other time.  You'll live.  My opinion isn't even all that important to me.

The troubles come when people decide that those who are different aren't to live.  Except for liars and fools, everyone on the planet knows that the Religion of Peace currently holds the title belt for murdering those who think or act differently than they do.  I keep hearing that there's a majority of Muslims who aren't like that, but I also keep not hearing about what they are doing to enlighten their brothers and sisters who keep misunderstanding Islam in the same way, century after century.  Maybe the numbers are there for the majority to reform the minority, but let's see some action.  A sound public shaming is a good start, and in this regard I do my part.  But again - I limit myself to that most pathetic and un-magical of all activities, writing, when I disagree.  The beheaders, the child-rapers, the enslavers, the kidnappers, the hijackers, the perpetually grieved - the Muslims - not so much.

There's no controversy, only a nontroversy.  A man can like music by ADULT. and Mildred Bailey.  A man can know a great deal about far right politics without being of the far right.  A man can be interested in beliefs about UFOs without believing in UFOs.  The scolds and the bullies secretly know this but don't want you in on their game.  They know what is bad for other people because they've seen the evidence - but somehow, they saw the evidence and didn't suffer from the exposure.  They are good enough to tell you what's good for you, but you aren't.  No thank you, you pinch-faced busybodies, I'll decide for myself what I like and do and think and believe.  I'll even take my lumps for the luxury.

The heart wants what the heart wants.  So does the groin.  I've made up a name for those who think otherwise: quantisexual.  A quantisexual is deeply invested in quantifying sex.  Who can have sex with who, what the arrangement is named, who shares that name and who doesn't.  Who is doing it right, who is doing it right but for the wrong reasons, who is doing it all wrong.  Not satisfied with the real-life cooties you can get from sex, a quantisexual invents forms of ritual contamination and cleanliness.  If you have even one stray thought about your own sex, you're bisexual.  If you're bisexual then you're queer.  If you're queer then you have to support all the other queers in all their queeriosities.  Even if you don't have sex at all there's a whole slew of cooties you can accessorize yourself with like 'cis' and 'demisexual' and 'asexual.'  The name for a thing becomes more important than the thing itself, like sheets being more sexy than what goes on between them.  The alphabet soup of alt-sex has more rules and restrictions than the Roman Catholic Church.  Quantisexuality is a fetish.  Hip hip hooray if you were born that way or if, by pretending it's your thing, you get to join the right in-groups.  Sex will go on without your names for it.

Standing at the rich banquet of life, far too many go with a cuisine they've been gifted by someone not even alive to share the meal.  Only these foods go together, and only in this order, and in this amount.  Not because to do otherwise leads to sickness or death, but because, well, other people might... see...  See what?  Me getting a few of these and a few of those, concerned less than they, enjoying more than they.  You do go on if you must keep kosher, hold halal and avoid fish on Friday.  All the more for me, pal, or maybe I'll just have a bite and be done.  What we do and like isn't limited to one item from column A and two items from column B.  Life is not a family meal or a package deal.  Beliefs and interests are all a big mess and probably not very important, so pull them together in a way that makes sense to you.  Just don't insist I sign on to your supper club.

The thing you like is the thing you like.  You didn't used to like it, and maybe you won't like it later.  You don't have to explain or understand it.  You don't have to get my approval for it.  If it stops working for you, you stop working for it.  Move on, and I'll be doing the same.

- Trevor Blake is the author of Confessions of a Failed Egoist.

[POLITICS] Jihadism and a new kind of existential threat

-5 MrMind 25 March 2015 09:37AM

Politics is the mind-killer. Politics IS really the mind-killer. Please meditate on this until politics flows over you like butter on hot teflon, and your neurons stop fibrillating and resume their normal operations.


I've always found it silly that LW, one of the best and most focused groups of rationalists on the web, isn't able to talk evenly about politics. It's true that we are still human, but can't we just make an effort at being calm and level-headed? I think we can. Does gradual exposure work on groups, too? Maybe a little bit of effort combined with a little bit of exposure will work as a vaccine.
And maybe tomorrow a beautiful naked valkyrie will bring me to utopia on her flying unicorn...
Anyway, I want to try. Let's see what happens.


Two recent events have prompted me to make this post: I'm reading "The Rise of the Islamic State" by Patrick Cockburn, which I think does a good job of presenting fairly the very recent history surrounding ISIS, and the terrorist attack in Tunis by the same group, which resulted in 18 foreigners killed.
I believe that their presence in the region is now definitive: they control an area that is wider than Great Britain, with a population tallying over six million, not counting the territories controlled by affiliate groups like Boko Haram. Their influence is also expanding, and the attack in Tunis shows that this entity is not going to stay confined within the borders of Syria and Iraq.
It may well be the case that in the next ten years or so, this will be an international entity which will bring ideas and mores predating the Middle Ages back to the Mediterranean Sea.

A new kind of existential threat

To a mildly rational person, the conflict fueling the rise of the Islamic State, namely the doctrinal differences between Sunni and Shia Islam, is the worst kind of Blue/Green division: a separation that causes hundreds of billions of dollars (read that again) to be wasted as the two sides try to kill each other. But here it is, and the world must deal with it.
In comparison, Democrats and Republicans are so close that they could be mistaken for Aumann agreeing.
I fear that ISIS is bringing a new kind of existential threat: one where it is not the existence of humankind that is at risk, but the existence of the idea of rationality.
The funny thing is that while people can be extremely irrational, they can still work on technology to discover new things. Fundamentalism has never stopped a country from achieving technological progress: think about the wonderful skyscrapers and green patches in the desert of the Arab Emirates, or the nuclear weapons of Pakistan. So it might well be the case that in the future some scientist will start a seed AI believing that Allah will guide it to evolve in the best way. But it also might be that in the future, African, Asian and maybe European (gasp!) rationalists will be hunted down and killed like rats.
It might be the very meme of rationality that is erased from existence.


I'll close with a bunch of questions, both strictly and loosely related. Mainly, I'm asking you to refrain from proposing a solution. Let's assess the situation first.

  • Do you think that the Islamic State is an entity which will vanish in the future or not?
  • Do you think that their particularly violent brand of jihadism is a worse menace to the sanity waterline than, say, other kinds of religious movements, past or present?
  • Do you buy the idea that fundamentalism can be coupled with technological advancement, so that the future will present us with Islamic AIs?
  • Do you think that the very same idea of rationality can be the subject of existential risk?
  • What do Neoreactionaries think of the Islamic State? After all, it's an exemplar case of the reactionaries in those areas winning big. I know it's only a surface comparison; I'm sincerely curious what an NR thinks of the situation.

Live long and prosper.
