
Comment author: Evan_Gaensbauer 23 February 2015 02:58:00PM *  16 points

I'm drafting a post for Discussion about how users on LessWrong who feel disconnected from the rationalist community can get involved and make friends.

What I've got so far:

* Where everybody went when they left LessWrong, and why
* How you can keep up with great content/news/developments in rationality on sites other than LessWrong
* How to get involved by going to meetups and using the LW Study Hall

What I'm looking for:

  1. A post I can link to about why the LW Study Hall is great.

  2. Testimonials about how attending a meetup transformed your social or intellectual life. I know this is the case in the Bay Area, and I know life became much richer for some friends I have in, e.g., Vancouver and Seattle.

  3. A repository of ideas for meetups and other social events, for when somebody planning or starting a meetup can't think of anything to do.

  4. How to become friends and integrate socially with other rationalists/LWers. A rationalist from Toronto visited Vancouver, noticed we were all friends, and asked us how we had become friends, rather than just a circle of individuals who share intellectual interests but not much else. The only suggestions we could think of were:

Be friends with a couple of people from the meetup for years beforehand, and hang out with everyone else for two years until it stops being awkward.

and

If you can get a 'rationalist' house with roommates from your LW meetup, you can force yourselves to rapidly become friends.

These are bad or impractical suggestions. If you have better ones to share, that'd be fantastic.

Please add suggestions for the numbered list. If relevant resources don't exist, let me know, and I/we/somebody can make them. And if you think I'm missing something else, please tell me.

Comment author: Nisan 24 February 2015 05:22:11AM 1 point

Kaj Sotala wrote a PDF called "How to run a Less Wrong meetup" or something like that.

Comment author: Metus 15 December 2014 01:17:31AM 1 point

Interesting answer. Seeing as my personal giving is done entirely out of pleasure, not some kind of moral obligation, the argument for diversification is very strong.

Comment author: Nisan 15 December 2014 01:27:17AM 3 points

Ah. Well, then there doesn't seem to be anything to debate here. If you want to do what makes you happy, then do what makes you happy.

Comment author: Metus 15 December 2014 12:25:12AM *  2 points

I want to reopen the debate about whether to split donations or to concentrate them in one place.

One camp insists on donating all your money to the single charity with the highest current marginal effectiveness. The other camp claims that you should split donations, for reasons ranging from "if everyone thought like this" to "don't put all your eggs in one basket." My position is firmly in the second camp, as it seems obvious to me that you should split your donations just as you split your investments, because of risk.

But it is not obvious at all. If a utility function is concave, risk aversion arises completely naturally, and with it all the associated theory of how to avoid unnecessary risk. Utilitarians, however, seem to consider it natural that the moral utility function is completely linear in the number of people, or QALYs, or any other measure of human well-being. Is there any theoretical reason risk aversion can arise if a utility function is linear in this way?
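One standard way to make the question precise, added here for illustration: let $X$ be a random outcome, measured in QALYs say. If the utility function $u$ is linear, then

$$\mathbb{E}[u(X)] = u(\mathbb{E}[X]),$$

so the agent is exactly indifferent between a gamble and its expected value, and no risk aversion can arise. Risk aversion corresponds to concavity via Jensen's inequality, $\mathbb{E}[u(X)] \le u(\mathbb{E}[X])$ for concave $u$. On this standard account, a utility linear in QALYs leaves no risk-based reason to diversify.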

In the same vein, there seems to be no theoretical reason for having time preference in a world of certainty. So if we agree that we should invest our donations and donate them later, it seems there is no reason to actually donate at any particular time, since at any such time we could follow the same reasoning and push the donation even further. Is the conclusion then to either donate now or not at all? Or should the answer be far more complicated, involving average and local economic growth and thus the impact of money donated now versus later?
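One way to formalize the regress, under the loud simplifying assumptions that impact per dollar is a constant $v$ and invested money grows at rate $r > 0$: donating $d$ dollars at time $t$ produces $v\,d\,e^{rt}$ units of good, which increases without bound in $t$, so no finite donation date is optimal, yet waiting forever produces nothing. The regress breaks only if an assumption fails, e.g. if the marginal value of donations $v(t)$ decays faster than $e^{rt}$ grows, or if there is a hard deadline like the rule against perpetuities mentioned in the reply below.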

Let the perfect not be the enemy of the good, but this rabbit hole seems to go deeper and deeper.

Comment author: Nisan 15 December 2014 01:22:14AM 3 points

I believe donating to the best charity is essentially correct, for the reason you state. You won't find much disagreement on Less Wrong or from GiveWell. Whether that's obvious or not is a matter of opinion, I suppose. Note that in GiveWell's latest top charity recommendations, they suggest splitting one's donation among the very best charities for contingent reasons not having to do with risk aversion.

If you had some kind of donor-advised fund that could grow to produce an arbitrarily large amount of good given enough time, that would present a conundrum. It would be exactly the same conundrum as the following puzzle: Suppose you can say a number and get that much money; which number do you say? In practice, however, our choices are limited. The rule against perpetuities prevents you from donating long after your lifetime, and opportunities to do good with your money may dry up faster than your money grows. Holden Karnofsky has some more practical considerations.

Comment author: Nisan 22 October 2014 05:59:05PM 1 point

Claim: There are some deals you should make but can't.

Comment author: curiousepic 15 July 2014 06:50:21PM 4 points

I'm an EA and interested in signing up for cryonics. After cryocrastinating for a few years (ok, I guess I don't get to say "after" until I actually sign up), I've realized that I should definitely sign up for life insurance, because of the ability to change the beneficiary. I place a low probability on cryonics working right now, but I can name a charity or a donor-advised fund as the beneficiary until I place a sufficiently high probability on suspension technology working. In the future, I can change it back if I change my mind, etc.

Any issues that might come into play here? If no one sees any flaws, I'm committing to signing up for life insurance with this plan in mind by or during the next open thread, and to making a more prominent post about this strategy for any EA+cryonics people.

Comment author: Nisan 17 July 2014 01:14:36AM 0 points

It might be worth looking into which life insurance companies are friendly to cryonics.

Comment author: Nisan 20 June 2014 05:22:26PM 7 points

There's an interesting parallel with Modal Combat. Both approaches want to express the idea that "moral agents are those that cooperate with moral agents". Modal Combat resolves the circularity with diagonalization, and Eigenmorality resolves it by finding a stable distribution.
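A minimal sketch of the "stable distribution" idea (the toy data and numbers are mine, not from either proposal): score each agent's morality by how much it cooperates with moral agents. That makes the score vector a principal eigenvector of the cooperation matrix, which power iteration finds, much as PageRank does for links.

```python
import numpy as np

# Toy cooperation matrix: C[i, j] = 1 if agent i cooperates with agent j.
# Agents 0-2 cooperate with one another; agent 3 defects against everyone.
C = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
], dtype=float)

# An agent is moral in proportion to its cooperation with moral agents:
# m ∝ C m, i.e. m is a principal eigenvector of C. Power iteration:
m = np.ones(C.shape[0]) / C.shape[0]
for _ in range(100):
    m = C @ m
    m /= m.sum()  # renormalize to a probability distribution

print(np.round(m, 3))  # -> [0.333 0.333 0.333 0.   ]
```

The three mutual cooperators converge to equal positive scores and the unconditional defector's score goes to zero; the circularity never needs to be unwound explicitly, because the fixed point handles it.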

Comment author: Qiaochu_Yuan 20 June 2014 03:14:18AM 2 points

This thread was prompted by this comment in the Open Thread.

Comment author: Nisan 20 June 2014 04:57:26PM 3 points

That comment is about utilitarianism and doesn't mention "utility functions" at all.

Comment author: ShardPhoenix 20 May 2014 12:12:40AM 0 points

Shouldn't AIXI include itself (for all inputs) recursively? If so, I don't think your sequence is well-defined.

Comment author: Nisan 20 May 2014 01:54:55AM 4 points

No, AIXI isn't computable and so does not include itself as a hypothesis.

Comment author: Viliam_Bur 19 May 2014 06:27:33PM *  1 point

"the maximally unexpected sequence will be random"

In a random sequence, AIXI would guess on average half of the bits correctly. My goal was to create a specific sequence where it couldn't guess any. Not just a random sequence, but specifically... uhm... an "anti-inductive" one? The exact opposite of lawful, whereas random is merely halfway opposed. I don't care about other possible predictors, only about AIXI.

Imagine playing rock-paper-scissors against someone who beats you every time, whatever you do. That's worse than random. This sequence would bring the mighty AIXI to tears... but I suspect that to a human observer it would merely seem pseudo-random. And it is probably not useful for any goal other than making fun of AIXI.

Comment author: Nisan 20 May 2014 01:53:57AM *  3 points

Ok. I still think the sequence is random in the algorithmic information theory sense; i.e., it's incompressible. But I understand you're interested in the adversarial aspect of the scenario.

You only need a halting oracle to compute your adversarial sequence (because that's what it takes to run AIXI). A super-Solomonoff inductor that inducts over all Turing machines with access to halting oracles would be able to learn the sequence, I think. The adversarial sequence for that inductor would require a higher oracle to compute, and so on up the ordinal hierarchy.

Comment author: Viliam_Bur 19 May 2014 10:02:45AM *  3 points

I have a random mathematical idea; I'm not sure what it means, whether it is somehow useful, or whether anyone has explored this before. So I guess I'll just write it here.

Imagine the most unexpected sequence of bits. What would it look like? Well, probably not what you'd expect, by definition, right? But let's be more specific.

By "expecting" I mean this: You have a prediction machine, similar to AIXI. You show the first N bits of the sequence to the machine, and the machine tries to predict the following bit. And the most unexpected sequence is one where the machine makes the most guesses wrong; preferably all of them.

More precisely: The prediction machine starts by imagining all possible algorithms that could generate sequences of bits, and it assigns them probabilities according to the Solomonoff prior. (Which is impossible to do in real life, because of the infinities involved, etc.) Then it receives the first N bits of the sequence and removes all algorithms which would not generate a sequence starting with these N bits. Now it normalizes the probabilities of the remaining algorithms and lets them vote on whether the next bit will be 0 or 1.

However, our sequence is generated in defiance of the prediction machine. We actually don't have any sequence in advance. We just ask the prediction machine what the next bit is (starting with the empty initial sequence), and then do the exact opposite. (There is some analogy with Cantor's diagonal proof.) Then we send the sequence with this new bit to the machine, ask it to predict the next bit, and again do the opposite. Etc.

There is one technical detail: the prediction machine may answer "I don't know" if exactly half of the remaining algorithms predict that the next bit will be 0 and the other half predict that it will be 1. Let's say that if we receive this specific answer, we will always add 0 to the end of the sequence. (But if the machine thinks it's 0 with probability 50.000001% and 1 with probability 49.999999%, it will output "0", and we will add 1 to the end of the sequence.)
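Here is a computable toy version of the construction (my sketch, not from the comment: a finite class of periodic bit patterns, weighted by period length, stands in for the uncomputable Solomonoff prior):

```python
from fractions import Fraction

def toy_hypotheses(max_period=4):
    """Finite stand-in for the Solomonoff prior: every periodic bit
    pattern of period up to max_period, weighted by 2^-(period)."""
    hyps = []
    for p in range(1, max_period + 1):
        for n in range(2 ** p):
            pattern = [(n >> i) & 1 for i in range(p)]
            hyps.append((Fraction(1, 2 ** p), pattern))
    return hyps

def prob_next_is_one(prefix, hyps):
    """Normalized weight on 'next bit = 1' among hypotheses consistent
    with the prefix, or None if no hypothesis survives."""
    w0 = w1 = Fraction(0)
    for w, pat in hyps:
        if all(pat[i % len(pat)] == b for i, b in enumerate(prefix)):
            if pat[len(prefix) % len(pat)] == 1:
                w1 += w
            else:
                w0 += w
    return None if w0 + w1 == 0 else w1 / (w0 + w1)

def adversarial_sequence(n_bits, hyps):
    """Diagonalize: emit the opposite of the machine's guess; on an
    exact 50/50 tie or "I don't know", emit 0, as in the comment."""
    seq = []
    for _ in range(n_bits):
        p1 = prob_next_is_one(seq, hyps)
        if p1 is None or p1 == Fraction(1, 2):
            seq.append(0)
        else:
            seq.append(0 if p1 > Fraction(1, 2) else 1)
    return seq

print(adversarial_sequence(6, toy_hypotheses()))  # -> [0, 1, 1, 1, 1, 0]
```

On this toy class the run begins 0, 1, 1, matching the trace in the next paragraph: the empty prefix is an exact tie, and after "0" the cheap all-zeros pattern dominates, so the machine predicts 0 and the adversary emits 1. (Being finite, the toy class eventually runs out of consistent hypotheses, after which the tie-break rule emits zeros forever; the real construction over all programs never runs out.)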

So... at the beginning, there is no way to predict the first bit, so the machine says "I don't know" and the first bit is 0. At that moment, the prediction of the following bit is 0 (because the "only 0's" hypothesis is very simple), so the first two bits are 01. I am not sure here, but my next prediction (though I am predicting this with naive human reasoning, no math) would be 0 (as in "010101..."), so the first three bits are 011. -- And I don't dare to speculate about the following bits.

The exact sequence depends on how exactly the prediction machine defines the "algorithms that generate the sequence of bits" (the technical details of the language these algorithms are written in), but can something still be said about these "most unexpected" sequences in general? My guess is that to a human observer they would look like random noise. -- Which contradicts my initial words that the sequence would not be what you'd expect... but I guess the answer is that the generation process is trying to surprise the prediction machine, not me as a human.

Comment author: Nisan 19 May 2014 03:53:19PM 2 points

In order to capture your intuition that a random sequence is "unsurprising", you want the predictor to output a distribution over {0,1} — or equivalently, a subjective probability p of the next bit being 1. The predictor tries to maximize the expectation of a proper scoring rule. In that case, the maximally unexpected sequence will be random, and the probability of the sequence will approach 2^{-n}.
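To spell out the scoring-rule point (a standard derivation, using the logarithmic score as the example): if the next bit is 1 with probability $q$ and the predictor reports $p$, its expected score is

$$\mathbb{E}[S(p)] = q \log p + (1 - q) \log(1 - p),$$

which is maximized exactly at $p = q$, so honest reporting is optimal. Against an adversary who always makes the reportedly likelier bit lose, the predictor's best defense is to report $p = 1/2$ on every bit; the $n$-bit prefix then has probability $2^{-n}$, and the "maximally surprising" sequence is indistinguishable from fair coin flips.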

Allowing the predictor to output {0, 1, ?} is kind of like restricting its outputs to {0%, 50%, 100%}.
