
In response to Applause Lights
Comment author: Nisan 14 April 2015 05:28:22AM 1 point [-]

"I am here to propose to you today that we need to balance the risks and opportunities of advanced Artificial Intelligence..."

Seven years later, this open letter was signed by leaders of the field. It's amusing how similar it is to the above speech, especially considering that it actually marked a major milestone in the advancement of the field of AI safety.

Comment author: Evan_Gaensbauer 23 February 2015 02:58:00PM *  16 points [-]

I'm drafting a post for Discussion about how users on LessWrong who feel disconnected from the rationalist community can get involved and make friends and stuff.

What I've got so far:

* Where everybody who left LessWrong went, and why
* How you can keep up with great content/news/developments in rationality on sites other than LessWrong
* How to get involved by going to meetups and using the LW Study Hall

What I'm looking for:

  1. A post I can link to about why the LW Study Hall is great.

  2. Testimonials about how attending a meetup transformed your social or intellectual life. I know this is the case in the Bay Area, and I know life became much richer for some friends I have in, e.g., Vancouver and Seattle.

  3. A repository of ideas for meetups and other socializing, for when somebody planning or starting a meetup can't think of anything to do.

  4. How to become friends and integrate socially with other rationalists/LWers. A rationalist from Toronto visited Vancouver, noticed we were all friends, and asked us how we had all become friends, rather than remaining a circle of individuals who share intellectual interests but not much else. The only suggestions we could think of were:

Be friends with a couple of people from the meetup for years beforehand, and hang out with everyone else for two years until it stops being awkward.

and

If you can get a 'rationalist' house with roommates from your LW meetup, you can force yourselves to rapidly become friends.

These are bad or impractical suggestions. If you have better ones to share, that'd be fantastic.

Please add suggestions for the numbered list. If relevant resources don't exist, notify me, and I/we/somebody can make them. If you think I'm missing something else, please let me know.

Comment author: Nisan 24 February 2015 05:22:11AM 1 point [-]

Kaj Sotala wrote a pdf called "How to run a Less Wrong meetup" or something like that.

Comment author: Metus 15 December 2014 01:17:31AM 1 point [-]

Interesting answer. Seeing as my personal giving is done entirely out of pleasure, not out of some kind of moral obligation, the argument for diversification is very strong.

Comment author: Nisan 15 December 2014 01:27:17AM 3 points [-]

Ah. Well, then there doesn't seem to be anything to debate here. If you want to do what makes you happy, then do what makes you happy.

Comment author: Metus 15 December 2014 12:25:12AM *  2 points [-]

I want to reopen the debate about whether to split donations or to concentrate them in one place.

One camp insists on donating all your money to the single charity with the highest current marginal effectiveness. The other camp claims that you should split donations for various reasons, ranging from concerns like "if everyone thought like this" to "don't put all your eggs in one basket." My position is firmly in the second camp, as it seems obvious to me that you should split your donations just as you split your investments, because of risk.

But it is not obvious at all. If a utility function is concave, risk aversion arises completely naturally, and with it all the associated theory of how to avoid unnecessary risk. Utilitarians, however, seem to consider it natural that the moral utility function is completely linear in the number of people, or QALYs, or any other measure of human well-being. Is there any theoretical reason risk aversion can arise if a utility function is completely linear in the way described above?
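
A minimal sketch of the concavity point (toy numbers, with a square-root utility standing in for any concave one): Jensen's inequality alone produces risk aversion, and with a linear utility the effect vanishes, which is exactly the question above.

    # Jensen's inequality for a 50/50 gamble between 0 and 100 units of good.
    # Concave utility: expected utility of the gamble is below the utility of its
    # expected value, so the agent prefers the sure thing (risk aversion).
    # Linear utility: the two coincide, and no risk aversion appears.
    import math

    outcomes = [0.0, 100.0]                      # equally likely payoffs (e.g. QALYs)
    expected_value = sum(outcomes) / len(outcomes)

    def concave_u(x):                            # diminishing returns
        return math.sqrt(x)

    def linear_u(x):                             # "a QALY is a QALY"
        return x

    for u in (concave_u, linear_u):
        eu_gamble = sum(u(x) for x in outcomes) / len(outcomes)
        print(u.__name__, eu_gamble, u(expected_value))
    # concave_u: 5.0 < 7.07...  -> prefers the certain outcome
    # linear_u: 50.0 = 50.0     -> indifferent between gamble and sure thing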

In the same vein, there seems to be no theoretical reason for having time preference in a world of certainty. So if we agree that we should invest our donations and donate them later, it seems like there is no reason to actually donate them at any particular time, since at any such time we could follow the same reasoning and push the donation even further into the future. Is the conclusion then to either donate now or not at all? Or should the answer be much more complicated, involving average and local economic growth and thus the impact of money donated now versus later?
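
One toy way to see where the regress can bottom out (all rates and numbers below are made up for illustration): if your investments grow at rate r while the cost-effectiveness of the best remaining giving opportunities decays at rate d, then in this simple model you should donate immediately whenever d > r, and "always wait" only follows when r > d forever.

    # Toy model: waiting t years multiplies the pot by (1 + r)**t, but the good
    # done per dollar shrinks by (1 - d)**t as the best opportunities get funded.
    def good_done(t, r=0.05, d=0.08, pot=1000.0, qalys_per_dollar=0.01):
        return pot * (1 + r) ** t * qalys_per_dollar * (1 - d) ** t

    for t in (0, 5, 10, 20):
        print(t, round(good_done(t), 2))
    # With d > r the total shrinks every year, so donating now wins; swap r and d
    # and the same calculation says to wait forever, which is the regress above.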

Let the perfect not be the enemy of the good, but this rabbit hole seems to go deeper and deeper.

Comment author: Nisan 15 December 2014 01:22:14AM 3 points [-]

I believe donating to the best charity is essentially correct, for the reason you state. You won't find much disagreement on Less Wrong or from GiveWell. Whether that's obvious or not is a matter of opinion, I suppose. Note that in GiveWell's latest top charity recommendations, they suggest splitting one's donation among the very best charities for contingent reasons not having to do with risk aversion.

If you had some kind of donor-advised fund that could grow to produce an arbitrarily large amount of good given enough time, that would present a conundrum. It would be exactly the same conundrum as the following puzzle: Suppose you can say a number and get that much money; which number do you say? In practice, however, our choices are limited. The rule against perpetuities prevents you from donating long after your lifetime, and opportunities to do good with your money may dry up faster than your money grows. Holden Karnofsky has some more practical considerations.

Comment author: Nisan 22 October 2014 05:59:05PM 1 point [-]

Claim: There are some deals you should make but can't.

Comment author: curiousepic 15 July 2014 06:50:21PM 4 points [-]

I'm an EA and interested in signing up for cryonics. After cryocrastinating for a few years (ok, I guess I don't get to say "after" until I actually sign up), I've realized that I should definitely sign up for life insurance, because of the ability to change the beneficiary. I place a low probability on cryonics working right now, but I can name a charity or a donor-advised fund as the beneficiary until I place a sufficiently high probability on suspension technology working. In the future, I can change it back if I change my mind, etc.

Are there any issues that might come into play here? If no one sees any flaws, I'm committing to signing up for life insurance with this plan in mind by or during the next open thread, and to making a more prominent post about this strategy for any EA+Cryonics people.

Comment author: Nisan 17 July 2014 01:14:36AM 0 points [-]

It might be worth looking into which life insurance companies are friendly to cryonics.

Comment author: Nisan 20 June 2014 05:22:26PM 7 points [-]

There's an interesting parallel with Modal Combat. Both approaches want to express the idea that "moral agents are those that cooperate with moral agents". Modal Combat resolves the circularity with diagonalization, and Eigenmorality resolves it by finding a stable distribution.
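
A minimal sketch of the "stable distribution" idea, with a made-up three-agent cooperation matrix: score each agent by the principal eigenvector of who-cooperates-with-whom, computed PageRank-style by power iteration.

    # Toy "eigenmorality": an agent's score is proportional to the scores of the
    # agents it cooperates with. The circular definition has a fixed point: the
    # principal eigenvector of the cooperation matrix, found by power iteration.
    import numpy as np

    # coop[i][j] = 1 if agent i cooperates with agent j (made-up example:
    # agents 0 and 1 cooperate with each other, agent 2 defects against everyone).
    coop = np.array([[0.0, 1.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0]])

    scores = np.ones(3) / 3
    for _ in range(100):
        scores = coop @ scores
        scores /= scores.sum()

    print(scores)  # the two cooperators share all the "morality"; the defector gets 0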

Comment author: Qiaochu_Yuan 20 June 2014 03:14:18AM 2 points [-]

This thread was prompted by this comment in the Open Thread.

Comment author: Nisan 20 June 2014 04:57:26PM 3 points [-]

That comment is about utilitarianism and doesn't mention "utility functions" at all.

Comment author: ShardPhoenix 20 May 2014 12:12:40AM 0 points [-]

Shouldn't AIXI include itself (for all inputs) recursively? If so, I don't think your sequence is well defined.

Comment author: Nisan 20 May 2014 01:54:55AM 4 points [-]

No, AIXI isn't computable, and its hypothesis class contains only computable environments, so it does not include itself as a hypothesis.

Comment author: Viliam_Bur 19 May 2014 06:27:33PM *  1 point [-]

the maximally unexpected sequence will be random

In a random sequence, AIXI would guess half of the bits correctly on average. My goal was to create a specific sequence where it couldn't guess any. Not just a random sequence, but specifically... uhm... "anti-inductive"? The exact opposite of lawful, where random is merely halfway opposed. I don't care about other possible predictors, only about AIXI.

Imagine playing rock-paper-scissors against someone who beats you all the time, whatever you do. That's worse than random. This sequence would bring the mighty AIXI to tears... but I suspect that to a human observer it would merely seem pseudo-random. And it's probably not useful for any goal other than making fun of AIXI.

Comment author: Nisan 20 May 2014 01:53:57AM *  3 points [-]

Ok. I still think the sequence is random in the algorithmic information theory sense; i.e., it's incompressible. But I understand you're interested in the adversarial aspect of the scenario.

You only need a halting oracle to compute your adversarial sequence (because that's what it takes to run AIXI). A super-Solomonoff inductor that inducts over all Turing machines with access to halting oracles would be able to learn the sequence, I think. The adversarial sequence for that inductor would require a higher oracle to compute, and so on up the ordinal hierarchy.
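
The diagonalization step itself is easy to sketch against any computable predictor (the simple majority-vote predictor below is a stand-in for AIXI, which, as noted, would really need a halting oracle to run): always emit the opposite of the prediction.

    # Adversarial ("anti-inductive") sequence construction. Against the predictor
    # used to build it, every single bit is predicted wrong; a stronger inductor
    # that can simulate this whole construction could still learn the sequence.
    def majority_predictor(history):
        """Guess the bit seen most often so far (0 on ties) -- stand-in for AIXI."""
        return 1 if sum(history) * 2 > len(history) else 0

    def adversarial_sequence(predictor, n):
        history = []
        for _ in range(n):
            history.append(1 - predictor(history))   # always output the opposite bit
        return history

    seq = adversarial_sequence(majority_predictor, 20)
    hits = sum(majority_predictor(seq[:i]) == seq[i] for i in range(len(seq)))
    print(seq, "correct predictions:", hits)          # 0 by construction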
