
Comment author: ahbwramc 04 May 2015 09:37:21PM 0 points

Sure, I understand the identity now of course (or at least I have more of an understanding of it). All I meant was that if you're introduced to Euler's identity at a time when exponentiation just means "multiply this number by itself some number of times", then it's probably going to seem really odd to you. How exactly does one multiply 2.718 by itself sqrt(-1)*3.14 times?

Comment author: Nisan 06 May 2015 02:39:01AM 0 points

You simply measure out a length such that, if you drew a square that many meters on a side, and also drew a square 3.1415 meters on a side, they would enclose no area between the two of them. Then evenly divide this length into meters, and for each meter write down 2.7183. Now multiply those numbers together, and you'll find they make -1. Easy!
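A quick numerical sanity check in Python (an illustrative sketch; the real resolution is that complex exponentiation is defined via rotation, not repeated multiplication):

    import cmath
    import math

    # e^(i*pi): cmath handles complex exponents directly, using the
    # definition e^(a+bi) = e^a * (cos b + i*sin b).
    z = cmath.exp(1j * math.pi)
    print(z)  # (-1+1.2246467991473532e-16j), i.e. -1 up to float error

    # A quarter turn around the unit circle:
    w = cmath.exp(1j * math.pi / 2)
    print(w)  # approximately 1j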

Comment author: Nisan 06 May 2015 02:37:18AM 22 points

Scott: I am bad at math.

Jonah: You are good at math.

Scott: No, I really am bad at math.

Jonah: No, you really are good at math.

Nisan: Esteemed colleagues, it is no use! If you continue this exchange, Scott will continue to believe they are bad at math, and Jonah will continue to disagree — forever!

Scott: Thank you for the information, but I still believe I am bad at math.

Jonah: And I still believe Scott is good at math.

Scott: And I still believe I am bad at math.

Nisan: Esteemed colleagues, give it up! Even if you persist in this exchange, neither of you will change your stated beliefs. In fact, I could truthfully repeat my previous sentence a hundred times (including the first time), and Scott would still believe they are bad at math, and Jonah would still disagree.

Scott: That's good to know, but for better or for worse, I still believe I am bad at math.

Jonah: And I still believe Scott is good at math.

Scott: Ah, but now I realize I am good at math after all!

Jonah: I agree, and what's more, I now know exactly how good at math Scott is!

Scott: And now I know that as well.
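The exchange above dramatizes Aumann's agreement theorem: honest Bayesians with a common prior who make their posteriors common knowledge cannot agree to disagree. A toy Python sketch of the mechanism, with an invented state space and invented partitions:

    from fractions import Fraction

    # Four equally likely states; E is the event "Scott is good at math".
    # Each agent knows only which cell of their own partition obtains.
    states = {1, 2, 3, 4}
    E = {1, 4}
    scott_partition = [{1, 2}, {3, 4}]
    jonah_partition = [{1, 2, 3}, {4}]
    true_state = 1

    def cell_of(partition, state):
        return next(c for c in partition if state in c)

    def posterior(cell, public):
        info = cell & public
        return Fraction(len(info & E), len(info))

    public = set(states)  # states consistent with all announcements so far
    for n in range(1, 6):
        ps = posterior(cell_of(scott_partition, true_state), public)
        # Scott's announcement reveals which of his cells are consistent with it:
        public &= set().union(*(c for c in scott_partition
                                if (c & public) and posterior(c, public) == ps))
        pj = posterior(cell_of(jonah_partition, true_state), public)
        public &= set().union(*(c for c in jonah_partition
                                if (c & public) and posterior(c, public) == pj))
        print(f"round {n}: Scott {ps}, Jonah {pj}")
        if ps == pj:
            break  # the posteriors are now common knowledge -- and equal

This prints disagreement in round 1 (1/2 vs 1/3) and agreement in round 2 (1/2 vs 1/2): each announcement shrinks the set of states consistent with public knowledge until the posteriors coincide.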

In response to Applause Lights
Comment author: Nisan 14 April 2015 05:28:22AM 1 point

"I am here to propose to you today that we need to balance the risks and opportunities of advanced Artificial Intelligence..."

Seven years later, this open letter was signed by leaders of the field. It's amusing how similar it is to the speech above, especially considering that it marked a major milestone for the field of AI safety.

Comment author: Evan_Gaensbauer 23 February 2015 02:58:00PM 16 points

I'm drafting a post for Discussion about how users on LessWrong who feel disconnected from the rationalist community can get involved and make friends and stuff.

What I've got so far:

* Where everybody went away from LessWrong, and why
* How you can keep up with great content/news/developments in rationality on sites other than LessWrong
* Get involved by going to meetups, and using the LW Study Hall

What I'm looking for:

  1. A post I can link to about why the LW Study Hall is great.

  2. Testimonials about how attending a meetup transformed your social or intellectual life. I know this is the case in the Bay Area, and I know life became much richer for some friends I have in, e.g., Vancouver or Seattle.

  3. A repository of ideas for meetups, and other socializing, if somebody planning or starting a meetup can't think of anything to do.

  4. How to become friends and integrate socially with other rationalists/LWers. A rationalist from Toronto visited Vancouver, noticed we were all friends, and asked how we had all become friends, rather than remaining a circle of individuals who share intellectual interests but not much else. The only suggestions we could think of were:

Be friends with a couple of people from the meetup for years beforehand, and hang out with everyone else for two years until it stops being awkward.

and

If you can get a 'rationalist' house with roommates from your LW meetup, you can force yourselves to rapidly become friends.

These are bad or impractical suggestions. If you have better ones to share, that'd be fantastic.

Please add suggestions for the numbered list. If relevant resources don't exist, notify me, and I/we/somebody can make them. If you think I'm missing something else, please let me know.

Comment author: Nisan 24 February 2015 05:22:11AM 1 point

Kaj Sotala wrote a PDF called "How to Run a Less Wrong Meetup" or something like that.

Comment author: Metus 15 December 2014 01:17:31AM 1 point

Interesting answer. Seeing as my personal giving comes entirely from pleasure, not from some kind of moral obligation, the argument for diversification is very strong.

Comment author: Nisan 15 December 2014 01:27:17AM 3 points

Ah. Well, then there doesn't seem to be anything to debate here. If you want to do what makes you happy, then do what makes you happy.

Comment author: Metus 15 December 2014 12:25:12AM 2 points

I want to open up the debate again whether to split donations or to concentrate them in one place.

One camp insists on donating all your money to the single charity with the highest current marginal effectiveness. The other camp claims that you should split donations, for reasons ranging from concerns like "if everyone thought like this" to "don't put all your eggs in one basket." My position is firmly in the second camp, as it seems obvious to me that you should split your donations just as you split your investments: because of risk.

But it is not obvious at all. If a utility function is concave, risk aversion arises completely naturally, and with it all the associated theory of how to avoid unnecessary risk. Utilitarians, however, seem to consider it natural that the moral utility function is completely linear in the number of people, QALYs, or any other measure of human well-being. Is there any theoretical reason risk aversion can arise if a utility function is completely linear in this sense?
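As a minimal sketch with invented payoffs: under a utility function linear in QALYs, expected utility ignores variance entirely, so risk cannot generate risk aversion; concavity is exactly what produces it.

    import math

    # A donation either saves 10 QALYs for sure (safe charity),
    # or saves 20 QALYs with probability 1/2 and 0 otherwise (risky).
    p, risky_payoff, safe_payoff = 0.5, 20.0, 10.0

    linear = lambda q: q               # utility linear in QALYs
    concave = lambda q: math.log1p(q)  # any strictly concave function works

    for name, u in [("linear", linear), ("concave", concave)]:
        eu_risky = p * u(risky_payoff) + (1 - p) * u(0.0)
        eu_safe = u(safe_payoff)
        print(f"{name}: risky={eu_risky:.3f} safe={eu_safe:.3f}")

    # linear: risky=10.000 safe=10.000 -> indifferent: no risk aversion
    # concave: risky=1.522 safe=2.398  -> prefers the sure thing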

In the same vein, there seems to be no theoretical reason for having time preference in a world of certainty. So if we agree that we should invest our donations and donate later, it seems there is no reason to actually donate at any particular time, since at any such time we could follow the same reasoning and push the donation even further into the future. Is the conclusion then to either donate now or not at all? Or should the answer be far more complicated, involving average and local economic growth and thus the relative impact of money donated now versus later?

Let the perfect not be the enemy of the good, but this rabbit hole seems to go deeper and deeper.

Comment author: Nisan 15 December 2014 01:22:14AM 3 points

I believe donating to the best charity is essentially correct, for the reason you state. You won't find much disagreement on Less Wrong or from GiveWell. Whether that's obvious or not is a matter of opinion, I suppose. Note that in GiveWell's latest top charity recommendations, they suggest splitting one's donation among the very best charities for contingent reasons not having to do with risk aversion.

If you had some kind of donor-advised fund that could grow to produce an arbitrarily large amount of good given enough time, that would present a conundrum. It would be exactly the same conundrum as the following puzzle: suppose you can say a number and get that much money; which number do you say? In practice, however, our choices are limited. The rule against perpetuities prevents you from donating long after your lifetime, and opportunities to do good with your money may dry up faster than your money grows. Holden Karnofsky offers some more practical considerations.
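A back-of-the-envelope sketch of that last point, with invented rates: waiting helps only while investment growth outpaces the decay of giving opportunities, so the regress terminates on its own unless returns dominate forever.

    # Money grows at rate r; the cost-effectiveness of the best
    # available charity decays at rate d as low-hanging fruit is picked.
    r = 0.05  # annual investment return (invented)
    d = 0.07  # annual decay in marginal cost-effectiveness (invented)

    def good_done(years_waited, donation=1.0):
        grown = donation * (1 + r) ** years_waited
        effectiveness = (1 - d) ** years_waited
        return grown * effectiveness

    for t in [0, 5, 10, 20]:
        print(f"wait {t:2d} years: {good_done(t):.3f} units of good")
    # With d > r the product shrinks every year, so donating now wins;
    # with d < r it grows without bound and the regress never stops.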

Comment author: Nisan 22 October 2014 05:59:05PM 0 points

Claim: There are some deals you should make but can't.

Comment author: curiousepic 15 July 2014 06:50:21PM 4 points

I'm an EA and interested in signing up for cryonics. After cryocrastinating for a few years (OK, I guess I don't get to say "after" until I actually sign up), I've realized that I should definitely sign up for life insurance, because of the ability to change the beneficiary. I place a low probability on cryonics working right now, but I can name a charity or a donor-advised fund as the beneficiary until I place a sufficient probability on suspension technology working. In the future, I can change it back if I change my mind, etc.

Any issues that might come up with this? If no one sees any flaws, I'm committing to signing up for life insurance with this plan in mind by or during the next open thread, and to making a more prominent post about this strategy for any EA+cryonics people.

Comment author: Nisan 17 July 2014 01:14:36AM 0 points

It might be worth looking into which life insurance companies are friendly to cryonics.

Comment author: Nisan 20 June 2014 05:22:26PM 7 points

There's an interesting parallel with Modal Combat. Both approaches want to express the idea that "moral agents are those that cooperate with moral agents". Modal Combat resolves the circularity with diagonalization, and Eigenmorality resolves it by finding a stable distribution.
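As a toy sketch with an invented cooperation matrix: "moral agents are those that cooperate with moral agents" becomes a fixed-point equation, solved by power iteration exactly as in PageRank.

    import numpy as np

    # C[i, j] = 1 means agent i cooperated with agent j.
    C = np.array([
        [0, 1, 1, 0],  # agent 0 cooperates with 1 and 2
        [1, 0, 1, 0],  # agent 1 cooperates with 0 and 2
        [1, 1, 0, 0],  # agent 2 cooperates with 0 and 1
        [1, 0, 0, 0],  # agent 3 cooperates only with 0
    ], dtype=float)

    score = np.ones(4) / 4
    for _ in range(100):
        score = C @ score      # your morality = the morality of whom you help
        score /= score.sum()   # renormalize to a distribution

    print(np.round(score, 3))  # [0.286 0.286 0.286 0.143]

Agents 0-2 end up with the highest scores; agent 3 scores half as much because it cooperates with only one moral agent.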

Comment author: Qiaochu_Yuan 20 June 2014 03:14:18AM 2 points

This thread was prompted by this comment in the Open Thread.

Comment author: Nisan 20 June 2014 04:57:26PM 3 points

That comment is about utilitarianism and doesn't mention "utility functions" at all.
