Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Nisan 25 December 2016 04:48:11AM 3 points [-]

It's a curious refutation. The author says that the people who are concerned about superintelligence are very smart, the top of the industry. They give many counterarguments, most of which can be easily refuted. It's as if they wanted to make people more concerned about superintelligence, while claiming to argue the opposite. And then they link directly to MIRI's donation page.

Comment author: Nisan 11 December 2016 09:43:51PM 2 points [-]

Maybe you'll cover this in a future post, but I'm curious about the outcomes of CFAR's past AI-specific workshops, especially "CFAR for ML Researchers" and the "Workshop on AI Safety Strategy".

Comment author: IlyaShpitser 17 January 2016 05:32:17AM 3 points [-]

Imagine an undirected graph where each node has a left and a right neighbor (so it's an infinitely long chain). You are on a node in this graph, and somewhere to the left or right of you is a hotel (50/50 chance of either direction). You don't know how far -- k steps, for an arbitrarily large k that an adversary picks after learning how you will look for the hotel.

The solution that takes 1 step left, 2 steps right, 3 steps left, etc. will find the hotel in O(k^2) steps. Is it possible to do better?

Comment author: Nisan 17 January 2016 08:10:25AM 4 points [-]

I can get O(k).
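One way to see that O(k) is achievable (this is the standard doubling strategy for the linear-search / "lost cow" problem, not necessarily what Nisan had in mind): double the excursion length on each pass instead of growing it by one. A minimal Python simulation, assuming unit steps and a hotel guaranteed to exist:

```python
def search_steps(hotel: int) -> int:
    """Simulate the doubling strategy: walk to +1, back to 0, to -2,
    back, to +4, back, to -8, ... until the hotel node is stepped on.
    `hotel` is the hotel's (nonzero) position; returns total unit steps."""
    steps, pos = 0, 0
    bound, direction = 1, 1  # first pass goes right
    while True:
        for target in (direction * bound, 0):  # out to the bound, then home
            while pos != target:
                pos += 1 if target > pos else -1
                steps += 1
                if pos == hotel:
                    return steps
        direction, bound = -direction, bound * 2
```

The round trips before the final pass form a geometric series dominated by the last term, so the total is at most a constant multiple of k (the classic bound for deterministic doubling is 9k).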

Comment author: Nisan 07 July 2015 05:04:53AM 2 points [-]

How much karma does one need to make a top-level post or meetup announcement? A new user wants to announce a new meetup in Kiev. If you want to help them out, you can upvote their comment.

Comment author: ahbwramc 04 May 2015 09:37:21PM 0 points [-]

Sure, I understand the identity now of course (or at least I have more of an understanding of it). All I meant was that if you're introduced to Euler's identity at a time when exponentiation just means "multiply this number by itself some number of times", then it's probably going to seem really odd to you. How exactly does one multiply 2.718 by itself sqrt(-1)*3.14 times?

Comment author: Nisan 06 May 2015 02:39:01AM 0 points [-]

You simply measure out a length such that, if you drew a square that many meters on a side, and also drew a square 3.1415 meters on a side, they would enclose no area between the two of them. Then evenly divide this length into meters, and for each meter write down 2.7183. Now multiply those numbers together, and you'll find they make -1. Easy!
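Joking aside, the parable encodes a real computation: the "length" is an x with x^2 + pi^2 = 0, i.e. x = i*pi meters, and writing down e for each meter and multiplying is e^(i*pi). One can check this numerically, e.g. with Python's cmath:

```python
import cmath

# The length from the parable: x such that x^2 + pi^2 = 0, i.e. x = i*pi
x = cmath.sqrt(-cmath.pi ** 2)

# "For each meter write down 2.7183, then multiply them together": e**x
value = cmath.exp(x)

assert abs(x - 1j * cmath.pi) < 1e-12
assert abs(value + 1) < 1e-12  # e^(i*pi) = -1, up to floating-point rounding
```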

Comment author: Nisan 06 May 2015 02:37:18AM 22 points [-]

Scott: I am bad at math.

Jonah: You are good at math.

Scott: No, I really am bad at math.

Jonah: No, you really are good at math.

Nisan: Esteemed colleagues, it is no use! If you continue this exchange, Scott will continue to believe they are bad at math, and Jonah will continue to disagree — forever!

Scott: Thank you for the information, but I still believe I am bad at math.

Jonah: And I still believe Scott is good at math.

Scott: And I still believe I am bad at math.

Nisan: Esteemed colleagues, give it up! Even if you persist in this exchange, neither of you will change your stated beliefs. In fact, I could truthfully repeat my previous sentence a hundred times (including the first time), and Scott would still believe they are bad at math, and Jonah would still disagree.

Scott: That's good to know, but for better or for worse, I still believe I am bad at math.

Jonah: And I still believe Scott is good at math.

Scott: Ah, but now I realize I am good at math after all!

Jonah: I agree, and what's more, I now know exactly how good at math Scott is!

Scott: And now I know that as well.

In response to Applause Lights
Comment author: Nisan 14 April 2015 05:28:22AM 2 points [-]

"I am here to propose to you today that we need to balance the risks and opportunities of advanced Artificial Intelligence..."

Seven years later, this open letter was signed by leaders of the field. It's amusing how similar it is to the speech above, especially considering that it actually marked a major milestone in the advancement of the field of AI safety.

Comment author: Evan_Gaensbauer 23 February 2015 02:58:00PM *  16 points [-]

I'm drafting a post for Discussion about how users on LessWrong who feel disconnected from the rationalist community can get involved and make friends and stuff.

What I've got so far:

  * Where everybody went away from LessWrong, and why
  * How you can keep up with great content/news/developments in rationality on sites other than LessWrong
  * Get involved by going to meetups, and using the LW Study Hall

What I'm looking for:

  1. A post I can link to about why the LW Study Hall is great.

  2. Testimonials about how attending a meetup transformed your social or intellectual life. I know this is the case in the Bay Area, and I know life became much richer for some friends I have in, e.g., Vancouver and Seattle.

  3. A repository of ideas for meetups, and other socializing, if somebody planning or starting a meetup can't think of anything to do.

  4. How to become friends and integrate socially with other rationalists/LWers. A rationalist from Toronto visited Vancouver, noticed we were all friends, and asked us how we all became friends, rather than remaining a circle of individuals who share intellectual interests but not much else. The only suggestions we could think of were:

Be friends with a couple of people from the meetup for years beforehand, and hang out with everyone else for two years until it stops being awkward.


If you can get a 'rationalist' house with roommates from your LW meetup, you can force yourselves to rapidly become friends.

These are bad or impractical suggestions. If you have better ones to share, that'd be fantastic.

Please add suggestions for the numbered list. If relevant resources don't exist, notify me, and I/we/somebody can make them. If you think I'm missing something else, please let me know.

Comment author: Nisan 24 February 2015 05:22:11AM 1 point [-]

Kaj Sotala wrote a pdf called "How to run a Less Wrong meetup" or something like that.

Comment author: Metus 15 December 2014 01:17:31AM 1 point [-]

Interesting answer. Seeing as my personal giving is done entirely for pleasure, not out of some kind of moral obligation, the argument for diversification is very strong.

Comment author: Nisan 15 December 2014 01:27:17AM 3 points [-]

Ah. Well, then there doesn't seem to be anything to debate here. If you want to do what makes you happy, then do what makes you happy.

Comment author: Metus 15 December 2014 12:25:12AM *  2 points [-]

I want to open up the debate again whether to split donations or to concentrate them in one place.

One camp insists on donating all your money to a single charity with the highest current marginal effectiveness. The other camp claims that you should split donations for various reasons ranging from concerns like "if everyone thought like this" to "don't put all your eggs in one basket." My position is firmly in the second camp as it seems to me obvious that you should split your donations just as you split your investments, because of risk.

But it is not obvious at all. If a utility function is concave, risk aversion arises completely naturally, and with it all the associated theory of how to avoid unnecessary risk. Utilitarians, however, seem to consider it natural that the moral utility function is completely linear in the number of people, QALYs, or any other measure of human well-being. Is there any theoretical reason risk aversion can arise if a utility function is completely linear in this way?
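To make the question concrete: in expected-utility theory, risk aversion is exactly concavity of the utility function (Jensen's inequality). A small sketch in Python (the numbers and the log utility are arbitrary illustrative choices):

```python
import math

def expected_utility(utility, outcomes, probs):
    """Expected utility of a lottery: sum of p * u(x)."""
    return sum(p * utility(x) for x, p in zip(outcomes, probs))

# A risky intervention: 50/50 chance of producing 10 or 1000 QALYs,
# versus a safe one producing a sure 505 QALYs (same expected value).
risky_outcomes, probs = [10, 1000], [0.5, 0.5]

# Linear utility: the lottery and the sure thing are valued identically.
linear = lambda q: q
assert expected_utility(linear, risky_outcomes, probs) == linear(505)

# Concave utility (log is one arbitrary choice): the sure thing wins,
# i.e. risk aversion appears -- this is just Jensen's inequality.
concave = math.log
assert expected_utility(concave, risky_outcomes, probs) < concave(505)
```

If the moral utility function really is linear, the two valuations coincide for every lottery, so the investment-style argument for diversification has nothing to grip.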

In the same vein, there seems to be no theoretical reason for having time preference in a certain world. So if we agree that we should invest our donations and donate them later, it seems there is no reason to actually donate them at any particular time, since at any such time we could follow the same reasoning and push the donation even further into the future. Is the conclusion then to either donate now or not at all? Or should the answer be far more complicated, involving average and local economic growth and thus the relative impact of money donated now versus later?

Let the perfect not be the enemy of the good, but this rabbit hole seems to go deeper and deeper.

Comment author: Nisan 15 December 2014 01:22:14AM 3 points [-]

I believe donating to the best charity is essentially correct, for the reason you state. You won't find much disagreement on Less Wrong or from GiveWell. Whether that's obvious or not is a matter of opinion, I suppose. Note that in GiveWell's latest top charity recommendations, they suggest splitting one's donation among the very best charities for contingent reasons not having to do with risk aversion.

If you had some kind of donor-advised fund that could grow to produce an arbitrarily large amount of good given enough time, that would present a conundrum. It would be exactly the same conundrum as the following puzzle: suppose you can say a number and get that much money; which number do you say? In practice, however, our choices are limited. The rule against perpetuities prevents you from donating long after your lifetime, and opportunities to do good with your money may dry up faster than your money grows. Holden Karnofsky has some more practical considerations.
